Potential Hessian Ascent: The Sherrington-Kirkpatrick Model

Authors: David Jekel, Juspreet Singh Sandhu, Jonathan Shi

102 pages, 1 table
License: CC BY 4.0

Abstract: We provide the first iterative spectral algorithm to find near-optima of a random quadratic objective over the discrete hypercube. The algorithm is a randomized Hessian ascent in the solid cube, where we modify the objective by subtracting a specific instance-independent potential function [Chen et al., Communications on Pure and Applied Mathematics, 76(7), 2023]. This extends Subag's algorithmic program of Hessian ascent from the sphere [Subag, Communications on Pure and Applied Mathematics, 74(5), 2021] to the more complex geometry of the cube. Utilizing tools from free probability theory, we construct an approximate projector onto the top eigenspaces of the Hessian with well-behaved diagonal entries, and use it as the covariance matrix for the random increments. With high probability, the empirical distribution of the iterates approximates the solution to the primal version of the Auffinger-Chen SDE [Auffinger and Chen, Communications in Mathematical Physics, 335, 2015]. We then bound the change to the modified objective function for every iterate via a Taylor expansion whose derivatives are controlled using various Gaussian concentration bounds and smoothness properties of (a semiconcave regularization of) the Fenchel-Legendre dual to the solution of the Parisi PDE. These results lay the groundwork for demonstrating the (possible) existence of low-degree sum-of-squares certificates over high-entropy step distributions for a relaxed version of the Parisi formula [Open Question 1.8, arXiv:2401.14383].

Submitted to arXiv on 05 Aug. 2024
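
The abstract's core loop can be illustrated with a minimal toy sketch: take Gaussian steps inside the solid cube [-1, 1]^n whose covariance is a projector onto the top eigenspaces of the Hessian of a random quadratic objective. This is only a caricature under simplifying assumptions: it uses an exact eigendecomposition rather than the paper's free-probability construction of an approximate projector, and it omits the instance-independent potential subtracted from the objective; all names and parameters below are hypothetical.

```python
import numpy as np

def toy_hessian_ascent(n=200, steps=50, step_size=0.02, k_frac=0.05, seed=0):
    """Toy randomized Hessian ascent for H(x) = x^T A x with A ~ GOE.

    Illustrative only: the covariance of each Gaussian increment is an
    exact projector onto the top-k eigenspace of the (here constant)
    Hessian, and iterates are clipped to remain in the solid cube.
    """
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n, n))
    A = (G + G.T) / np.sqrt(2 * n)        # GOE-normalized coupling matrix

    def objective(x):
        return x @ A @ x                   # random quadratic objective

    # For a quadratic objective the Hessian is constant: 2A.
    eigvals, eigvecs = np.linalg.eigh(2 * A)
    k = max(1, int(k_frac * n))
    V = eigvecs[:, -k:]                    # top-k eigenvectors
    P = V @ V.T                            # projector used as step covariance

    x = np.zeros(n)                        # start at the center of the cube
    for _ in range(steps):
        # Gaussian increment with covariance proportional to P
        g = P @ rng.standard_normal(n)
        x = np.clip(x + np.sqrt(step_size) * g, -1.0, 1.0)
    return x, objective(x)

x, val = toy_hessian_ascent()
```

In the actual algorithm the Hessian of the modified objective changes along the trajectory (because of the subtracted potential), so the projector is recomputed at each iterate; the constant-Hessian shortcut above is purely for brevity.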
