Consensus-Based Optimization
Consensus-based optimization (CBO) is a multi-agent, metaheuristic, derivative-free optimization method that can globally minimize nonconvex, nonsmooth functions and is amenable to theoretical analysis. The optimizing agents (particles) move over the optimization domain, driven by a drift towards an instantaneous consensus point, which is computed as a convex combination of the particle locations, weighted by the cost function according to Laplace's principle, and which serves as an approximation to a global minimizer. The dynamics is further perturbed by a random vector field that favors exploration and whose variance is a function of the distance of the particles from the consensus point; in particular, as soon as consensus is reached, the stochastic component vanishes. Based on the experimentally supported intuition that CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we present a novel technique for proving global convergence to the global minimizer for a rich class of objective functions. This result reveals the internal mechanisms of CBO that are responsible for the success of the method; in particular, we prove that CBO performs a convexification of a very large class of optimization problems as the number of optimizing agents tends to infinity. We further present formulations of CBO over compact hypersurfaces and prove convergence to global minimizers for nonconvex, nonsmooth optimization problems on the hypersphere. We conclude the talk with several numerical experiments, which show that CBO scales well with the dimension and is extremely versatile. To quantify the performance of this novel approach, we show that CBO performs essentially as well as ad hoc state-of-the-art methods that use higher-order information on challenging problems in signal processing and machine learning, namely the training of neural networks, the phase retrieval problem, and robust subspace detection.
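For concreteness, the particle dynamics sketched above is commonly written as a system of stochastic differential equations. The display below is a reconstruction in the notation standard in the CBO literature, not a formula taken from this abstract; the symbols λ (drift strength), σ (noise strength), α (weight parameter), N (number of particles), and f (the cost function) are generic assumptions. By Laplace's principle, the weighted average v_α concentrates near the currently best particle as α grows.

```latex
% Consensus point: convex combination of particle locations, weighted by the
% cost f via Laplace's principle (lambda, sigma, alpha are generic parameters):
\[
  v_\alpha(X_t) \;=\; \frac{\sum_{i=1}^{N} X_t^i \, e^{-\alpha f(X_t^i)}}
                           {\sum_{i=1}^{N} e^{-\alpha f(X_t^i)}}
\]
% Each particle drifts towards the consensus point and is perturbed by a noise
% term whose magnitude vanishes once the particle reaches consensus:
\[
  \mathrm{d}X_t^i \;=\; -\lambda\,\bigl(X_t^i - v_\alpha(X_t)\bigr)\,\mathrm{d}t
  \;+\; \sigma\,\bigl\|X_t^i - v_\alpha(X_t)\bigr\|\,\mathrm{d}B_t^i
\]
```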
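A minimal Euler–Maruyama discretization of this scheme can be sketched in a few lines of Python. All concrete choices below (the parameter values lam, sigma, alpha, dt, the particle count, and the Rastrigin test function) are illustrative assumptions for a toy run, not settings reported in the talk.

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=100, n_steps=500,
                 lam=1.0, sigma=0.7, alpha=30.0, dt=0.01, seed=0):
    """Toy consensus-based optimization loop (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))  # initial swarm
    for _ in range(n_steps):
        fx = f(X)
        # Consensus point: particle locations weighted by exp(-alpha * f),
        # as suggested by Laplace's principle; shifting by min(fx) avoids
        # numerical underflow without changing the convex combination.
        w = np.exp(-alpha * (fx - fx.min()))
        v = (w[:, None] * X).sum(axis=0) / w.sum()
        # Drift towards the consensus point plus a random perturbation whose
        # magnitude scales with the distance to v, so the stochastic
        # component vanishes as soon as consensus is reached.
        diff = X - v
        dist = np.linalg.norm(diff, axis=1, keepdims=True)
        X = X - lam * diff * dt \
              + sigma * dist * np.sqrt(dt) * rng.standard_normal(X.shape)
    return v

# Rastrigin function: a standard nonconvex, highly multimodal test problem
# with global minimizer at the origin.
def rastrigin(X):
    return 10.0 * X.shape[1] + (X**2 - 10.0 * np.cos(2 * np.pi * X)).sum(axis=1)

print(cbo_minimize(rastrigin, dim=2))  # should land near [0, 0]
```

The noise here is isotropic, matching the description above; in higher dimensions the CBO literature often replaces it by a componentwise (anisotropic) variant, which is one reason the method scales well with the dimension.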