A NEW METHOD FOR GLOBAL OPTIMIZATION

This paper presents a new method for global optimization. We use exact quadratic regularization to transform multimodal problems into the problem of maximizing a vector norm on a convex set. Quadratic regularization often allows one to convert a multimodal problem into a unimodal problem. For this, we use the shift of the feasible region along the bisector of the positive orthant. We use only local search (the primal-dual interior point method) and a dichotomy method to search for a global extremum of multimodal problems. Comparative numerical experiments have shown that this method is very efficient and promising.
1 University of Chemical Engineering, Ukraine, anivkos@ua.fm
© The authors. Published by EDP Sciences, SMAI 2021. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Article published online by EDP Sciences and available at https://www.esaim-proc.org or https://doi.org/10.1051/proc/202171121 (ESAIM: PROCEEDINGS AND SURVEYS).


Introduction
Global optimization has ubiquitous applications in economics, finance, project optimization, planning, computer graphics, management, scheduling, informatics, and other engineering and applied sciences. These difficult applied problems can be transformed into multimodal optimization problems in finite-dimensional space. Such problems contain many local minima and belong to the NP-hard class. It is necessary to develop new global optimization methods for the solution of these problems.
In the past 30 years, many researchers have devoted considerable effort to the solution of global optimization problems [1,5,7,16]. Initially, hopes were placed on the branch and bound method [1]; however, this method can only be used for problems of small dimension. Stochastic methods were then developed [2]. Sometimes these methods find solutions close to optimal on test problems, but in many cases the solutions are far from optimal. Semidefinite optimization methods use convex relaxation to solve general quadratic and polynomial problems [3]. Convex relaxation is a progressive idea, but it yields only estimates of the solutions.
We propose an exact quadratic regularization method based on exact transformations. This method transforms multimodal problems into the problem of maximizing a vector norm on a convex set. We show by examples when, after such a transformation, the multimodal problem becomes unimodal. In the general case, the problem of the maximum-norm vector on a convex set is multimodal, but for some convex regions such a problem reduces to a unimodal one: for example, when the convex set is a rectangular parallelepiped or a regular polyhedron, or when the curvature of its convex surface is greater than the curvature of the ball at each point. We sometimes observe this when the feasible region is shifted along the bisector of the positive orthant. We introduce two parameters and two new variables when transforming multimodal problems. These parameters are easy to find, especially for general quadratic problems. One new variable is found using the dichotomy method; the minimum value of this variable is determined from the solution of a convex problem. Next, for each value of the new variable, the transformed problem is solved by the primal-dual interior point method [4]. If the solution of the problem reaches the surface of the ball at the minimum value of the new variable, then a solution of the multimodal problem is found. We also use the shift of the feasible region of the problem to convert the multimodal problem into a unimodal one.
The method EQR uses results of convex analysis; this is also true for the methods of concave, reverse-convex, and DC optimization [16]. The advantage of the method EQR is that it can be used to solve general nonlinear optimization problems, while quadratic regularization yields a simpler transformed problem.
We have solved more than 350 known difficult multimodal test problems by the method of exact quadratic regularization [15]. Comparative numerical experiments showed the advantage of this method. It uses only local search and dichotomy, which allows us to solve multimodal problems with 1000 variables or more.

Problems of global optimization
A mathematical optimization problem has the form

min{f_0(x) | f_i(x) ≤ 0, i = 1, ..., m, x ∈ E_n},   (1)

where all functions f_i(x) are twice continuously differentiable and x is a vector in the n-dimensional Euclidean space E_n. Let a solution of problem (1) exist, let its feasible region be bounded, and let x* be the point of the global minimum of (1). The constraints of problem (1) may contain equalities f_i(x) = 0, i = 1, ..., m; these are equivalent to the inequalities f_i(x) ≤ 0, i = 1, ..., m, together with −Σ_{i=1}^m f_i(x) ≤ 0. If the variables of problem (1) can take only integer values, this is equivalent to the constraint Σ_{i=1}^n (1 − cos(2πx_i)) ≤ 0; if they are only Boolean, we add the corresponding constraints. If the variables take only discrete values x ∈ {z_1, ..., z_m}, z ≥ 0, where z_i ≠ z_j for all i ≠ j, analogous constraints are added. Thus all classes of finite-dimensional optimization problems can be transformed into problem (1). A solution of problem (1) can be found by a local method if we successfully choose an initial point. We use the primal-dual interior point method [4] to search for a global minimum of problem (1).
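As a quick numerical illustration (a sketch, not part of the original method), the integer-variable constraint above can be checked directly: each term 1 − cos(2πx_i) is nonnegative and vanishes only at integer x_i, so the sum is ≤ 0 exactly on integer points.

```python
import math

def integrality_violation(x):
    # sum_{i=1}^n (1 - cos(2*pi*x_i)); each term is >= 0 and equals 0
    # only when x_i is an integer, so the sum is <= 0 exactly on
    # integer points
    return sum(1.0 - math.cos(2.0 * math.pi * xi) for xi in x)

print(integrality_violation([2.0, -3.0, 7.0]))   # integer point: ~0
print(integrality_violation([2.0, -3.5, 7.0]))   # non-integer point: > 0
```

The same idea underlies the Boolean and discrete-value constraints mentioned above: a smooth nonnegative penalty that vanishes exactly on the admissible set is folded into the inequality constraints of (1).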

Exact quadratic regularization
Quadratic regularization allows one to convert nonconvex functions and regions into convex ones. For example, the nonconvex function 1 − cos(2πx_i) becomes convex after adding the quadratic term 40||x||^2. A nonconvex region becomes a convex region after quadratic regularization for any d.
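The convexity claim for this example is easy to verify numerically: for g(x) = 1 − cos(2πx) + 40x², the second derivative g''(x) = 4π² cos(2πx) + 80 is bounded below by 80 − 4π² ≈ 40.5 > 0. A minimal check:

```python
import math

def g_second_derivative(x):
    # g(x) = 1 - cos(2*pi*x) + 40*x**2
    # g''(x) = 4*pi**2*cos(2*pi*x) + 80 >= 80 - 4*pi**2 > 0,
    # so g is strictly convex on the whole real line
    return 4.0 * math.pi ** 2 * math.cos(2.0 * math.pi * x) + 80.0

# sample g'' on a grid; its smallest value is 80 - 4*pi^2 ~ 40.5,
# attained where cos(2*pi*x) = -1
worst = min(g_second_derivative(0.001 * k) for k in range(-5000, 5001))
print(worst > 0)
```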
We transform problem (1) into the following problem:

min{||z||^2 | f_0(x) + s ≤ ||z||^2, f_i(x) ≤ 0, i = 1, ..., m},   (2)

where z = (x_1, ..., x_n, x_{n+1}). The value s is chosen so that f_0(x*) + s ≥ ||x*||^2. We call such a quadratic regularization exact, since problem (2) is equivalent to problem (1).

Lemma 1.
If the condition f_0(x) + s ≥ ||x||^2 holds, then the first constraint of problem (2) is active.
Proof. After transforming the first inequality f_0(x) + s ≤ ||z||^2 to the form f_0(x) + s − ||x||^2 ≤ x_{n+1}^2, we see that the left side of this inequality is nonnegative and the value x_{n+1}^2 should be minimal. This occurs when the equality f_0(x) + s − ||x||^2 = x_{n+1}^2 holds. Thus problem (1) is transformed into the minimization of the squared norm of a vector.

A value r > 0 exists such that all functions f_i(x) + r||z||^2 are convex on the bounded feasible region of problem (1). This follows from the fact that for sufficiently large r the Hessians of these functions are positive definite matrices (matrices with a dominant main diagonal). We use quadratic regularization to transform problem (1) into the following problem:

min{d | f_0(x) + s + (r − 1)||z||^2 ≤ d, f_i(x) + r||z||^2 ≤ d, i = 1, ..., m, r||z||^2 = d}.   (3)

If we replace the equality r||z||^2 = d by the inequality r||z||^2 ≤ d, then this problem becomes convex:

min{d | f_0(x) + s + (r − 1)||z||^2 ≤ d, f_i(x) + r||z||^2 ≤ d, i = 1, ..., m, r||z||^2 ≤ d}.   (4)

Thus, if the last constraint of problem (4) is active at a minimum point, then its solution coincides with the solution of problem (1). Quadratic regularization can in this way transform a multimodal problem into a unimodal one: a multimodal problem with three local minima can be transformed so that the corresponding convex problem of the form (4) has only one local minimum, which coincides with the global minimum of the original problem.
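Lemma 1 can be illustrated numerically. The sketch below uses a hypothetical one-dimensional objective f_0(x) = (x² − 1)² and our own choice s = 2 (so that f_0(x) + s ≥ x² on the box considered); with the first constraint active, ||z||² = f_0(x) + s, so minimizing ||z||² recovers s plus the global minimum of f_0:

```python
def f0(x):
    # hypothetical multimodal objective: global minima at x = +-1 with value 0
    return (x * x - 1.0) ** 2

s = 2.0  # chosen so that f0(x) + s >= x**2 on the box [-2, 2]

best = None
for k in range(-2000, 2001):
    x = 0.001 * k
    # with the first constraint active (Lemma 1):
    # x_{n+1}^2 = f0(x) + s - x^2, hence ||z||^2 = f0(x) + s
    z_norm_sq = f0(x) + s
    if best is None or z_norm_sq < best[0]:
        best = (z_norm_sq, x)

print(best)  # minimal ||z||^2 = s + min f0 = 2.0, attained near x = +-1
```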
In general, the last constraint of problem (4) will not be active at the minimum point, and then its solution will not coincide with the solution of problem (1). In problem (3), we search for the minimal radius d of a ball r||z||^2 = d containing the feasible convex region. We will show that problem (3) is equivalent to the following problem

max{||z||^2 | f_0(x) + s + (r − 1)||z||^2 ≤ d, f_i(x) + r||z||^2 ≤ d, i = 1, ..., m},   (5)

if r||z||^2 = d. Problem (5) contains two new parameters s, r and two new variables x_{n+1}, d. For given parameters s, r and a fixed value of the variable d, we need to find the value of z that maximizes its squared norm over the convex feasible region.
Proof. Since the first constraint of problem (5) is active (see Lemma 1), we have f_0(x) + s + (r − 1)||z||^2 = d. For solutions of problem (5) the conditions r||z^0||^2 = d^0 and r||z*||^2 = d* hold, so we obtain f_0(x^0) + s = d^0/r and f_0(x*) + s = d*/r; hence smaller values of d correspond to smaller values of f_0(x).

The following theorem states the equivalence of problems (1) and (5). We use the notation B = {z | r||z||^2 ≤ d}, and S denotes the feasible region of problem (5).
Theorem 2. Let (z*, d*) be a solution of problem (5) for which the conditions r||z*||^2 = d* and S ⊆ B hold. Then x* is the global minimum point of problem (1).
Proof. Let x* be the global minimum point of problem (1); then the corresponding pair (z*, d*) is a solution of problem (5). Let us show that the condition S ⊆ B holds for this solution. Assume the contrary: there exists a point x^0 ∈ S with z^0 ∉ B, which means ||z^0||^2 > ||z*||^2. Then from the preceding conditions we obtain an inequality with a positive value on the left and a negative value on the right, which is not possible. So we have S ⊆ B.

Let x^0 be a local minimum point of problem (1) and (z^0, d^0) the corresponding solution of problem (5). Let us show that in this case x* ∉ B^0 = {z | r||z||^2 = d^0}, from which it follows that d* > d^0. Assume the contrary, that z* ∈ B^0; then the corresponding equalities hold. Subtracting the first equality from the second, we obtain an inequality (6). After substituting the values of x_{n+1} into inequality (6) and collecting similar terms, we arrive at an inequality that yields a contradiction. Thus, if a local minimum point lies on the boundary of the set B, then the global minimum point x* ∉ B.
In problem (5), it is necessary to find the minimum value of the scalar d for which the condition r||z||^2 = d holds. We start problem (5) with the minimal value d^0 (the solution of problem (4)) and increase d until the equality r||z||^2 = d is reached.
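The outer search on d can be sketched as follows. This is a toy illustration, not the paper's implementation: the objective, the parameter choices s = 2, r = 2, the bracket [3, 6], and the brute-force grid search replacing the primal-dual interior point inner solver are all our own assumptions.

```python
def f0(x):
    return (x * x - 1.0) ** 2   # hypothetical objective, global minimum 0 at x = +-1

s, r = 2.0, 2.0   # regularization parameters chosen for this toy instance

def max_norm_sq(d):
    # inner problem (5): maximize ||z||^2 = x^2 + x_{n+1}^2 subject to
    # f0(x) + s + (r - 1)*||z||^2 <= d, with x restricted to [-2, 2]
    best = float("-inf")
    for k in range(-400, 401):
        x = 0.005 * k
        bound = (d - s - f0(x)) / (r - 1.0)   # largest feasible ||z||^2 for this x
        if bound >= x * x:                    # x_{n+1}^2 must be nonnegative
            best = max(best, bound)
    return best

lo, hi = 3.0, 6.0   # bracket: r*max_norm_sq(lo) < lo and r*max_norm_sq(hi) > hi
for _ in range(60):  # dichotomy until r*||z||^2 = d
    mid = 0.5 * (lo + hi)
    if r * max_norm_sq(mid) < mid:
        lo = mid
    else:
        hi = mid

d_star = 0.5 * (lo + hi)
print(d_star, d_star / r - s)   # d* ~ 4, recovered global minimum value ~ 0
```

At the touching value d*, the active first constraint gives f_0(x) + s = d*/r, so the global minimum value of f_0 is recovered as d*/r − s.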
We show that problem (5) can be unimodal. Consider an example problem and transform it into the form (5) with s = 10, r = 3 and d = 33.4; the transformed problem has only one local maximum.
In the general case, problem (5) is multimodal. We use a modified interior point method to solve it. Let z^k be a point of local maximum of problem (5). We solve the sequence of convex problems

max{(p^j)^T z | f_0(x) + s + (r − 1)||z||^2 ≤ d, f_i(x) + r||z||^2 ≤ d, i = 1, ..., m},  j = 1, ..., n,   (7)

where the p^j are given direction vectors. Then z^k is the point of the global maximum of problem (5) if z^k is the solution of problems (7) for all j. In this case the set S ⊆ B, and it follows from Theorem 2 that z^k is a solution of problem (5). We include the solutions of problems (7) in the interior point method.
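The containment certificate S ⊆ B from Theorem 2 can be checked directly when the feasible region is a polytope, since the convex function ||z||² attains its maximum over a polytope at a vertex. A sketch on a hypothetical box (the box, its center, and r are our own choices, not from the paper):

```python
import itertools

r = 1.0
center = (2.0, 1.0)  # hypothetical box S = {z : |z_i - center_i| <= 1}
vertices = [tuple(c + sgn for c, sgn in zip(center, signs))
            for signs in itertools.product((-1.0, 1.0), repeat=2)]

# candidate global maximizer of ||z||^2 over S: the largest-norm vertex
z_k = max(vertices, key=lambda v: sum(t * t for t in v))
d = r * sum(t * t for t in z_k)   # ball B = {z : r*||z||^2 <= d} through z_k

# S is contained in B iff every vertex lies inside the ball
contained = all(r * sum(t * t for t in v) <= d + 1e-12 for v in vertices)
print(z_k, contained)
```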

Method of exact quadratic regularization
The solution of problem (4) is a feasible point for problem (5). Let d_1, ..., d_k be an increasing sequence of values of d. Then the solution of problem (5) with d = d_i is feasible for problem (5) with d = d_{i+1}. We solve the sequence of problems (5) for increasing values of the variable d by the modified primal-dual interior point method, obtaining a central path to the surface of the ball r||z||^2 = d.
The essence of the method is in solving the sequence of problems (5) for fixed values of the variable d. We find the minimum value of the variable d for which the condition r||z||^2 = d holds. We illustrate this method by the following example. This multimodal problem has 4 local minima. After quadratic regularization, we obtain the transformed problem and solve the corresponding convex optimization problem (4). Let f_0(x) + s + (r − 1)||z||^2 = d and x_{n+1} := x_{n+1} + q (q > 0); then f_0(x) + s + (r − 1)||z||^2 > d, and the return to the feasible region is achieved by decreasing the value f_0(x). We choose the value of the parameter s so that the constraint f_0(x) + s + (r − 1)||z||^2 = d is active. Otherwise, we increase the value of the parameter s until the first constraint is active.
We will show how the algorithm works on a simple example. This problem has two local minima, x = 1 and x = 4, and x = 4 is the global minimum. We choose the parameter values s = 40, r = 20 and transform the problem into the form (5). We solve a convex problem of the form (4) to reduce d to the value d = 480. We obtain the solution x* = (4, 2.828) of problem (9) for the value d* = 480, and the condition r||x*||^2 − d* = 1.9E−08 holds at the point x*. Thus we have found the global minimum point x* = 4 of problem (8). Note that for the local minimum point x = 1 the value of d is 780.

Shift of the feasible region
We show that after shifting a convex feasible region along the bisector of the positive orthant, the transformed problem can become unimodal. Consider problem (10) with z_1, z_2 ≥ 0. This problem is multimodal if a certain inequality holds; otherwise, it is unimodal. If the condition e^T(z_2 − z_1) ≠ 0 (e = (1, ..., 1)) holds, then there exists h > 0 such that the shifted problem is unimodal, and the value h is easy to find. Thus, the shift of the feasible region allows one to leave a local maximum. After the shift and quadratic regularization, problem (10) becomes equivalent to problem (11).

Let the condition ||z_2||^2 > ||z_1||^2 be satisfied. In this case, the point z_2 is a solution of problem (10). We show that the solution of the convex problem is at the point z_2: forming the difference of the first constraint at the points z_1 and z_2 shows that the minimum d in problem (11) is achieved at the point z_2. We have proved the following statement.

Lemma 2. If the condition e^T(z_2 − z_1) ≠ 0 is satisfied, then problem (10) is transformed into a unimodal one by a shift of the feasible region and quadratic regularization.
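Lemma 2's shift argument can be illustrated with two hypothetical candidate maxima z_1, z_2 of equal norm: since ||z + he||² = ||z||² + 2h e^T z + nh², the shift changes the difference of squared norms by exactly 2h e^T(z_2 − z_1), which is nonzero whenever e^T(z_2 − z_1) ≠ 0.

```python
def norm_sq(v):
    return sum(t * t for t in v)

# hypothetical points: equal norms, so the max-norm problem on the
# segment [z1, z2] has two maxima (it is multimodal)
z1, z2 = (-2.0, 1.0), (2.0, 1.0)
assert norm_sq(z1) == norm_sq(z2)

h = 4.0  # shift along the bisector e = (1, 1) of the positive orthant
shift = lambda v: tuple(t + h for t in v)

# after the shift the tie is broken by 2*h*e^T(z2 - z1) = 2*4*4 = 32
gap = norm_sq(shift(z2)) - norm_sq(shift(z1))
print(gap)   # positive: z2 is now the unique maximizer
```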
Consider the multimodal problem (12). We transform problem (12) into an equivalent problem (13).

Theorem 3. There exists h > 0 for which problem (13) is unimodal if the conditions e^T(z_i − z_j) ≠ 0 for all i ≠ j hold, where the z_i are the vertices of the polyhedron A.
Proof. It is known that a solution to the problem (12) is located at one of the vertices of the polyhedron A.
Assume the contrary, that problem (13) is multimodal, and let z_1, z_2 be points of its local maxima. Then problem (13) is also multimodal on the segment [z_1, z_2]. But this contradicts Lemma 2: there exists h > 0 such that problem (13) is unimodal on the segment [z_1, z_2]. This contradiction proves the theorem.
Consider the following example. This problem has 3 local minima, and after quadratic regularization the transformed problem also has 3 local minima. But after the shift of coordinates the problem becomes unimodal. We propose the following algorithm with a shift of the feasible region.
Find the point (z^0, d^0) and check the condition r||z^0||^2 = d^0. If this condition holds, then problem (1) is solved.
Otherwise, solve problems (5) for increasing values of d until r||z*||^2 = d*. If d* is the smallest possible value or S ⊆ B, then problem (1) is solved; stop.
Consider an example of problem (12) for which the conditions of Theorem 3 are not satisfied. This problem has 2 local maxima and remains multimodal for any shift of the feasible region. We use the change of coordinates x_1 = z_1, x_2 = 2z_1 + z_2, after which the problem is still multimodal; however, after a shift of the feasible region by the value h = 4 it becomes unimodal.

Numerical experiments
In this section, we examine the practical performance of the method EQR. Each new method of global optimization is tested on known test problems; these problems have been studied for over 20 years. Of specific interest are test problems for which the global minimum point is unknown (for example, the test functions Egg Holder, Rana, and others). Better new methods lead to better solutions for these test problems, and such improved results across many test problems confirm the effectiveness of a new method. We used the method EQR to solve more than 350 known difficult test problems (see [15]). Some results are shown in Table 1.
The method EQR yields significantly better results for all problems in Table 1. We found an optimal solution for the problem Ryoo. This list of solutions can be continued: the method EQR shows the best results for almost all difficult test problems, and the method of exact quadratic regularization made it possible to find the best solutions for all test problems with unknown solutions.

Conclusion
We have substantiated the method of exact quadratic regularization and explained how it works on simple examples. This method uses only local search and a dichotomy algorithm in one variable. We have solved many difficult multimodal test problems and obtained the best results for all test problems where the exact solution is unknown. The method needs only a local optimization solver, and it therefore allows solving large-scale multimodal problems. It is easy to apply to any multimodal optimization problem. Comparative numerical experiments have shown that this method is very efficient.