This repository contains Jupyter Notebooks implementing and analyzing numerical algorithms to find the global minimum of a non-convex, oscillatory mathematical function. The project focuses on comparing the computational performance, convergence rates, and stability of two classical iterative methods: Gradient Descent and Newton's Method.
The objective is to minimize the following multivariable function:

$$f(x, y) = \frac{1}{2 + \cos(x + y)} + \frac{(x - y)^2 + (x + y)^2}{20}$$
Function Properties:
- Oscillatory Nature: The trigonometric term $\frac{1}{2 + \cos(x + y)}$ creates a landscape fraught with local minima and ridges, challenging the algorithms' ability to find the global optimum without getting stuck.
- Coercivity: The quadratic expression $\frac{(x - y)^2 + (x + y)^2}{20}$ ensures coercivity: as $x$ and $y$ grow toward infinity, the function value also approaches infinity, acting as a "funnel" that guarantees the existence of a global minimum near the origin (see the code sketch below).
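For reference, here is a minimal NumPy sketch of this objective, assuming the two terms above are simply summed (the authoritative definition lives in the notebooks):

```python
import numpy as np

def f(p):
    """Assumed objective: oscillatory term plus coercive quadratic term."""
    x, y = p
    return 1.0 / (2.0 + np.cos(x + y)) + ((x - y) ** 2 + (x + y) ** 2) / 20.0

print(f(np.array([0.0, 0.0])))  # 1/3 at the origin under this assumed form
```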
Gradient Descent:
- Flow: Uses first-order information (the gradient vector $g = \nabla f(x)$) to descend the function's surface. The position is updated iteratively by moving in the direction opposite to the gradient, scaled by a fixed learning rate (step size) $\alpha = 0.1$ (a minimal loop is sketched after this list).
- Stopping Criterion: The algorithm halts when the Euclidean norm of the step falls below the tolerance threshold: $\|x_{new} - x\| < \epsilon$, where $\epsilon = 10^{-6}$.
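A minimal sketch of this loop under the same assumed objective, using a central-difference gradient so the example stays self-contained (the notebooks presumably evaluate their own gradient):

```python
import numpy as np

def f(p):  # assumed objective, as in the sketch above
    x, y = p
    return 1.0 / (2.0 + np.cos(x + y)) + ((x - y) ** 2 + (x + y) ** 2) / 20.0

def grad(p, h=1e-6):
    """Central-difference approximation of the gradient of f."""
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return g

def gradient_descent(x0, alpha=0.1, eps=1e-6, max_iter=100_000):
    x = np.asarray(x0, dtype=float)
    trayectoria = [x.copy()]
    for _ in range(max_iter):
        x_new = x - alpha * grad(x)          # step against the gradient
        trayectoria.append(x_new.copy())
        if np.linalg.norm(x_new - x) < eps:  # stopping criterion
            break
        x = x_new
    return x_new, f(x_new), len(trayectoria) - 1, np.array(trayectoria)

minimo, valor_minimo, iteraciones, trayectoria = gradient_descent([5, -2])
```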
Newton's Method:
- Flow: Incorporates second-order information by evaluating both the gradient $g$ and the Hessian matrix $H = \nabla^2 f(x)$ at each iteration (a minimal loop is sketched after this list).
- Performance Insight: Instead of explicitly inverting the Hessian matrix (`np.linalg.inv`), the Newton step $\Delta$ is obtained by solving the linear system $H \cdot \Delta = g$. Solving the system limits floating-point error propagation and avoids the extra work of forming the inverse, a standard practice in numerical analysis.
- Stopping Criterion: Identical to Gradient Descent: $\|x_{new} - x\| < \epsilon$.
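A minimal sketch of the analogous Newton loop, again self-contained: the gradient and Hessian are approximated with finite differences here purely for illustration, whereas the notebooks evaluate their own $g$ and $H$. The key line is `np.linalg.solve(H, g)`, which never forms the explicit inverse:

```python
import numpy as np

def f(p):  # assumed objective, as in the sketch above
    x, y = p
    return 1.0 / (2.0 + np.cos(x + y)) + ((x - y) ** 2 + (x + y) ** 2) / 20.0

def grad(p, h=1e-6):
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return g

def hess(p, h=1e-4):
    """Central differences applied to the gradient, one column at a time."""
    H = np.zeros((p.size, p.size))
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        H[:, i] = (grad(p + e) - grad(p - e)) / (2.0 * h)
    return H

def newton(x0, eps=1e-6, max_iter=100):
    x = np.asarray(x0, dtype=float)
    trayectoria = [x.copy()]
    for _ in range(max_iter):
        delta = np.linalg.solve(hess(x), grad(x))  # solves H @ delta = g
        x_new = x - delta
        trayectoria.append(x_new.copy())
        if np.linalg.norm(x_new - x) < eps:        # same stopping criterion
            break
        x = x_new
    return x_new, f(x_new), len(trayectoria) - 1, np.array(trayectoria)

minimo, valor_minimo, iteraciones, trayectoria = newton([1.0, -0.5])
```

Note that a raw Newton step can move toward saddle points in regions where the Hessian is indefinite, which is one reason the starting point matters on an oscillatory landscape.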
Below is a breakdown of the core update performed on each iteration, emphasizing numerical stability over a naive translation of the math:
```python
# Gradient Descent state update
x_new = x - alpha * g

# Newton's Method state update: solve the linear system rather than invert H
delta = np.linalg.solve(H, g)  # solves H @ delta = g
x_new = x - delta
```

Both algorithms are tested under identical conditions to compare their robustness:
- Inputs: A variety of starting coordinates $x_0$ (e.g., `[-50, 5]`, `[5, -2]`), a tolerance ($\epsilon = 10^{-6}$), and an iteration limit.
- Outputs Generated:
  - `minimo`: the final optimal coordinates $[x^*, y^*]$.
  - `valor_minimo`: the function value evaluated at that minimum.
  - `iteraciones`: the total number of iterations performed before the stopping criterion is met.
  - `trayectoria`: an array tracing the step-by-step movement of the iterate across the plane, suitable for plotting over an `np.meshgrid` grid (see the plotting sketch after this list).
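A minimal sketch of how `trayectoria` might be drawn over a contour grid built with `np.meshgrid`; the path below is a placeholder array standing in for whatever either optimizer returned:

```python
import numpy as np
import matplotlib.pyplot as plt

def f(p):  # assumed objective, as in the sketches above
    x, y = p
    return 1.0 / (2.0 + np.cos(x + y)) + ((x - y) ** 2 + (x + y) ** 2) / 20.0

# grid of function values for the contour plot
X, Y = np.meshgrid(np.linspace(-6, 6, 400), np.linspace(-6, 6, 400))
Z = f((X, Y))

# placeholder trajectory of shape (iterations + 1, 2); use the optimizer's output instead
trayectoria = np.array([[5.0, -2.0], [3.0, -1.2], [1.2, -0.5], [0.2, -0.1], [0.0, 0.0]])

plt.contour(X, Y, Z, levels=40)
plt.plot(trayectoria[:, 0], trayectoria[:, 1], "o-", color="red", label="iterates")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Descent path over the objective's contours")
plt.legend()
plt.show()
```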
A comparative, data-driven analysis is assembled with Pandas DataFrames at the end of the execution (a sketch follows the list below). It makes the efficiency gap explicit:
- Quadratic Convergence: Newton's Method frequently converges in orders of magnitude fewer iterations.
- Linear/Sub-linear Convergence: Gradient Descent struggles in flatter valleys, requiring significantly more iterations to satisfy the stopping criterion.
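A minimal sketch of how such a comparison table might be assembled; the column names and numbers below are placeholders, not results taken from the notebooks:

```python
import pandas as pd

# placeholder rows purely to illustrate the table's shape
comparacion = pd.DataFrame({
    "metodo": ["Gradient Descent", "Newton"],
    "x0": ["[5, -2]", "[5, -2]"],
    "iteraciones": [800, 6],         # illustrative counts, not measured
    "valor_minimo": [0.334, 0.333],  # illustrative values, not measured
})
print(comparacion.to_string(index=False))
```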
To run the notebooks locally, ensure you have a Python environment with Jupyter and the required libraries:

```bash
pip install numpy pandas matplotlib jupyter
```

Then launch `jupyter notebook` from the repository root and open `Algoritmos.ipynb` or `optimization.ipynb`.