Introduction
Quasi-Newton methods, like the steepest descent method, require only the gradient of the objective function at each iteration. By measuring changes in the gradient, they build a model of the objective function that is good enough to produce superlinear convergence. This makes them much more effective than steepest descent, especially on difficult problems. Moreover, because quasi-Newton methods need no second-derivative information, they are sometimes more efficient than Newton's method. Today, optimization software includes many quasi-Newton algorithms for solving unconstrained, constrained, and large-scale optimization problems.
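To make the idea concrete, here is a minimal sketch of BFGS, the most widely used quasi-Newton update, in Python/NumPy. The function names and the simple backtracking line search are illustrative choices for this sketch, not taken from this package; it maintains an inverse-Hessian approximation H using only gradient differences, with no second derivatives.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=200):
    """Sketch of BFGS: update an inverse-Hessian approximation H
    from gradient changes alone (no second derivatives needed)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                        # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                       # quasi-Newton search direction
        # Backtracking line search enforcing the Armijo condition
        alpha, c = 1.0, 1e-4
        while f(x + alpha * p) > f(x) + c * alpha * (g @ p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g      # step and change in gradient
        sy = s @ y
        if sy > 1e-10:                   # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)   # BFGS inverse-Hessian update
        x, g = x_new, g_new
    return x

# Usage example: minimize the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs(f, grad, np.array([-1.2, 1.0])))  # converges near [1, 1]
```

Note that the only derivative information consumed is grad(x); the superlinear behavior comes entirely from the H update, which is what distinguishes quasi-Newton methods from both steepest descent and Newton's method.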