Description: An example implementation of the conjugate gradient method, an optimization algorithm whose advantages will be apparent to anyone who uses it. A minimal sketch of the linear case follows this entry. Platform: |
Size: 364544 |
Author:鲤鱼 |
Hits:
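For reference, a minimal MATLAB sketch of the linear conjugate gradient iteration, assuming a small symmetric positive definite test problem (the matrix A, vector b, and starting point are illustrative, not taken from the download):

% Linear conjugate gradient: solves A*x = b for symmetric positive definite A,
% i.e. minimizes f(x) = 0.5*x'*A*x - b'*x. Test problem is an assumption.
A = [4 1; 1 3];          % SPD test matrix (illustrative)
b = [1; 2];
x = zeros(2, 1);         % starting point
r = b - A*x;             % residual = negative gradient
p = r;                   % first search direction
for k = 1:numel(b)
    alpha = (r'*r) / (p'*A*p);       % exact step length along p
    x = x + alpha*p;
    r_new = r - alpha*(A*p);
    beta = (r_new'*r_new) / (r'*r);  % conjugacy coefficient
    p = r_new + beta*p;              % new A-conjugate direction
    r = r_new;
end
disp(x)   % should be close to A\b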
Description: A direct search method for optimization problems; unlike other methods, it needs no gradient information. A sketch of one such method follows this entry. Platform: |
Size: 855040 |
Author:wg |
Hits:
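One gradient-free direct search is compass (pattern) search, which probes the coordinate directions and shrinks the step when none improves. The test function, starting point, and tolerance below are assumptions for illustration:

% Compass search: needs only function values, no gradient.
f = @(x) (x(1)-1)^2 + 2*(x(2)+0.5)^2;   % illustrative test function
x = [0; 0];  step = 1;  tol = 1e-6;
while step > tol
    improved = false;
    for d = [eye(2), -eye(2)]           % +/- coordinate directions
        if f(x + step*d) < f(x)
            x = x + step*d;  improved = true;
        end
    end
    if ~improved, step = step/2; end    % shrink when no direction helps
end
disp(x)   % approx [1; -0.5]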
Description: The gradient method for optimization, written as a program during a lab class; its effect can be seen directly. A minimal sketch follows this entry. Platform: |
Size: 1024 |
Author:bellepdt |
Hits:
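A minimal fixed-step gradient descent sketch; the test function, step size, and iteration cap are illustrative assumptions:

% Fixed-step gradient descent on an assumed quadratic.
f    = @(x) x(1)^2 + 10*x(2)^2;
grad = @(x) [2*x(1); 20*x(2)];
x = [1; 1];  t = 0.05;                 % constant step size (assumption)
for k = 1:200
    g = grad(x);
    if norm(g) < 1e-8, break; end
    x = x - t*g;                       % move against the gradient
end
disp(x)   % approaches [0; 0]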
Description: Courseware introducing optimization methods in great detail, covering everything from the simplest one-dimensional search to the conjugate gradient and steepest descent methods. A sketch of a one-dimensional search follows this entry. Platform: |
Size: 59392 |
Author:lgl |
Hits:
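The one-dimensional search such material usually starts from can be illustrated with golden-section search; the bracket [a, b] and test function below are assumptions:

% Golden-section search for a one-dimensional minimum on [a, b].
phi = (sqrt(5) - 1) / 2;               % golden ratio constant, about 0.618
f = @(t) (t - 2)^2 + 1;                % illustrative unimodal function
a = 0;  b = 5;  tol = 1e-6;
while b - a > tol
    t1 = b - phi*(b - a);
    t2 = a + phi*(b - a);
    if f(t1) < f(t2)
        b = t2;                        % minimum lies in [a, t2]
    else
        a = t1;                        % minimum lies in [t1, b]
    end
end
disp((a + b)/2)                        % approx 2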
Description: Unconstrained optimization: the steepest descent method (also called the gradient method) is one of the earliest methods for finding extrema of functions of several variables. A sketch with an exact line search follows this entry. Platform: |
Size: 2048 |
Author:anytry |
Hits:
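For a quadratic objective the steepest descent step length has a closed form, which makes the method easy to sketch; the matrix Q, vector c, and starting point are illustrative assumptions:

% Steepest descent with an exact line search, specialized to the quadratic
% f(x) = 0.5*x'*Q*x - c'*x, where the optimal step has a closed form.
Q = [2 0; 0 10];  c = [0; 0];          % illustrative problem data
x = [10; 1];
for k = 1:500
    g = Q*x - c;                       % gradient of the quadratic
    if norm(g) < 1e-8, break; end
    alpha = (g'*g) / (g'*Q*g);         % exact minimizer along -g
    x = x - alpha*g;
end
disp(x)   % approaches [0; 0], zigzagging along the way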
Description: A C program for the conjugate gradient algorithm, suitable for function optimization problems with n design variables; the dimension can be changed for a different number of design variables. Only the main function differs from the other unconstrained-optimization programs; the remaining parts are essentially the same. A sketch of the nonlinear variant follows this entry. Platform: |
Size: 2048 |
Author:AresOne |
Hits:
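A sketch of the nonlinear conjugate gradient method (Fletcher-Reeves form) for n design variables, given in MATLAB rather than C for brevity; the test function, backtracking rule, and dimension are assumptions:

% Nonlinear conjugate gradient (Fletcher-Reeves) for n variables.
n = 4;
f    = @(x) sum((x - (1:n)').^2);       % assumed test function, minimizer [1;2;...;n]
grad = @(x) 2*(x - (1:n)');
x = zeros(n, 1);  g = grad(x);  d = -g;
for k = 1:100
    if norm(g) < 1e-8, break; end
    t = 1;                              % simple backtracking (Armijo) line search
    while f(x + t*d) > f(x) + 1e-4*t*(g'*d)
        t = t/2;
    end
    x = x + t*d;
    g_new = grad(x);
    beta = (g_new'*g_new) / (g'*g);     % Fletcher-Reeves coefficient
    d = -g_new + beta*d;
    g = g_new;
end
disp(x')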
Description: The archive contains source code for several common methods for unconstrained optimization problems: the variable rotation method (variable_rotation.m), the steepest descent method (steepest_descent.m), the modified Newton method (modified_newton.m), and the conjugate gradient method (conjugate_gradient.m). In addition, coefficient_matrix.m builds the objective function's coefficient matrix, minval.m computes the minimum value, and gradient.m computes the gradient. A sketch of a damped Newton step follows this entry. Platform: |
Size: 1024 |
Author:zhuyuanli |
Hits:
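The modified (damped) Newton idea can be sketched as a full Newton direction plus step halving until the objective decreases; the convex test function below is an assumption, not the archive's code:

% Damped ("modified") Newton method: take the Newton direction but halve the
% step until the function value decreases.
f    = @(x) log(cosh(x(1))) + x(2)^2;   % smooth convex test function
grad = @(x) [tanh(x(1)); 2*x(2)];
hess = @(x) [sech(x(1))^2, 0; 0, 2];
x = [3; -1];
for k = 1:50
    g = grad(x);
    if norm(g) < 1e-10, break; end
    d = -hess(x) \ g;                   % Newton direction
    t = 1;
    while f(x + t*d) > f(x)             % damping: halve until descent
        t = t/2;
    end
    x = x + t*d;
end
disp(x)   % approx [0; 0]; without damping, the first full step overshoots badly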
Description: Uses MATLAB to implement the conjugate gradient method and Newton's method, to become familiar with classic optimization methods. A sketch of Newton's method follows this entry. Platform: |
Size: 1024 |
Author:亮看世界 |
Hits:
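A sketch of the undamped Newton iteration, using the classic Rosenbrock function as an assumed test problem and a start near the minimizer:

% Newton's method with a hand-coded gradient and Hessian; converges
% quadratically near the minimizer. Test function is an assumption.
f    = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
grad = @(x) [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
              200*(x(2) - x(1)^2)];
hess = @(x) [1200*x(1)^2 - 400*x(2) + 2, -400*x(1);
             -400*x(1),                   200];
x = [1.2; 1.2];                         % start near the solution [1; 1]
for k = 1:20
    g = grad(x);
    if norm(g) < 1e-10, break; end
    x = x - hess(x) \ g;                % full Newton step
end
disp(x)   % approx [1; 1]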
Description: A conjugate gradient optimization method based on the Armijo-Goldstein criterion, implemented in MATLAB by the author; suitable as an introductory exercise in optimization methods. A sketch of the Armijo-Goldstein step test follows this entry. Platform: |
Size: 1024 |
Author:Victor_Z |
Hits:
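The Armijo-Goldstein criterion accepts a step whose decrease is neither too small nor too large; a sketch on a single steepest descent step, with the test function and all constants assumed:

% Armijo-Goldstein step acceptance along a descent direction d:
% accept t when f(x) + (1-c)*t*slope <= f(x + t*d) <= f(x) + c*t*slope.
f    = @(x) x(1)^2 + 4*x(2)^2;
grad = @(x) [2*x(1); 8*x(2)];
x = [2; 1];  g = grad(x);  d = -g;      % steepest descent direction
c = 0.25;  t = 1;                       % Goldstein constant, 0 < c < 1/2
slope = g'*d;                           % directional derivative (negative)
for trial = 1:50
    fn = f(x + t*d);
    if fn > f(x) + c*t*slope            % insufficient decrease: shrink step
        t = t/2;
    elseif fn < f(x) + (1-c)*t*slope    % step too cautious: enlarge it
        t = 2*t;
    else
        break;                          % both Goldstein conditions hold
    end
end
x = x + t*d;
disp(x)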
Description: Genetic algorithms provide a general framework for solving nonlinear programming problems that does not depend on the specific problem domain. Their advantage is that the problem parameters are encoded as chromosomes and the optimization operates on those encodings rather than on the parameters themselves, so the search is not restricted by the function's constraint structure. The search starts from a set of candidate solutions rather than a single individual, giving an implicit parallel search that greatly reduces the chance of getting trapped in a local minimum. Moreover, the algorithm does not rely on gradient information and does not require the objective function to be continuous or differentiable, making it suitable for large-scale, nonlinear combinatorial optimization problems that traditional search methods struggle to solve. A minimal sketch follows this entry. Platform: |
Size: 33792 |
Author:FZenjoys |
Hits:
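A minimal real-coded genetic algorithm sketch showing the select/crossover/mutate loop the description refers to; every parameter here (population size, rates, bounds, test function) is an assumption for illustration:

% Minimal real-coded genetic algorithm: tournament selection, arithmetic
% crossover, Gaussian mutation, and elitism. All settings are assumptions.
rng(0);
f = @(x) sum(x.^2, 2);                  % objective to minimize (per row)
npop = 30;  nvar = 2;  ngen = 100;
lo = -5;  hi = 5;
pop = lo + (hi - lo)*rand(npop, nvar);  % random initial population
for gen = 1:ngen
    fit = f(pop);
    [~, bestIdx] = min(fit);
    % Tournament selection: the better of two random individuals wins.
    idx = randi(npop, npop, 2);
    [~, w] = min([fit(idx(:,1)), fit(idx(:,2))], [], 2);
    parents = pop(idx(sub2ind(size(idx), (1:npop)', w)), :);
    % Arithmetic crossover between consecutive parents.
    a = repmat(rand(npop, 1), 1, nvar);
    kids = a.*parents + (1 - a).*parents([2:end, 1], :);
    % Gaussian mutation with small probability.
    mask = rand(npop, nvar) < 0.1;
    kids(mask) = kids(mask) + 0.3*randn(nnz(mask), 1);
    kids(1, :) = pop(bestIdx, :);       % elitism: keep the best individual
    pop = kids;
end
[~, i] = min(f(pop));
disp(pop(i, :))                         % near [0 0]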