Description: Finds the global minimum of a function: global minimization of a contrast function with random restarts. The data are assumed whitened (i.e. with identity covariance matrix). The output is such that Wopt*x are the independent sources. Platform: |
Size: 1504 |
Author:李国齐 |
Hits:
Description: Finds the global minimum of a function: global minimization of a contrast function with random restarts. The data are assumed whitened (i.e. with identity covariance matrix). The output is such that Wopt*x are the independent sources. Platform: |
Size: 1024 |
Author:李国齐 |
Hits:
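The packaged source for the two entries above is not shown; as a rough illustration of the random-restart strategy they describe (the multimodal toy function and all names here are hypothetical stand-ins, not the packaged code):

```python
import numpy as np
from scipy.optimize import minimize

def global_minimize(contrast, dim, n_restarts=20, seed=0):
    """Minimize `contrast` from several random starting points and
    keep the best local minimum found (a cheap global strategy)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        x0 = rng.standard_normal(dim)          # random restart
        res = minimize(contrast, x0, method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
    return best

# A multimodal toy function standing in for the contrast function.
f = lambda w: np.sum(w**4 - 8 * w**2 + 5 * w)
print(global_minimize(f, dim=3).x)
```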
Description: Computes extrema by gradient descent, which can only find a local minimum point; adjusting the step size can help reach the global minimum. Platform: |
Size: 1024 |
Author:宗丹 |
Hits:
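A minimal gradient-descent loop sketching the idea in the entry above (the test function and step size are illustrative, not the packaged code):

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Plain gradient descent; converges to a *local* minimum.
    A larger step can jump out of shallow basins, which is the
    step-size tuning the description refers to."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = x - step * grad(x)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# f(x) = x^4 - 3x^2 + x has two local minima; the starting point and
# step size decide which basin we land in.
grad_f = lambda x: 4 * x**3 - 6 * x + 1
print(gradient_descent(grad_f, x0=[2.0], step=0.01))
```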
Description: Solves the minimum spanning tree with a greedy algorithm. In general a greedy strategy may fail to find the global optimum, but it does find the best choice available in the current situation. Platform: |
Size: 1024 |
Author:陈超 |
Hits:
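The original code is not shown; a minimal greedy (Kruskal-style) MST sketch with union-find, for illustration:

```python
def kruskal_mst(n, edges):
    """Greedy MST: sort edges by weight and add each edge that does
    not create a cycle, tracked with a union-find structure.
    edges: list of (weight, u, v) with vertices 0..n-1."""
    parent = list(range(n))

    def find(x):                      # path-compressing find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # u and v are in different trees
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]
print(kruskal_mst(4, edges))   # [(0, 1, 1), (1, 2, 2), (2, 3, 3)]
```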
Description: Finds the optimal solution (minimum) of a cubic function using four methods: global search, bisection, golden-section search, and the Fibonacci method. The function can be specified in the program, and a GUI is provided. Platform: |
Size: 18432 |
Author:尚凯 |
Hits:
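Of the four methods listed above, golden-section search is the most distinctive; a minimal sketch on a unimodal interval (the cubic and the interval here are illustrative):

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search: shrink [a, b] by the golden ratio each
    step, reusing one interior evaluation; needs f unimodal on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # ~0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                           # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                 # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

cubic = lambda x: x**3 - 6 * x**2 + 4 * x + 12
print(golden_section(cubic, 1, 6))  # minimum near x ~ 3.63
```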
Description: The Rosenbrock function is a non-convex function used as a performance test problem for optimization algorithms, introduced by Rosenbrock (1960). It is also known as Rosenbrock's valley or Rosenbrock's banana function.
The global minimum is inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial; converging to the global minimum, however, is difficult. Platform: |
Size: 1024 |
Author:suci ariani |
Hits:
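For reference, the standard two-variable form (a = 1, b = 100), with its global minimum at (1, 1):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(p, a=1.0, b=100.0):
    """f(x, y) = (a - x)^2 + b*(y - x^2)^2, global minimum at (a, a^2)."""
    x, y = p
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

# Finding the valley is easy; tracking it all the way to (1, 1) is the hard part.
res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(res.x)   # ~ [1.0, 1.0]
```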
Description: Stretches contrast in the image and normalizes it from 0 to 1.
The main difference between this function and standard stretching functions is that
a standard function finds the global minimum and maximum of the image, then uses
low and high threshold values to normalize it (values below LowTHR are
set to LowTHR and values above HighTHR are set to HighTHR). This function
uses threshold values that are NEXT to the minimum and maximum. Thus, we can exclude
the image background (which is normally zero) and find the minimum value of the image itself.
The same consideration applies to the high threshold: we exclude the first global maximum because, if it is
a spike, we have a better chance with the next value, and if it is not a spike, the
next value is normally quite close to the maximum (assuming a smooth image), so our error is small. Platform: |
Size: 1024 |
Author:nourul |
Hits:
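The packaged source is not shown; a numpy sketch of the idea above, normalizing against the values *next to* the global extremes (taking exactly the second-lowest and second-highest unique values is an assumption):

```python
import numpy as np

def stretch_contrast(img):
    """Normalize an image to [0, 1], but clip at the values next to
    the global minimum and maximum. This skips a zero background
    (global min) and a single hot-pixel spike (global max)."""
    vals = np.unique(img)              # sorted unique intensities
    low, high = vals[1], vals[-2]      # second-lowest / second-highest
    out = np.clip(img.astype(float), low, high)
    return (out - low) / (high - low)

img = np.array([[0, 0, 10], [12, 14, 200]], dtype=float)  # 0 = background, 200 = spike
print(stretch_contrast(img))
```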
Description: Uses Differential Evolution to find the minimum of a complex equation: a global minimum in the range [-10, 10] for a rugged function whose search space is R^5. Platform: |
Size: 1377280 |
Author:孔焕军 |
Hits:
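A sketch with SciPy's built-in implementation; the rugged test function below (Rastrigin) is a stand-in, since the actual equation in the package is not shown:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin: a standard rugged test function with many local minima,
# standing in for the package's equation. Search space is R^5.
def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-10, 10)] * 5                  # [-10, 10] in each of 5 dimensions
res = differential_evolution(rastrigin, bounds, seed=1)
print(res.x, res.fun)                     # global minimum is 0 at the origin
```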
Description: A multi-extremum function containing several local minimum points; the goal is to find the global minimum. This is an unconstrained static random-search optimization problem, with detailed explanatory comments. Platform: |
Size: 3072 |
Author:zhengyi |
Hits:
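The packaged code is not shown; a bare-bones static random search over a box (the function and bounds are illustrative):

```python
import numpy as np

def random_search(f, bounds, n_samples=100_000, seed=0):
    """Static random search: sample the box uniformly and keep the
    best point. Crude, but it cannot get stuck in a local minimum."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best_x, best_f = None, np.inf
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# A function with many local minima; the global minimum is at the origin.
f = lambda x: np.sum(x**2) + 10 * np.sum(np.sin(3 * x) ** 2)
print(random_search(f, bounds=[(-5, 5)] * 2))
```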
Description: K-means clustering, an unsupervised machine-learning algorithm that finds k clusters in a given dataset.
First, k initial points are chosen at random as centroids; each point in the dataset is then assigned to the cluster of its nearest centroid, the centroid of every cluster is recomputed, and the process repeats until the centroids stop changing.
In K-means, k is a user-specified parameter, and the user does not know in advance whether a given k is good. Moreover, K-means always converges, but the clustering can still be poor, because the algorithm may converge to a local minimum rather than the global minimum.
A variant is bisecting K-means: it starts with all points in one cluster, splits that cluster in two, and then repeatedly chooses one cluster to split, picking whichever split most reduces the SSE.
Yahoo has a PlaceFinder API that can convert between addresses and latitude/longitude. Platform: |
Size: 2048 |
Author:iihaozl |
Hits:
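A compact numpy sketch of the basic loop described above (not the packaged code; the bisecting variant and the PlaceFinder part are omitted):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Basic K-means: random initial centroids, assign each point to
    its nearest centroid, recompute centroids, stop when stable.
    May converge to a local minimum of the SSE, not the global one."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):   # centroids stopped moving
            break
        centroids = new
    return centroids, labels

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centroids, labels = kmeans(X, k=2)
print(centroids)
```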
Description: Rosenbrock function optimization is an unconstrained problem. Its global minimum lies at the bottom of a smooth, long, narrow, parabola-shaped valley, and the function gives the optimizer very little information, so the global minimum is hard to find. Exploiting this property, an improved PSO algorithm (PSO-R) is proposed. It introduces a trigonometric factor: the periodic oscillation of the trigonometric function gives each particle stronger oscillatory behavior, enlarges each particle's search space, and guides the particles toward the neighborhood of the global minimum, so the algorithm avoids premature convergence to a local optimum and finds the global minimum of the Rosenbrock function. Extensive experiments show that the algorithm optimizes well, offering a new approach to certain Rosenbrock-like optimization problems in some domains. Platform: |
Size: 948224 |
Author:丁晓花 |
Hits:
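The paper's trigonometric factor is not specified in the entry above, so the following is a sketch of standard global-best PSO on Rosenbrock, not the PSO-R variant:

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iter=500, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO: each particle is pulled toward its own
    best position (c1) and the swarm's best (c2), with inertia w."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2, 2, (n_particles, dim))     # positions
    v = np.zeros_like(x)                           # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()             # swarm best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, f(g)

rosen = lambda p: sum(100 * (p[i+1] - p[i]**2)**2 + (1 - p[i])**2
                      for i in range(len(p) - 1))
print(pso(rosen, dim=2))   # should approach (1, 1)
```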
Description: Particle swarm optimization is used to search for the global minimum of the objective function and find the global optimum. Platform: |
Size: 3072 |
Author:水阳小孬板 |
Hits:
Description: BP neural networks easily get trapped in local minima, while the PSO algorithm performs very well on unconstrained nonlinear function optimization: it can usually find the global optimum directly, and even when it cannot, it ends up close to the global best. Of course, basic PSO can also fall into local extrema, and no particularly effective and economical remedy for this has been found yet. This example trains a neural network with PSO and BP together: the network is first trained with PSO, and BP then performs a fine-grained search over a small region. The essence of training a neural network with PSO is to treat the output error function (i.e. the energy function) as the objective function and let PSO search globally for its minimum. Platform: |
Size: 3072 |
Author:Katri |
Hits:
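The packaged PSO/BP code is not shown; as a sketch of the idea above (treat the network's error as the energy function, search globally first, refine locally), the toy below uses a best-of-random global stage standing in for PSO and scipy's gradient-based minimize standing in for the BP fine search:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: learn XOR with a 2-2-1 network. The flat vector `w` holds
# all weights and biases; mse(w) is the "energy function" to minimize.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mse(w):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)            # hidden layer
    out = h @ W2 + b2                   # linear output
    return np.mean((out - y) ** 2)

rng = np.random.default_rng(0)
# Global stage (stand-in for PSO): best of many random weight vectors.
cands = rng.uniform(-3, 3, (2000, 9))
w0 = min(cands, key=mse)
# Local stage (stand-in for BP): gradient-based fine search from w0.
res = minimize(mse, w0, method="BFGS")
print(res.fun)                          # near 0 if training succeeded
```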
Description: The PSO particle swarm optimization algorithm minimizes the total energy of a power system, finding the global minimum. Platform: |
Size: 17408 |
Author:雨雪霁 |
Hits: