Description: Uses MATLAB to implement some algorithms commonly used in numerical analysis, such as the Newton, Gauss, and Romberg methods. Platform: |
Size: 2048 |
Author:duhongye |
Hits:
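The Newton's method named in the entry above can be sketched as follows. This is a minimal illustration in Python (the listing itself is MATLAB code, which is not shown here); the function, tolerance, and iteration cap are illustrative assumptions, not the package's actual code.

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method for a root of f: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:   # residual small enough: accept x as the root
            return x
        x -= fx / fprime(x)
    return x

# Example: root of x^2 - 2, i.e. sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Convergence is quadratic near a simple root, which is why a handful of iterations usually suffices.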
Description: Solves systems of equations by three methods: Gauss, Newton, and Jacobi.
Complete source code that compiles. Platform: |
Size: 634880 |
Author:kyo |
Hits:
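Of the three methods the entry above names, the Jacobi iteration is the simplest to sketch. A minimal Python version (the listing's own code is not shown; the example matrix and tolerances are assumptions for illustration):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k), with A = D + R."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)           # diagonal entries of A
    R = A - np.diagflat(D)   # off-diagonal part of A
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D        # all components updated from old x
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant example system with exact solution [1, 2]
A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = jacobi(A, b)
```

Jacobi converges when A is strictly diagonally dominant, as in this example; unlike Gauss-Seidel, each sweep uses only values from the previous iterate.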
Description: Originally written for a PB2000 Casio computer, a program to solve power flow by five solution methods (Gauss, Gauss-Seidel, Newton-Raphson polar, Newton-Raphson Cartesian, and Decoupled). Platform: |
Size: 5120 |
Author:sbach |
Hits:
Description: Gauss-Seidel method for solving linear matrix equations
Bisection method for solving univariate higher-degree equations
Newton's method for solving higher-degree equations
Taylor series for finding approximate solutions of equations. In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. Platform: |
Size: 2048 |
Author:yuguo |
Hits:
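The Gauss–Seidel iteration described in the entry above can be sketched as follows; this is a minimal Python illustration (not the listing's code), and the example system is an assumption chosen for clarity.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: sweep the rows in order, using each freshly
    updated component immediately (successive displacement)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated x[:i] and not-yet-updated x[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Symmetric, diagonally dominant example with exact solution [1, 1]
A = [[4.0, -1.0], [-1.0, 4.0]]
b = [3.0, 3.0]
x = gauss_seidel(A, b)
```

Because each sweep reuses the newest values, Gauss-Seidel typically converges faster than Jacobi on the same diagonally dominant system.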
Description: In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The LMA interpolates between the Gauss–Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed as Gauss–Newton using a trust region approach. Platform: |
Size: 2757 |
Author:nechaev_V |
Hits:
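The interpolation between Gauss–Newton and gradient descent described above can be sketched with a simple damped-normal-equations version of LM. This is a minimal Python sketch, not the listing's implementation; the damping schedule (halve on success, double on failure), the exponential model, and the synthetic data are all assumptions for illustration.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-3,
                        max_iter=200, tol=1e-12):
    """Minimal LM: solve (J^T J + lam*I) dp = -J^T r each step.
    Small lam ~ Gauss-Newton step; large lam ~ scaled gradient descent."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        g = J.T @ r                  # gradient of 0.5 * ||r||^2
        H = J.T @ J                  # Gauss-Newton Hessian approximation
        dp = np.linalg.solve(H + lam * np.eye(len(p)), -g)
        if np.linalg.norm(dp) < tol:
            break
        if np.sum(residual(p + dp) ** 2) < np.sum(r ** 2):
            p = p + dp               # accept: trust the quadratic model more
            lam *= 0.5
        else:
            lam *= 2.0               # reject: damp harder, shorten the step
    return p

# Fit y = a * exp(b * t) to synthetic data generated with a=2, b=-1
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, [1.0, 0.0])
```

The accept/reject rule is what makes LM more robust than plain Gauss–Newton: a bad linearization only costs a rejected step and a larger damping factor, never a divergent update.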
Description: The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate zeroes of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required. Platform: |
Size: 2887 |
Author:nechaev_V |
Hits:
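The Gauss–Newton iteration described above can be sketched as follows. This is a minimal Python illustration (not the listing's code); the line-fitting example and the use of the normal equations are assumptions chosen to show the key property, that only first derivatives (the Jacobian) are needed.

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, max_iter=50, tol=1e-12):
    """Gauss-Newton: repeatedly solve the linearized least-squares
    problem J dp = -r via the normal equations, then set p <- p + dp."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        # J^T J stands in for the Hessian: no second derivatives required
        dp = np.linalg.solve(J.T @ J, -J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p

# Overdetermined example: fit y = p0 + p1*x to 5 points lying on y = 1 + 2x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
res = lambda p: p[0] + p[1] * x - y
jac = lambda p: np.column_stack([np.ones_like(x), x])
p = gauss_newton(res, jac, [0.0, 0.0])
```

For this linear residual the method converges in a single step; for genuinely nonlinear residuals it iterates, and the overdetermined system (5 equations, 2 unknowns) is handled naturally because each step is itself a least-squares solve.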