Description: A back-propagation (BP) algorithm I wrote in MATLAB as part of my graduation project. The program runs without any problems; corrections and feedback are welcome. Platform: |
Size: 3072 |
Author:FX |
Hits:
Description: Neural network programming in MATLAB: newff() is used to create a two-layer feedforward network. The network input range is [-1 1], and the first layer has 10 tansig neurons. A minimal sketch follows this entry. Platform: |
Size: 5120 |
Author:龙海侠 |
Hits:
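A minimal sketch of the kind of network described in the entry above, assuming the older newff(PR, [S1 S2], {TF1 TF2}) calling syntax of the MATLAB Neural Network Toolbox; the training data and the single purelin output neuron are illustrative assumptions, not taken from the uploaded file.

% Two-layer feedforward network: one input in [-1 1], 10 tansig neurons
% in the first (hidden) layer, one assumed purelin output neuron.
P = -1:0.1:1;                               % sample inputs in [-1 1]
T = sin(pi*P);                              % illustrative targets
net = newff([-1 1], [10 1], {'tansig' 'purelin'});
net.trainParam.epochs = 200;                % modest training budget
net = train(net, P, T);                     % backpropagation-based training
Y = sim(net, P);                            % network response after training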
Description: The Adaptive Neural Network Library is a collection of blocks that implement several adaptive neural networks featuring different adaptation algorithms.
There are 11 blocks that implement basically these 5 kinds of neural networks:
1) Adaptive Linear Network (ADALINE)
2) Multilayer Perceptron with Extended Backpropagation algorithm (EBPA)
3) Radial Basis Functions (RBF) Networks
4) RBF Networks with Extended Minimal Resource Allocating algorithm (EMRAN)
5) RBF and Piecewise Linear Networks with Dynamic Cell Structure (DCS) algorithm
A Simulink example regarding the approximation of a scalar nonlinear function of 4 variables is included. Platform: |
Size: 198656 |
Author:叶建槐 |
Hits:
Description: Batch version of the back-propagation algorithm. A usage sketch follows this entry.
% Given a set of corresponding input-output pairs and an initial network
% [W1,W2,critvec,iter]=batbp(NetDef,W1,W2,PHI,Y,trparms) trains the
% network with backpropagation.
%
% The activation functions must be either linear or tanh. The network
% architecture is defined by the matrix NetDef consisting of two
% rows. The first row specifies the hidden layer while the second
% specifies the output layer.
Platform: |
Size: 2048 |
Author:张镇 |
Hits:
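A hedged usage sketch for the batbp call documented above, following the NetDef convention in which the first row defines the hidden layer and the second the output layer, with 'H' assumed to mean a tanh unit, 'L' a linear unit, and '-' no unit. The data, the weight initialisation, and the trparms vector are illustrative assumptions; consult the uploaded file for the exact trparms format.

NetDef = ['HHHHH'; 'L----'];          % 5 tanh hidden units, 1 linear output unit
PHI = rand(2, 50);                    % 2 inputs x 50 training patterns (made up)
Y   = sum(PHI);                       % 1 x 50 target vector (made up)
W1  = 0.1*randn(5, 3);                % hidden-layer weights, last column = biases
W2  = 0.1*randn(1, 6);                % output-layer weights, last entry = bias
trparms = [500 1e-4 0.01 0];          % assumed [max_iter stop_crit eta alpha]
[W1, W2, critvec, iter] = batbp(NetDef, W1, W2, PHI, Y, trparms);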
Description: This is a program for backpropagation of error in MATLAB. Platform: |
Size: 3072 |
Author:sarvesh |
Hits:
Description: A program that trains a backpropagation neural network to simulate the behavior of an XOR gate. A minimal sketch follows this entry. Platform: |
Size: 497664 |
Author:Juan Carlos |
Hits:
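A minimal sketch of the XOR task described above, assuming the MATLAB Neural Network Toolbox (newff with gradient-descent backpropagation via traingd) rather than the uploaded code itself; the network size and training parameters are illustrative assumptions.

P = [0 0 1 1; 0 1 0 1];               % the four XOR input patterns (columns)
T = [0 1 1 0];                        % XOR targets
net = newff([0 1; 0 1], [4 1], {'tansig' 'logsig'}, 'traingd');
net.trainParam.epochs = 5000;         % plain gradient descent is slow on XOR
net.trainParam.lr     = 0.5;
net.trainParam.goal   = 1e-3;
net = train(net, P, T);               % backpropagation training
round(sim(net, P))                    % should reproduce [0 1 1 0] once trained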
Description: High information redundancy and correlation in face images result in inefficiencies when such images are used directly for recognition. In this paper, discrete cosine transforms are used to reduce image information redundancy, because only a subset of the transform coefficients is necessary to preserve the most important facial features such as hair outline, eyes and mouth. We demonstrate experimentally that when DCT coefficients are fed into a backpropagation neural network for classification, a high recognition rate can be achieved by using a very small proportion of transform coefficients. This makes DCT-based face recognition much faster than other approaches. A rough pipeline sketch follows this entry. Platform: |
Size: 25600 |
Author:mhm |
Hits:
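A rough sketch of the pipeline the abstract above describes, assuming grayscale face images and the Image Processing Toolbox's dct2: each image is transformed, only a small low-frequency corner of the coefficient matrix is kept, and the resulting vectors would then be fed to a backpropagation classifier. The file name and the number of retained coefficients are illustrative assumptions.

img  = im2double(imread('face01.pgm'));  % hypothetical grayscale face image
C    = dct2(img);                        % 2-D discrete cosine transform
k    = 8;                                % assumed: keep an 8x8 low-frequency corner
feat = reshape(C(1:k, 1:k), [], 1);      % 64-element feature vector for this image
% Stacking one such column per image gives the input matrix for a
% backpropagation network (e.g. newff/train) used as the classifier.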
Description: An MLP code with a backpropagation training algorithm, designed for function approximation problems. A hand-written backpropagation sketch follows this entry. Platform: |
Size: 1024 |
Author:Paulo |
Hits:
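A hand-written sketch of the same idea under stated assumptions: a 1-input, 5-hidden-unit, 1-output MLP trained by batch backpropagation (gradient descent on the squared error) to approximate y = x.^2 on [-1, 1]. The layer sizes, learning rate, and target function are illustrative, not taken from the uploaded code.

x = linspace(-1, 1, 41);  y = x.^2;         % training data (row vectors)
nH = 5;  eta = 0.05;  N = numel(x);
W1 = 0.5*randn(nH, 2);                      % hidden weights, last column = bias
W2 = 0.5*randn(1, nH+1);                    % output weights, last entry = bias
for epoch = 1:5000
    H  = tanh(W1*[x; ones(1, N)]);          % hidden activations
    o  = W2*[H; ones(1, N)];                % linear network output
    e  = y - o;                             % output error
    dW2 = eta * e * [H; ones(1, N)]';       % output-layer gradient step
    dH  = (W2(:, 1:nH)' * e) .* (1 - H.^2); % error backpropagated through tanh
    dW1 = eta * dH * [x; ones(1, N)]';      % hidden-layer gradient step
    W2 = W2 + dW2;  W1 = W1 + dW1;
end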
Description: The Adaline is essentially a single-layer backpropagation network. It is trained on a pattern recognition task, where the aim is to classify a bitmap representation of the digits 0-9 into the corresponding classes. Due to the limited capabilities of the Adaline, the network only recognizes the exact training patterns. When the application is ported to the multi-layer backpropagation network, a remarkable degree of fault tolerance can be achieved. A minimal LMS training sketch follows this entry.
Platform: |
Size: 3072 |
Author:ali |
Hits:
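A minimal sketch of Adaline training with the LMS (Widrow-Hoff) rule, the standard learning rule for the Adaline described above; the tiny 4-pixel, two-class toy patterns here are illustrative assumptions standing in for the 0-9 digit bitmaps of the actual package.

X = [1 1 0 0;
     0 1 1 0;
     1 0 0 1;
     0 0 1 1];                           % 4 features x 4 patterns (columns = patterns)
t = [1 1 -1 -1];                         % target class for each pattern
w = zeros(1, 4);  b = 0;  eta = 0.1;
for epoch = 1:50
    for p = 1:size(X, 2)
        o = w*X(:, p) + b;               % linear Adaline output
        e = t(p) - o;                    % LMS error
        w = w + eta * e * X(:, p)';      % Widrow-Hoff weight update
        b = b + eta * e;
    end
end
sign(w*X + b)                            % should match t for these separable patterns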