Description: A new algorithm for computing the Mahalanobis distance; it applies a quadratic operation with the covariance matrix so that the distance takes the relative scaling and correlation of the features into account. (A Python sketch of the computation follows this entry.) Platform: |
Size: 1024 |
Author:bodiz2002 |
Hits:
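As a point of reference for the entry above (the uploaded source itself is not reproduced here), the following is a minimal NumPy sketch of the standard Mahalanobis distance, d(x, y) = sqrt((x - y)^T S^-1 (x - y)); the quadratic operation with the inverse covariance matrix is what makes the distance account for relative scale and correlation. The function name and test values are illustrative, not taken from the upload.

import numpy as np

def mahalanobis_distance(x, y, cov):
    """Mahalanobis distance between vectors x and y under covariance matrix cov."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    # Solve cov @ z = diff rather than forming the explicit inverse (more stable).
    z = np.linalg.solve(cov, diff)
    return float(np.sqrt(diff @ z))

# Example: two 2-D points under a non-diagonal covariance matrix.
cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])
print(mahalanobis_distance([1.0, 2.0], [3.0, 1.0], cov))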
Description: The Mahalanobis Distance Classifier
If one relaxes the assumptions required by the Euclidean classifier and removes the last one, the one requiring the covariance matrix to be diagonal and with equal elements, the optimal Bayesian classifier becomes equivalent to the minimum Mahalanobis distance classifier. That is, an unknown x is assigned to class ωi if its Mahalanobis distance from the class mean mi is the smallest, i.e., if (x - mi)^T S^-1 (x - mi) <= (x - mj)^T S^-1 (x - mj) for every j ≠ i, where S is the common covariance matrix. (A Python sketch of this decision rule follows this entry.) Platform: |
Size: 44032 |
Author:nehad |
Hits:
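The decision rule quoted above can be sketched in a few lines of Python (an illustration with a shared covariance matrix; all names and numbers below are placeholders, not the uploaded code): the unknown x is assigned to the class whose mean minimizes the quadratic form (x - mi)^T S^-1 (x - mi).

import numpy as np

def min_mahalanobis_classify(x, means, cov):
    """Return the index i of the class omega_i whose mean is closest to x in Mahalanobis distance."""
    x = np.asarray(x, dtype=float)
    cov_inv = np.linalg.inv(cov)
    d2 = [float((x - m) @ cov_inv @ (x - m)) for m in map(np.asarray, means)]
    return int(np.argmin(d2))

means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]     # class means (placeholders)
cov = np.array([[1.0, 0.3],
                [0.3, 2.0]])                             # shared covariance matrix
print(min_mahalanobis_classify([2.5, 2.0], means, cov))  # -> 1 (closer to the second class)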
Description: The Mahalanobis distance is an effective measure of the similarity between two sample sets (a covariance-weighted distance between data); unlike the Euclidean distance, it takes the correlations between the features into account. In this experiment, a minimum Mahalanobis distance classifier is designed from the given sample data and used to classify the test points, and the result is compared with a minimum Euclidean distance classifier. The experiment shows that when the covariance matrix is the identity matrix, the minimum Mahalanobis distance classifier is equivalent to the minimum Euclidean distance classifier. (A Python sketch of this comparison follows this entry.) Platform: |
Size: 2048 |
Author:Jam_Jack |
Hits:
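The equivalence claimed in this entry is easy to check numerically; the snippet below (an assumed setup, not the uploaded experiment) classifies random test points with both rules and verifies that, with an identity covariance matrix, the minimum Mahalanobis distance rule gives exactly the same labels as the minimum Euclidean distance rule, since (x - m)^T I^-1 (x - m) = ||x - m||^2.

import numpy as np

rng = np.random.default_rng(0)
means = [np.array([0.0, 0.0]), np.array([4.0, 1.0]), np.array([1.0, 5.0])]  # placeholder class means
X = rng.normal(scale=3.0, size=(200, 2))                                    # arbitrary test points

def classify(X, means, cov):
    # Squared Mahalanobis distance of every point to every class mean, then argmin.
    cov_inv = np.linalg.inv(cov)
    d2 = np.stack([np.einsum('ij,jk,ik->i', X - m, cov_inv, X - m) for m in means], axis=1)
    return d2.argmin(axis=1)

maha_labels = classify(X, means, np.eye(2))   # Mahalanobis rule with identity covariance
eucl_labels = np.stack([np.linalg.norm(X - m, axis=1) for m in means], axis=1).argmin(axis=1)
print(np.array_equal(maha_labels, eucl_labels))   # True: the two classifiers agree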
Description: z = mahalanobis_classifier(m,S,X). This function implements the Mahalanobis (minimum Mahalanobis distance) classifier, given the class mean vectors and covariance matrices (a Python sketch of this interface follows this entry).
• M: the number of classes.
• l: the number of features (for each feature vector).
• N: the number of data vectors.
• m: lxM matrix, whose j-th column is the mean of the j-th class.
• S: lxlxM matrix; S(:,:,j) is the covariance matrix of the j-th normal distribution.
• P: M-dimensional vector whose j-th component is the a priori probability of the j-th class.
• X: lxN data matrix, whose columns are the feature vectors (the transpose of the data matrix in scikit-learn convention).
• y: N-dimensional vector containing the known class labels, i.e., the ground-truth or target vector in scikit-learn convention.
• z: N-dimensional vector containing the predicted class labels, i.e., the vector of predicted class labels in scikit-learn convention.
Platform: |
Size: 1024 |
Author:mnzars |
Hits:
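The interface documented above follows a MATLAB-style layout (class means as columns of m, one covariance slice per class in S). A rough Python analogue of that interface, written here as an assumption about the layout rather than a translation of the uploaded file, might look as follows.

import numpy as np

def mahalanobis_classifier(m, S, X):
    """m: l x M class means (one per column); S: l x l x M covariances; X: l x N data (one vector per column)."""
    l, M = m.shape
    _, N = X.shape
    d2 = np.empty((M, N))
    for j in range(M):
        diff = X - m[:, [j]]                    # l x N deviations from the mean of class j
        Sj_inv = np.linalg.inv(S[:, :, j])      # inverse covariance of class j
        d2[j] = np.einsum('in,ij,jn->n', diff, Sj_inv, diff)  # squared Mahalanobis distances
    return d2.argmin(axis=0) + 1                # 1-based class labels, as in MATLAB

# Tiny illustrative example with M = 2 classes in l = 2 dimensions.
m = np.array([[0.0, 3.0],
              [0.0, 3.0]])                      # column j = mean of class j
S = np.stack([np.eye(2), 2.0 * np.eye(2)], axis=2)
X = np.array([[0.2, 2.9],
              [0.1, 3.2]])                      # column n = feature vector n
print(mahalanobis_classifier(m, S, X))          # [1 2]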