Description: Face images contain a high degree of information redundancy and correlation, which makes using them directly for recognition inefficient. In this paper, the discrete cosine transform (DCT) is used to reduce this redundancy, because only a small subset of the transform coefficients is needed to preserve the most important facial features, such as the hair outline, eyes, and mouth. We demonstrate experimentally that when these DCT coefficients are fed into a backpropagation neural network for classification, a high recognition rate can be achieved using only a very small proportion of the coefficients. This makes DCT-based face recognition much faster than other approaches.
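The snippet below is only a minimal sketch of the feature-extraction idea described above, not the code shipped with this submission; it assumes a hypothetical face image file face.pgm and the MATLAB Image Processing and Deep Learning toolboxes.

% Sketch: keep only a low-frequency block of 2-D DCT coefficients of a face
% image and use it as the feature vector for a backpropagation classifier.
img = im2double(imread('face.pgm'));     % hypothetical input face image
if size(img, 3) == 3
    img = rgb2gray(img);                 % work on the luminance channel only
end

C = dct2(img);                           % 2-D discrete cosine transform
k = 8;                                   % keep an 8-by-8 low-frequency block
features = reshape(C(1:k, 1:k), [], 1);  % small coefficient subset as features

% Given a feature matrix X (coefficients-by-samples) and one-hot targets T,
% a backpropagation classifier could then be trained, e.g. with patternnet:
%   net = patternnet(20);                % single hidden layer of 20 neurons
%   net = train(net, X, T);
%   predicted = vec2ind(net(X));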
File list:
sourcecode.m
readme.m
dctann.p