Introduction: initialization section of a MATLAB script for an adaptive filter used for signal enhancement.
clc;

% Parameters
N  = 30;       % filter length
M  = 30;       % delay
w0 = 1;        % initial value for the adaptive filter coefficients
SF = 2048;     % factor for reducing the data samples (11-bit ADC assumed)
mu = 0.04;     % step size

% Variable glossary
%   X : delayed input data vector
%   Y : measured signal
%   W : coefficient vector
%   E : enhanced signal

% Initialization (e_max is assumed to be defined elsewhere in the script)
w     = (randn(1,M) - randn(1,M))/100;   % small random initial weights
d     = zeros(1,M);
u     = zeros(1,M);
u_out = zeros(1, e_max - M);
f_out = zeros(1, e_max - M);
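The adaptation loop itself is not shown in this excerpt. As a hedged illustration of how parameters like these are typically used, below is a minimal Python/NumPy sketch of an LMS adaptive line enhancer with the same N (filter length), M (delay), and mu (step size). The synthetic signal, the delayed-regressor construction, and the plain LMS update are assumptions for illustration, not the original script.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters mirroring the script above
N, M, mu = 30, 30, 0.04

# Synthetic measured signal Y: a sinusoid buried in white noise (assumption)
n = np.arange(5000)
y = np.sin(2 * np.pi * 0.02 * n) + 0.1 * rng.standard_normal(n.size)

w = np.zeros(N)           # coefficient vector W
e = np.zeros(n.size)      # error / whitened residual
s = np.zeros(n.size)      # filter output: the enhanced narrowband signal E

for k in range(M + N, n.size):
    # Delayed input vector X: N samples ending M steps in the past,
    # most recent first. The delay decorrelates the white noise so the
    # filter can only predict (and hence enhance) the periodic part.
    x = y[k - M : k - M - N : -1]
    s[k] = w @ x                  # prediction of the narrowband component
    e[k] = y[k] - s[k]            # prediction error
    w += mu * e[k] * x            # LMS weight update
```

After convergence, `s` tracks the sinusoid while `e` is left with mostly noise; the error power over the last samples should be well below its value early in the run.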