Introduction
The leave-one-out (LOO) cross-validation scheme is a method for estimating the average generalization error. When [Eloo, H] = loo(NetDef, W1, W2, PHI, Y, trparms) is called with trparms(1) > 0, the network is retrained for at most trparms(1) iterations for each input-output pair in the data set, starting from the initial weights (W1, W2). If trparms(1) = 0, an approximation to the LOO estimate based on linear unlearning is produced instead; this is generally less accurate but much faster to compute.
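The retraining scheme above can be sketched as follows. This is a minimal conceptual illustration of LOO cross-validation, not the toolbox's loo() function: a simple least-squares model stands in for the neural network, and the function name loo_error is hypothetical.

```python
# Conceptual sketch of leave-one-out cross-validation (LOO CV).
# A least-squares fit stands in for retraining the network from (W1, W2);
# the averaging over held-out squared errors is the same idea.
import numpy as np

def loo_error(PHI, Y):
    """Average squared prediction error, leaving out one pair at a time."""
    N = PHI.shape[0]
    errors = []
    for i in range(N):
        mask = np.arange(N) != i          # all pairs except the i-th
        # "Retrain" on the remaining N-1 pairs (here: ordinary least squares).
        w, *_ = np.linalg.lstsq(PHI[mask], Y[mask], rcond=None)
        # Squared error on the single held-out pair.
        errors.append((Y[i] - PHI[i] @ w) ** 2)
    return np.mean(errors)

# Tiny example: data generated as y = 2*x plus a little noise.
rng = np.random.default_rng(0)
PHI = rng.normal(size=(20, 1))
Y = 2 * PHI[:, 0] + 0.1 * rng.normal(size=20)
Eloo = loo_error(PHI, Y)
print(Eloo)
```

Because each of the N fits uses N-1 pairs, full LOO costs N retrainings; the linear-unlearning approximation mentioned above avoids that cost at the price of accuracy.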