CLOP Feature Extraction book release -- November 1, 2005

This is a list of known bugs and things to do. If you found other bugs and/or
want to help fix some of these problems, write to:
modelselect@clopinet.com

==============================================================================
- April 22, 2006: Clop release with sample code for the feature extraction book.

- February 08: small change in algorithm/test, line 87
            if k==1 & issparse(get_x(dt)), X=sparse(X); end % IG Feb 7, 2006
  Added methods to compute statistics in @algorithm and sample_code.

- December 22: Added several methods to chain to support filter methods.

- December 17: small correction in eval_name. Removed child from the fields displayed in @svc/svc. Change in @chain/get_fidx.

- November 24: Modified create_data_struct so that it does not load the test data at training time, which alleviates memory problems.

- November 3: 
@algorithm: allows deriving algorithm objects for which training is undefined. Defined get_fidx for all algorithms.
@chain: get_fidx defined for chain.

- November 1: small exception in "chain" to accommodate "zarbi".

- October 14: Compared to October 12, corrected a bias error in the LibSVM interface.

- October 12: Compared to the October 7th release, a bug in main.m was fixed (the same
index j was used in nested loops). We also changed the code so that zero is never
returned as a discriminant value: ties are now broken using the class frequencies.
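The tie-breaking rule above can be sketched as follows. CLOP itself is MATLAB; this is an illustrative Python sketch with hypothetical names (`break_ties`, the `1e-12` offset), not the actual code:

```python
from collections import Counter

def break_ties(scores, train_labels):
    """Replace zero discriminant values by a tiny nonzero offset whose
    sign favors the more frequent class, so that zero is never returned."""
    counts = Counter(train_labels)
    # Sign of the majority class: +1 if class +1 is at least as frequent.
    majority_sign = 1 if counts.get(1, 0) >= counts.get(-1, 0) else -1
    eps = 1e-12  # smaller than any meaningful discriminant value
    return [s if s != 0 else majority_sign * eps for s in scores]
```

Any zero score is thus pushed to the majority-class side without disturbing the ranking of nonzero scores.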

- October 7: Compared to the September 30th release, the "ensemble" object was added. Each
learning object now has a "clean" method that removes the biggest parameter
structures so that models can easily be uploaded for "bonus" entries. The
Linux version of LibSVM was checked. RF still does not work under Unix.

- There are several bugs in RF:
* RF crashes with a segmentation fault on large datasets. 
* We did not get RF to work for Nova and Hiva (a problem with categorical variables).

- Class balancing is handled differently by the different methods. We need to experiment
a little and settle on one "good" approach.
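As an example of one such "good" approach, a common class-balancing scheme weights each example inversely to its class frequency, so that both classes contribute equally to the loss. Illustrative Python sketch (CLOP is MATLAB; `balance_weights` is a hypothetical name, not a CLOP routine):

```python
from collections import Counter

def balance_weights(labels):
    """Weight each example by n / (k * n_c), where n is the number of
    examples, k the number of classes, and n_c the size of its class.
    Each class then receives the same total weight, n / k."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return [n / (k * counts[y]) for y in labels]
```

With a 3-to-1 imbalance, the minority-class examples each get three times the weight of a majority-class example, and the weights still sum to the number of examples.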

- "Filters" and "wrappers" wrapping an arbitrary learning algorithm are not supported; only special cases are currently implemented. This may actually be preferable for testing model selection, since full generality would give too much flexibility.

- Change ensemble to progressively add new models?

- Some of the methods were copied over and may not work for all objects; this must be checked. In particular, look at subsref.m and subsasgn.m. 

- We will later provide a routine to check that a model is well formed and qualifies for a bonus entry.

- The soft margin parameter C in svc is presently not available. We may add it later.

- The role of beta in the neural net is unclear. We may add this other hyperparameter later.

- Memory use and compute time need to be optimized.

- Some hyperparameters could optionally be "learned" (set internally with heuristics or implicit leave-one-out.)
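As one example of the implicit leave-one-out idea: for ridge regression, the leave-one-out residuals have a closed form, r_i / (1 - H_ii) with H the hat matrix, so the ridge parameter can be set internally without an explicit cross-validation loop. This is an illustrative Python sketch of that standard identity, not CLOP code:

```python
import numpy as np

def ridge_loo_select(X, y, lambdas):
    """Pick the ridge parameter by closed-form leave-one-out error.
    For ridge regression, the LOO residual of example i equals
    r_i / (1 - H_ii), where H = X (X'X + lambda I)^-1 X'."""
    best_lam, best_err = None, np.inf
    n, d = X.shape
    for lam in lambdas:
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
        resid = y - H @ y                      # ordinary training residuals
        loo = resid / (1.0 - np.diag(H))       # implicit leave-one-out residuals
        err = np.mean(loo ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```

The whole grid search costs one matrix solve per candidate value instead of n refits per value.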

- The score code must be simplified and improved.

- Some speed-ups are needed: testing of kernel methods; Relief. Add an option to stop RFE when the desired number of features is reached.
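The RFE early-stopping option could look like the following. Illustrative Python sketch (CLOP is MATLAB; `fit_weights` stands for any routine returning linear weights, e.g. from a linear SVM):

```python
import numpy as np

def rfe(X, y, n_keep, fit_weights):
    """Recursive feature elimination that stops as soon as n_keep
    features remain, instead of ranking all the way down to one."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w = fit_weights(X[:, active], y)    # refit on surviving features
        worst = int(np.argmin(np.abs(w)))   # smallest-magnitude weight
        del active[worst]                   # eliminate that feature
    return active
```

Stopping at `n_keep` saves the refits on the smallest feature subsets, which is where an SVM-based RFE spends many of its iterations.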

- What is the right chunk size in test?
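The trade-off behind the chunk-size question: test predictions are computed chunk by chunk, so a larger chunk allocates bigger intermediate kernel/score matrices but incurs less per-call overhead. Illustrative Python sketch, with a hypothetical default of 1000 (the right value depends on memory and the method):

```python
import numpy as np

def predict_in_chunks(predict, X, chunk_size=1000):
    """Apply a prediction function to the test set chunk by chunk,
    bounding the size of intermediate kernel/score matrices at the
    cost of some per-call overhead."""
    out = []
    for start in range(0, len(X), chunk_size):
        out.append(predict(X[start:start + chunk_size]))
    return np.concatenate(out)
```

The result is identical to predicting on the full test set in one call, whatever chunk size is chosen.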

- Write a routine to validate the Clop models (and perhaps extract their hyperparameters).

- Changes to the computation of the error bar are planned; they affect sample_code and score_code.
