C4.5 algorithm

C4.5 is an algorithm developed by Ross Quinlan that is used to generate a decision tree.[1] C4.5 is an extension of Quinlan's earlier ID3 algorithm. The decision trees generated by C4.5 can be used for classification, and for this reason C4.5 is often referred to as a statistical classifier.

It became quite popular after ranking #1 in the pre-eminent paper "Top 10 Algorithms in Data Mining", published by Springer LNCS in 2008.[2]

Algorithm

C4.5 builds decision trees from a set of training data in the same way as ID3, using the concept of information entropy. The training data is a set S = {s_1, s_2, ...} of already classified samples. Each sample s_i consists of a p-dimensional vector (x_{1,i}, x_{2,i}, ..., x_{p,i}), where the x_j represent attribute values or features of the sample, together with the class in which s_i falls.

At each node of the tree, C4.5 chooses the attribute of the data that most effectively splits its set of samples into subsets enriched in one class or the other. The splitting criterion is the normalized information gain: the reduction in entropy produced by the split, divided by the entropy of the split itself (the gain ratio). The attribute with the highest normalized information gain is chosen to make the decision. The C4.5 algorithm then recurs on the smaller sublists.
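
To make the criterion concrete, the following sketch computes the entropy, information gain, split information, and gain ratio for a hypothetical three-way split; the class counts and names are invented purely for illustration and are not drawn from any particular data set.

    import java.util.List;

    /** Worked example of C4.5's splitting criterion (gain ratio) on a toy split. */
    public class GainRatioExample {

        // Shannon entropy (base 2) of a class distribution given as counts.
        static double entropy(List<Integer> counts) {
            int total = counts.stream().mapToInt(Integer::intValue).sum();
            double h = 0.0;
            for (int c : counts) {
                if (c == 0) continue;
                double p = (double) c / total;
                h -= p * (Math.log(p) / Math.log(2));
            }
            return h;
        }

        public static void main(String[] args) {
            // Parent node: 9 samples of class A and 5 of class B (14 in total).
            double parentEntropy = entropy(List.of(9, 5));

            // A candidate attribute splits the 14 samples into three subsets
            // with the following (class A, class B) counts.
            List<List<Integer>> subsets = List.of(
                    List.of(2, 3),   // 5 samples
                    List.of(4, 0),   // 4 samples
                    List.of(3, 2));  // 5 samples

            double childEntropy = 0.0;   // weighted average entropy after the split
            double splitInfo = 0.0;      // entropy of the subset sizes themselves
            int total = 14;
            for (List<Integer> subset : subsets) {
                int size = subset.stream().mapToInt(Integer::intValue).sum();
                double weight = (double) size / total;
                childEntropy += weight * entropy(subset);
                splitInfo -= weight * (Math.log(weight) / Math.log(2));
            }

            double infoGain = parentEntropy - childEntropy;   // difference in entropy
            double gainRatio = infoGain / splitInfo;          // the "normalized" information gain

            System.out.printf("information gain = %.3f%n", infoGain);
            System.out.printf("split info       = %.3f%n", splitInfo);
            System.out.printf("gain ratio       = %.3f%n", gainRatio);
        }
    }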

This algorithm has a few base cases:

  - All the samples in the list belong to the same class. When this happens, it simply creates a leaf node for the decision tree saying to choose that class.
  - None of the features provide any information gain. In this case, C4.5 creates a decision node higher up the tree using the expected value of the class.
  - An instance of a previously unseen class is encountered. Again, C4.5 creates a decision node higher up the tree using the expected value.

Pseudocode

In pseudocode, the general algorithm for building decision trees is:[3]

  1. Check for base cases.
  2. For each attribute a:
    1. Find the normalized information gain ratio from splitting on a.
  3. Let a_best be the attribute with the highest normalized information gain ratio.
  4. Create a decision node that splits on a_best.
  5. Recur on the sublists obtained by splitting on a_best, and add those nodes as children of the decision node.
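
The steps above can be read directly off a small implementation. The following is a minimal, illustrative sketch restricted to categorical attributes; the Sample and Node types and all helper names are invented for the example, and real C4.5 additionally handles continuous attributes, missing values, and post-pruning.

    import java.util.*;

    /** Minimal C4.5-style tree builder for categorical attributes (illustrative only). */
    public class SimpleTreeBuilder {

        // A training sample: attribute name -> value, plus a class label.
        record Sample(Map<String, String> attributes, String label) {}

        // A tree node: either a leaf carrying a class label, or a decision node on an attribute.
        static class Node {
            String splitAttribute;                 // null for a leaf
            String label;                          // class prediction at a leaf
            Map<String, Node> children = new HashMap<>();
        }

        // Step 1: base cases; steps 2-3: pick the best attribute; steps 4-5: split and recur.
        static Node build(List<Sample> samples, Set<String> attributes) {
            Node node = new Node();
            Set<String> labels = new HashSet<>();
            for (Sample s : samples) labels.add(s.label());
            if (labels.size() == 1 || attributes.isEmpty()) {
                node.label = majorityLabel(samples);               // leaf node
                return node;
            }
            String best = null;
            double bestRatio = -1.0;
            for (String a : attributes) {                          // gain ratio of every attribute
                double r = gainRatio(samples, a);
                if (r > bestRatio) { bestRatio = r; best = a; }
            }
            node.splitAttribute = best;                            // decision node on a_best
            Set<String> remaining = new HashSet<>(attributes);
            remaining.remove(best);
            for (Map.Entry<String, List<Sample>> e : partition(samples, best).entrySet()) {
                node.children.put(e.getKey(), build(e.getValue(), remaining));  // recur on sublists
            }
            return node;
        }

        static Map<String, List<Sample>> partition(List<Sample> samples, String attribute) {
            Map<String, List<Sample>> parts = new HashMap<>();
            for (Sample s : samples) {
                parts.computeIfAbsent(s.attributes().get(attribute), k -> new ArrayList<>()).add(s);
            }
            return parts;
        }

        static String majorityLabel(List<Sample> samples) {
            Map<String, Integer> counts = new HashMap<>();
            for (Sample s : samples) counts.merge(s.label(), 1, Integer::sum);
            return Collections.max(counts.entrySet(), Map.Entry.comparingByValue()).getKey();
        }

        static double entropy(Collection<Sample> samples) {
            Map<String, Integer> counts = new HashMap<>();
            for (Sample s : samples) counts.merge(s.label(), 1, Integer::sum);
            double h = 0.0;
            for (int c : counts.values()) {
                double p = (double) c / samples.size();
                h -= p * (Math.log(p) / Math.log(2));
            }
            return h;
        }

        static double gainRatio(List<Sample> samples, String attribute) {
            double childEntropy = 0.0, splitInfo = 0.0;
            for (List<Sample> subset : partition(samples, attribute).values()) {
                double weight = (double) subset.size() / samples.size();
                childEntropy += weight * entropy(subset);
                splitInfo -= weight * (Math.log(weight) / Math.log(2));
            }
            double infoGain = entropy(samples) - childEntropy;
            return splitInfo == 0.0 ? 0.0 : infoGain / splitInfo;
        }

        public static void main(String[] args) {
            // A tiny made-up training set, just to exercise the builder.
            List<Sample> training = List.of(
                    new Sample(Map.of("outlook", "sunny", "windy", "false"), "no"),
                    new Sample(Map.of("outlook", "sunny", "windy", "true"), "no"),
                    new Sample(Map.of("outlook", "overcast", "windy", "false"), "yes"),
                    new Sample(Map.of("outlook", "rain", "windy", "false"), "yes"),
                    new Sample(Map.of("outlook", "rain", "windy", "true"), "no"));
            Node root = build(training, Set.of("outlook", "windy"));
            System.out.println("root splits on: " + root.splitAttribute);
        }
    }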

Implementations

J48 is an open source Java implementation of the C4.5 algorithm in the Weka data mining tool.
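
As an illustration of how J48 is typically invoked programmatically, the sketch below assumes the standard Weka 3 Java API; the ARFF file name is a placeholder and the option values shown are Weka's defaults for J48.

    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    /** Training Weka's J48 (its C4.5 implementation) on an ARFF data set.
     *  "weather.arff" is a placeholder file name. */
    public class J48Example {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("weather.arff");        // load training data
            data.setClassIndex(data.numAttributes() - 1);            // last attribute is the class

            J48 tree = new J48();
            tree.setOptions(new String[] {"-C", "0.25", "-M", "2"}); // confidence factor, min. leaf size
            tree.buildClassifier(data);                              // induce the C4.5-style tree

            System.out.println(tree);                                // print the decision tree
        }
    }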

Improvements from ID3 algorithm

C4.5 made a number of improvements to ID3. Some of these are:

  - Handling both continuous and discrete attributes. In order to handle continuous attributes, C4.5 creates a threshold and then splits the list into those whose attribute value is above the threshold and those that are less than or equal to it.[4]
  - Handling training data with missing attribute values. Missing attribute values are simply not used in gain and entropy calculations.
  - Handling attributes with differing costs.
  - Pruning trees after creation. C4.5 goes back through the tree once it has been created and attempts to remove branches that do not help by replacing them with leaf nodes.

Improvements in C5.0/See5 algorithm

Quinlan went on to create C5.0 and See5 (C5.0 for Unix/Linux, See5 for Windows), which he markets commercially. C5.0 offers a number of improvements on C4.5. Some of these are:[5][6]

  - Speed: C5.0 is significantly faster than C4.5 (several orders of magnitude).
  - Memory usage: C5.0 is more memory-efficient than C4.5.
  - Smaller decision trees: C5.0 gets similar results to C4.5 with considerably smaller decision trees.
  - Support for boosting: boosting improves the trees and gives them more accuracy.
  - Weighting: C5.0 allows different cases and misclassification types to be weighted.
  - Winnowing: a C5.0 option automatically winnows the attributes to remove those that may be unhelpful.

Source for a single-threaded Linux version of C5.0 is available under the GPL.

See also

References

  1. Quinlan, J. R. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
  2. Umd.edu - Top 10 Algorithms in Data Mining
  3. S. B. Kotsiantis, "Supervised Machine Learning: A Review of Classification Techniques", Informatica 31 (2007), 249-268.
  4. J. R. Quinlan. "Improved Use of Continuous Attributes in C4.5". Journal of Artificial Intelligence Research, 4:77-90, 1996.
  5. Is See5/C5.0 Better Than C4.5?
  6. M. Kuhn and K. Johnson, Applied Predictive Modeling, Springer 2013

External links
