Adding a New Algorithm

In machine learning, a plethora of learning algorithms has been defined for different purposes, and many variations of existing ones, as well as completely new learning methods, are frequently proposed.
Furthermore, learning algorithms can be combined in ensemble methods, composed into meta-learning algorithms (like AdaBoost), or used according to particular learning schemas, such as One-vs-All or One-vs-One for multi-class classification. KeLP tries to formalize this large landscape by proposing a comprehensive set of interfaces to support the implementation of new algorithms:

  • ClassificationLearningAlgorithm: for algorithms that learn from labelled data how to classify new instances. The current implementations include C-SVM, ν-SVM and OneClassSVM (Chang and Lin, 2011), as well as Pegasos (Shalev-Shwartz et al., 2007);
  • OnlineLearningAlgorithm: for algorithms that support incremental learning, like the Perceptron (Rosenblatt, 1957) or the PassiveAggressive algorithms (Crammer et al., 2006);
  • RegressionLearningAlgorithm: for algorithms that learn a regression function from labelled data. Currently KeLP implements ε-SVR, as well as linear and kernelized PassiveAggressive regression;
  • ClusteringAlgorithm: for algorithms that organize a dataset into clusters. Currently KeLP implements kernel-based K-means (Kulis et al., 2005);
  • MetaLearningAlgorithm: for learning approaches that rely on other algorithms. We implemented multi-classification schemas, e.g., One-vs-One and One-vs-All, as well as budgeted policies for online learning algorithms, like RandomizedPerceptron (Cavallanti et al., 2006) and Stoptron (Orabona et al., 2008);
  • EnsembleLearningAlgorithm: for algorithms that combine other algorithms in order to provide a more robust solution;
  • BinaryLearningAlgorithm: this interface is implemented to derive binary classifiers. Notice that one-class classifiers can also extend this interface, as they learn a binary classifier even without being provided with negative examples. Moreover, this interface can also be used to implement univariate regression algorithms, such as ε-SVR (Chang and Lin, 2011).

Algorithms exploiting kernel functions must implement the interface KernelMethod, while algorithms operating directly on an explicit feature space must implement the interface LinearMethod. In describing how to implement a new LearningAlgorithm, we will use the mini-batch version of Pegasos as a practical example. In its basic version, Pegasos is an efficient solver for linear binary SVMs; thus, we have to implement ClassificationLearningAlgorithm, BinaryLearningAlgorithm and LinearMethod. Optionally, the class can be annotated with @JsonTypeName in order to specify an alternative "type name" to be used by the serialization/deserialization mechanism.

KeLP decouples learning algorithms from the prediction functions that are used to provide predictions on new data. Regarding PegasosLearningAlgorithm, the proper prediction function is BinaryLinearClassifier, which is already implemented in the platform. Thus, we have to add a corresponding parameter, i.e., classifier. An empty constructor must be defined, and all the learning parameters must be associated with corresponding getter and setter methods, in order to make the JSON serialization/deserialization mechanism work. In this case, the learning parameters are the regularization coefficient lambda, the number of iterations T, and the number of examples k exploited in each mini-batch iteration.
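A minimal sketch of this skeleton follows. It is a simplified, self-contained stand-in rather than the real KeLP class: the implemented interfaces, the classifier field and the @JsonTypeName annotation are omitted, and the default values are purely illustrative.

```java
// Simplified skeleton of the learning algorithm class. The real implementation would
// declare "implements ClassificationLearningAlgorithm, BinaryLearningAlgorithm,
// LinearMethod" and could carry @JsonTypeName to customize JSON (de)serialization.
class PegasosLearningAlgorithm {

    private float lambda = 0.01f; // regularization coefficient (illustrative default)
    private int T = 1000;         // number of iterations (illustrative default)
    private int k = 1;            // examples exploited in each mini-batch iteration

    // An empty constructor is mandatory for the JSON serialization/deserialization mechanism.
    public PegasosLearningAlgorithm() {}

    // Every learning parameter needs a getter/setter pair so that it can be (de)serialized.
    public float getLambda() { return lambda; }
    public void setLambda(float lambda) { this.lambda = lambda; }

    public int getT() { return T; }
    public void setT(int T) { this.T = T; }

    public int getK() { return k; }
    public void setK(int k) { this.k = k; }
}
```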

According to the selected interfaces, some specific methods have to be implemented. As for any LinearMethod, we need to implement setRepresentation and getRepresentation, which refer to the String identifier of the specific Representation the algorithm must exploit. Obviously, a corresponding parameter must be created, i.e., representation.
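These accessors reduce to a plain getter/setter pair over a String field; the fragment below shows only this parameter (the enclosing class name is a placeholder, not a KeLP type):

```java
// Fragment of the algorithm class implementing the LinearMethod accessors.
// The representation parameter names the vector representation the algorithm
// will read from each example, since KeLP examples can carry several representations.
class LinearMethodFragment {
    private String representation;

    public String getRepresentation() { return representation; }
    public void setRepresentation(String representation) { this.representation = representation; }
}
```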

As for any BinaryLearningAlgorithm, we need to implement getLabel and setLabel, which refer to the Label that must be considered as the positive class. Obviously, a corresponding parameter must be created, i.e., label. Moreover, to be compliant with the LearningAlgorithm interface, the methods getLabels and setLabels must be implemented; for the special case of a BinaryLearningAlgorithm, they must operate on a single-entry List.
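A self-contained sketch of this single-entry-list pattern follows; the Label class below is a simplified stand-in for KeLP's real Label type, and the fragment class name is a placeholder:

```java
import java.util.Collections;
import java.util.List;

// Simplified stand-in for KeLP's Label type.
class Label {
    private final String name;
    Label(String name) { this.name = name; }
    String getName() { return name; }
}

// Fragment of the algorithm class showing the label accessors.
class BinaryLearningFragment {
    private Label label; // the class to be considered as positive

    public Label getLabel() { return label; }
    public void setLabel(Label label) { this.label = label; }

    // LearningAlgorithm accessors: a binary learner operates on a single-entry list.
    public List<Label> getLabels() {
        return Collections.singletonList(label);
    }

    public void setLabels(List<Label> labels) {
        if (labels == null || labels.size() != 1) {
            throw new IllegalArgumentException(
                "A BinaryLearningAlgorithm expects exactly one label");
        }
        this.label = labels.get(0);
    }
}
```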

Finally, as any ClassificationLearningAlgorithm we need to implement:

  • getPredictionFunction: the proper prediction function must be returned, in this case the classifier object. It is good practice to specialize the return type, i.e., the method must return a BinaryLinearClassifier instead of a generic Classifier;
  • duplicate: a new instance of the algorithm must be created and returned, copying all the learning parameters but not the state variables (which must be left at their default values). It is good practice to specialize the return type, i.e., the method must return a PegasosLearningAlgorithm instead of a generic LearningAlgorithm;
  • reset: it must restore the algorithm to its default state, forgetting any learning process already conducted;
  • learn: this method must implement the learning process of Pegasos.
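The core of learn is the mini-batch Pegasos update (Shalev-Shwartz et al., 2007). The sketch below implements it on plain double arrays instead of KeLP's Example and Vector abstractions, so data layout and sampling are simplified stand-ins: at each iteration t, a mini-batch of k examples is drawn, the margin-violating ones contribute a sub-gradient step with learning rate 1/(λt), and the weight vector is then projected onto the ball of radius 1/√λ.

```java
import java.util.Random;

class MiniBatchPegasos {

    /**
     * Trains a linear binary classifier with mini-batch Pegasos.
     * X: examples as dense feature vectors; y: labels in {-1, +1};
     * lambda: regularization coefficient; T: iterations; k: mini-batch size.
     */
    static double[] learn(double[][] X, int[] y, double lambda, int T, int k, Random rnd) {
        double[] w = new double[X[0].length];
        for (int t = 1; t <= T; t++) {
            double eta = 1.0 / (lambda * t);
            // Accumulate the sub-gradient contribution of the margin violators in the mini-batch.
            double[] grad = new double[w.length];
            for (int j = 0; j < k; j++) {
                int i = rnd.nextInt(X.length);
                if (y[i] * dot(w, X[i]) < 1.0) {
                    for (int d = 0; d < w.length; d++) grad[d] += y[i] * X[i][d];
                }
            }
            // w <- (1 - eta*lambda) * w + (eta/k) * sum of violating y_i * x_i
            for (int d = 0; d < w.length; d++) {
                w[d] = (1.0 - eta * lambda) * w[d] + (eta / k) * grad[d];
            }
            // Projection onto the ball of radius 1/sqrt(lambda).
            double norm = Math.sqrt(dot(w, w));
            double radius = 1.0 / Math.sqrt(lambda);
            if (norm > radius) {
                for (int d = 0; d < w.length; d++) w[d] *= radius / norm;
            }
        }
        return w;
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int d = 0; d < a.length; d++) s += a[d] * b[d];
        return s;
    }
}
```

In the real implementation, learn would instead iterate over a KeLP dataset, read the vector selected via the representation parameter from each example, and store the learned hyperplane in the BinaryLinearClassifier returned by getPredictionFunction.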


References

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011.

Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the International Conference on Machine Learning, 2007.

Frank Rosenblatt. The Perceptron: a perceiving and recognizing automaton. Report 85-460-1, Cornell Aeronautical Laboratory, 1957.

Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585, December 2006.

Brian Kulis, Sugato Basu, Inderjit Dhillon, and Raymond Mooney. Semi-supervised graph clustering: A kernel approach. In Proceedings of ICML 2005, pages 457–464. ACM, 2005.

Giovanni Cavallanti, Nicolò Cesa-Bianchi, and Claudio Gentile. Tracking the best hyperplane with a simple budget perceptron. In Proceedings of the 19th Annual Conference on Learning Theory, pages 483–498. Springer-Verlag, 2006.

Francesco Orabona, Joseph Keshet, and Barbara Caputo. The Projectron: a bounded kernel-based perceptron. In Proceedings of ICML 2008, pages 720–727. ACM, 2008.