![Buchcover Transactions on Machine Learning and Data Mining | EAN 9783940501127 | ISBN 3-940501-12-3 | ISBN 978-3-940501-12-7](https://buch.isbn.de/cover/9783940501127.jpg)
Scientists, professional audience
Transactions on Machine Learning and Data Mining
Volume 2 - Number 2 - October 2009
edited by Petra Perner

This is the fourth issue of the International Journal on Machine Learning and Data Mining.
The issue presents two interesting approaches to hot topics in Machine Learning and Data Mining: one concerns feature selection, the other clustering.
Feature selection plays an important role when building a classifier. Too many features, or features of little importance, can lead to a classifier that does not perform well. Incorporating the classifier itself into the feature-selection process is an approach that has attracted the research community since the 1990s. Seredin et al. study feature selection in combination with a decision-rule classifier. They pose the task as a numerical optimization problem in which feature selection and the regularization of decision rules are combined into a single procedure.
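The general idea of folding feature selection and regularization into one optimization can be illustrated with a standard embedded method: an L1-penalized logistic regression fitted by proximal gradient descent, where the penalty drives the weights of uninformative features to exactly zero. This is a minimal sketch of that idea, not the method of Seredin et al.; the data, penalty strength, and thresholds are all illustrative assumptions.

```python
import numpy as np

# Toy data: 40 samples, 5 features; only features 0 and 1 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fit_l1_logreg(X, y, lam=0.1, lr=0.1, steps=2000):
    """Logistic regression with an L1 penalty (proximal gradient / ISTA).

    The L1 term zeroes out uninformative weights, so feature selection
    and regularization happen in a single optimization procedure --
    the embedded feature-selection idea described in the text.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of the logistic loss
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

w = fit_l1_logreg(X, y)
selected = [i for i in range(len(w)) if abs(w[i]) > 1e-6]
print("weights:", np.round(w, 2))
print("selected features:", selected)
```

The classifier and the feature subset come out of the same fit: features whose weights survive the shrinkage are "selected", with no separate filter or wrapper stage.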
Dvoenko studies two algorithms, the k-means clustering algorithm and Kozinets's linear decision rule, in a setting where the data are represented by pairwise similarities instead of by feature vectors. He argues that in many applications the pairwise similarity of two objects is much easier to obtain than the true features and an appropriate similarity measure. Therefore, we need algorithms that can work on the similarities instead of on the feature space. The two algorithms are modified accordingly and applied to different data sets. Finally, he presents the results and explains their meaning.
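One standard way to adapt the k-means idea to data given only as pairwise (dis)similarities is k-medoids, where each cluster "center" is the member object minimizing the total distance to the rest of its cluster. The sketch below is an illustration of that general idea, not Dvoenko's specific modification; the data and initialization are assumptions for the example.

```python
import numpy as np

# Two well-separated groups of points; we then discard the coordinates
# and keep only the pairwise distance matrix D, as in the similarity setting.
gen = np.random.default_rng(1)
pts = np.vstack([gen.normal(0.0, 0.5, (10, 2)), gen.normal(5.0, 0.5, (10, 2))])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

def kmedoids(D, k, steps=20, seed=0):
    """k-means-style clustering that uses only a pairwise distance matrix.

    Assignment: each object goes to its nearest medoid.
    Update: each cluster's medoid becomes the member with the smallest
    total distance to the other members -- no feature vectors needed.
    """
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(steps):
        labels = np.argmin(D[:, medoids], axis=1)
        new = np.array([
            np.where(labels == c)[0][
                np.argmin(D[np.ix_(labels == c, labels == c)].sum(axis=1))
            ]
            for c in range(k)
        ])
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels, medoids

labels, medoids = kmedoids(D, k=2)
print("labels:", labels)
```

Because every step consults only entries of `D`, the algorithm never needs the underlying features, which is exactly the operating condition the editorial describes.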