AnyBook4Less.com | Order from a Major Online Bookstore
Ultimate Book Price Comparison Engine: Save Your Time and Money
Title: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning)
by Bernhard Schölkopf, Alexander J. Smola
ISBN: 0-262-19475-9
Publisher: MIT Press
Pub. Date: 15 December, 2001
Format: Hardcover
Volumes: 1
List Price (USD): $65.00
Average Customer Rating: 5 (2 reviews)
Rating: 5
Summary: interesting introduction to support vector machine learning
Comment: The authors are young researchers who did their Ph.D. research in this rapidly developing branch of pattern recognition. Because they are young and at the state of the art in the field, the book has several advantages and disadvantages, and what I see as a disadvantage someone else might view as an advantage. Anyway, here is my view.
Advantage 1: Pattern recognition is a field of many disciplines. It has been studied by statisticians, mathematicians, probabilists, engineers, and computer scientists specializing in artificial intelligence. The field is old and has a long history, but each discipline has developed its own jargon, and the wheel has been reinvented many times. The advantage of this book is that these young scientists are not burdened by that awful history. They have learned and mastered their subject in essentially engineering jargon, but they include many concepts from statistics and statistical learning theory that are not common in engineering texts, such as robust regression, ridge regression, and spline estimation. Much of the classical statistical literature is cited. The book contains over 600 references, including much of the authors' own work.
Disadvantage 1: Because they are young, they miss some of the important historical literature and key texts. I found it a little disappointing that the bootstrap, a statistical tool that has played a major role in discriminant analysis (particularly in the estimation of classification error rates), was completely overlooked. Also, although many important texts on pattern recognition, machine learning, and discriminant analysis are cited, the fine text by McLachlan is overlooked, as is the recent relevant text by Hastie, Tibshirani, and Friedman.
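As an aside, the bootstrap technique the reviewer mentions can be sketched roughly as follows. This is an illustration only, not material from the book: the data set and the threshold "classifier" are made up, and a real application would resample a trained classifier's test predictions rather than a toy rule.

```python
import random

random.seed(0)

# (feature, label) pairs; two points are deliberately mislabeled, so the
# threshold rule below misclassifies 2 of 8 points (full-sample error 0.25)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 1),
        (0.6, 1), (0.7, 1), (0.8, 0), (0.9, 1)]

def error_rate(sample):
    # Toy rule: predict label 1 whenever the feature is >= 0.5
    wrong = sum(1 for x, y in sample if (1 if x >= 0.5 else 0) != y)
    return wrong / len(sample)

# Bootstrap: draw B resamples of the same size, with replacement, and
# average the error rate across resamples
B = 200
estimates = [error_rate([random.choice(data) for _ in data]) for _ in range(B)]
bootstrap_error = sum(estimates) / B
print(round(bootstrap_error, 3))  # close to the full-sample rate of 0.25
```

The averaged resample error also comes with a spread (the standard deviation of `estimates`), which is what makes the bootstrap useful for judging how reliable an error estimate is.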
Advantage 2: This book highlights the work of Vapnik and Chervonenkis and provides nice, concise descriptions that one can easily refer to when needed. The mathematics is deep and includes reproducing kernel Hilbert spaces and many important results from functional analysis and statistical theory.
Disadvantage 2: The authors are more experienced at writing professional papers than textbooks. Consequently, the book does not flow well, and the authors freely admit in their preface that it is best not to read the book in sequential order but rather to follow the reading plans suggested there, which differ based on the reader's background and interests.
Having said all this, for someone like me, who is very knowledgeable about statistical pattern recognition, this is a great text for getting up to speed on an exciting new area that I know very little about. I became curious about it when I started reading Vapnik recently.
I am hoping that a careful reading of this book will give me an intuition about why this approach that incorporates kernel methods can be a powerful tool in pattern recognition and classification.
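The intuition the reviewer is after rests largely on the "kernel trick": a kernel evaluates an inner product in a higher-dimensional feature space without ever constructing that space explicitly, so linear methods applied through the kernel become nonlinear in the original inputs. A minimal sketch (my illustration, not material from the book), using the quadratic kernel on 2-D inputs, whose implicit feature map is phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2):

```python
import math

def poly_kernel(x, z):
    # k(x, z) = (<x, z>)^2, evaluated directly in the 2-D input space
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    # The explicit 3-D feature map that poly_kernel implicitly uses
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

x, z = (1.0, 2.0), (3.0, 0.5)
implicit = poly_kernel(x, z)                           # 2 multiplications
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))  # 3-D inner product
print(implicit, explicit)  # both equal 16.0
```

For higher-degree kernels (or the Gaussian kernel, whose feature space is infinite-dimensional) the explicit map becomes intractable while the kernel evaluation stays cheap, which is the source of the method's power.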
This book should be a useful reference for anyone interested in this research area. It could be used in an engineering or statistics course in pattern recognition at either the undergraduate or graduate level, depending on what material is covered.
In a recent communication with Bernhard Schölkopf, I learned that his book was sent for publication before the Hastie et al. book went to press; that is the only reason it wasn't referenced. I think that point is worth mentioning in this edit of the review. Also, on reflection, I do not think the disadvantages are great enough to remove a star, so it is 5 stars for them.
I can only hope that they will reference the work of McLachlan and Hastie et al. in their future books and research on this subject.
Rating: 5
Summary: Detailed and comprehensive
Comment: This book should be on the bookshelf of anyone interested in kernel methods. The authors have written a textbook clear and comprehensive enough to be well suited to graduate students, yet it also contains the state-of-the-art results of the field, and its scope is wider than that of other similar books. For this reason, it should be very useful to any researcher in machine learning.
Title: An Introduction to Support Vector Machines and Other Kernel-based Learning Methods
by Nello Cristianini, John Shawe-Taylor
ISBN: 0521780195
Publisher: Cambridge University Press
Pub. Date: 23 March, 2000
List Price (USD): $53.00
Title: The Elements of Statistical Learning
by T. Hastie, R. Tibshirani, J. H. Friedman
ISBN: 0387952845
Publisher: Springer Verlag
Pub. Date: 09 August, 2001
List Price (USD): $79.95
Title: Statistical Learning Theory
by Vladimir N. Vapnik
ISBN: 0471030031
Publisher: Wiley-Interscience
Pub. Date: 16 September, 1998
List Price (USD): $115.00
Title: Learning Kernel Classifiers: Theory and Algorithms (Adaptive Computation and Machine Learning)
by Ralf Herbrich
ISBN: 026208306X
Publisher: MIT Press
Pub. Date: 15 December, 2001
List Price (USD): $42.00
Title: The Nature of Statistical Learning Theory (Statistics for Engineering and Information Science)
by Vladimir Naumovich Vapnik
ISBN: 0387987800
Publisher: Springer Verlag
Pub. Date: December, 1999
List Price (USD): $79.95
Thank you for visiting www.AnyBook4Less.com and enjoy your savings!
Copyright © 2001-2021