Support Vector Machines (SVMs) are supervised learning techniques used for classification and regression. They have become increasingly popular among researchers thanks to their high performance and generalization ability. SVM theory applies to a variety of problems, including binary and multi-class classification, pattern recognition, data clustering and categorization, data mining, statistical learning and detection, information retrieval, and predictive control, spanning fields from computer science and engineering, through financial modeling and prediction, to bioinformatics and bioengineering. SVMs are also a tool of choice in blind steganalysis, which aims to detect whether data has been hidden in images.
The focus of this course is on gaining practical experience with SVMs, as well as on understanding the core concepts on which the theory is built. The course will cover the following topics:
- Introduction to hard-margin and soft-margin SVMs
- Kernels and feature spaces
- Loss functions and regularization
- Basics of optimization and quadratic programming
- Elements of statistical learning theory and generalization theory
- Introduction to steganography and application of SVMs to feature-based steganalysis
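To give a concrete flavor of the first topic, the sketch below trains a soft-margin linear SVM on 2-D data by subgradient descent on the regularized hinge loss (a Pegasos-style approach). This is an illustrative assumption, not course material: the course itself uses MATLAB, and all function names and parameter values here are hypothetical choices for the example.

```python
def train_linear_svm(points, labels, lam=0.01, epochs=200, lr=0.1):
    """Soft-margin linear SVM in 2-D, trained by subgradient descent
    on the hinge loss with L2 regularization (illustrative sketch).
    labels must be +1 or -1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:
                # Point violates the margin: hinge-loss subgradient step
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:
                # Point is safely classified: only the regularizer acts
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def predict(w, b, x):
    """Classify a point by the sign of the decision function w.x + b."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1
```

A real quadratic-programming solver (as covered later in the course) would find the maximum-margin hyperplane exactly; this gradient-based sketch only approximates it, but it illustrates the roles of the hinge loss and the regularization parameter.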
Linear algebra, calculus and elementary statistics. Basics of programming (MATLAB).
- Jan Kodovský, office LSG-606, email: firstname.lastname@example.org
- Jessica Fridrich, office EB-Q16, email: email@example.com
TR 2:50 pm - 4:15 pm, LH-12, January 25 - May 07, 2010