


RegML 

Regularization Methods for High Dimensional Learning

A course organized within the PhD Program in Computer Science for the
PhD School in Sciences and Technologies for Information and Knowledge (STIC)
and the PhD School in Life and Humanoid Technologies
Dates and registration

The course will be held on June 3-7, 2013.

Registration for the course is closed.

Course at a Glance

Regularization Methods for High Dimensional Learning (RegML) is a 20-hour course including practical laboratory sessions.

The course covers the foundations as well as recent advances in Computational Learning, with particular emphasis on the analysis of high dimensional data, and focuses on a set of core techniques, namely regularization methods.

See the synopsis and the syllabus for more details.

The course is co-organized by the SLIPGURU group at the University of Genova and the IIT@MIT Lab, a joint lab between the Istituto Italiano di Tecnologia (IIT) and the Massachusetts Institute of Technology (MIT).

Instructors
Francesca Odone -- University of Genova, francesca.odone@unige.it
Lorenzo Rosasco -- Istituto Italiano di Tecnologia (IIT) and Massachusetts Institute of Technology (MIT), lrosasco@mit.edu
Venue

The course will be held at the Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS) of the University of Genova, in Via Dodecaneso 35, 16146 Genova.

When looking for directions keep this address in mind, since DIBRIS has multiple locations (not so close to one another). Here you can find directions and travelling information.

Here you can find a list of hotels near the department (~20' walk) or in the city centre (~20' by bus).

NEW!! Here is a list of places where you can go for lunch. And here is a link to the online Google map.

Genova is in the region of Liguria, on the Italian Riviera (see here or here for some nice pictures and a video).

Synopsis

Understanding how intelligence works and how it can be emulated in machines is an age-old dream and arguably one of the biggest challenges in modern science. Learning, with its principles and computational implementations, is at the very core of this endeavor. Recently, for the first time, we have been able to develop artificial intelligence systems that solve complex tasks considered out of reach for decades. Modern cameras recognize faces, smartphones recognize voice commands, cars can "see" and detect pedestrians, and ATMs automatically read checks. In most cases, at the root of these success stories there are machine learning algorithms, that is, software that is trained rather than programmed to solve a task.

Among the variety of approaches to modern computational learning, we focus on regularization techniques, which are key to high-dimensional learning. Regularization methods make it possible to treat a huge class of diverse approaches in a unified way, while providing tools to design new ones.
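As a concrete instance, here is a minimal sketch (in Matlab, the language used in the labs) of kernel regularized least squares, the prototypical regularization method, where a single parameter lambda trades off data fit against smoothness of the solution. The inputs Xtr, ytr, Xte, the kernel width sigma, and the parameter lambda are placeholders to be supplied by the user, not material from the course.

    % Minimal sketch of kernel regularized least squares (Tikhonov
    % regularization): given training data Xtr (n x d) and ytr (n x 1),
    % solve (K + n*lambda*I) c = y, then predict f(x) = sum_i c_i k(x_i, x).
    % Xtr, ytr, Xte, sigma and lambda are assumed given.
    n  = size(Xtr, 1);
    sq = sum(Xtr.^2, 2);
    D  = bsxfun(@plus, sq, sq') - 2*(Xtr*Xtr');   % pairwise squared distances
    K  = exp(-D / (2*sigma^2));                   % Gaussian kernel matrix
    c  = (K + n*lambda*eye(n)) \ ytr;             % regularized linear system
    sqte  = sum(Xte.^2, 2);
    Dte   = bsxfun(@plus, sqte, sq') - 2*(Xte*Xtr');
    ypred = exp(-Dte / (2*sigma^2)) * c;          % predictions on test points

Larger values of lambda yield smoother, more stable solutions; smaller values fit the training data more closely. How to choose lambda is the subject of class C4 and of Lab 1.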

Starting from classical notions of smoothness, shrinkage and margin, the course will cover state-of-the-art techniques based on the concepts of geometry (aka manifold learning) and sparsity, and a variety of algorithms for supervised learning, feature selection, structured prediction, multitask learning and model selection. Practical applications for high dimensional problems will be discussed.
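To make the shrinkage and sparsity ideas concrete, the following illustrative sketch (again Matlab, with X, y, lambda and maxiter as assumed inputs) implements iterative soft thresholding for l1-regularized least squares, a representative sparsity-based method in the spirit of classes C9 and C11.

    % Illustrative sketch: iterative soft thresholding (ISTA) for
    % l1-regularized least squares,
    %   min_w  (1/(2n)) * ||X*w - y||^2 + lambda * ||w||_1.
    % X (n x d), y (n x 1), lambda and maxiter are assumed given.
    [n, d] = size(X);
    step = n / norm(X)^2;      % step size = 1 / Lipschitz constant of gradient
    w = zeros(d, 1);
    for t = 1:maxiter
        v = w - step * (X' * (X*w - y)) / n;          % gradient step
        w = sign(v) .* max(abs(v) - step*lambda, 0);  % soft thresholding (shrinkage)
    end

The thresholding step literally shrinks coefficients toward zero and sets small ones exactly to zero, which is how sparsity, and hence feature selection, arises.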

The classes will focus on algorithmic and methodological aspects, while trying to give an idea of the theoretical underpinnings. Practical laboratory sessions will give the opportunity to gain hands-on experience.

Slides of the classes will be posted on this website and scribes of most classes, as well as other material, can be found on the 9.520 course webpage at MIT.

Syllabus

- each class is 90 min., no breaks -

class 1 (C1) Welcome. Introduction to Learning
class 2 (C2) Kernels, Dictionaries, and Regularization
class 3 (C3) Regularization Networks and Support Vector Machines
class 4 (C4) Error Analysis and Parameter Choice
class 5 (C5) Lab 1 - Binary Classification and Model Selection
class 6 (C6) Spectral Methods for Supervised Learning
class 7 (C7) Multi-Output Learning
class 8 (C8) Lab 2 - Spectral Filters and Multi-Class Classification
class 9 (C9) Sparsity-Based Regularization
class 10 (C10) Multiple Kernel Learning
class 11 (C11) Lab 3 - Sparsity-Based Learning
class 12 (C12) Manifold Regularization
class 13 (C13) Applications to High Dimensional Problems

Schedule and rooms

             MON 3     TUE 4     WED 5     THU 6     FRI 7
9:30-11:00   -         C3        C6        C9        C12
11:30-13:00  C1        C4        C7        C10       C13
14:30-16:00  C2        C5 (lab)  C8 (lab)  C11 (lab) -

- all classes in room 322 (conference room) - 3rd floor

Credits and Exam (optional)
If you attend most of the classes you will be awarded 2 credits (according to the ECTS grading scale). The credit award will be reported on the certificate of attendance we will hand out at the end of the course.

If you need an evaluation, the exam will consist of a brief report (~5 pages + 1 page of figures) on the labs.
Submission deadlines: 15/09/2013 and 01/12/2013. Submit your report (one or multiple authors are fine) by sending an email to both Francesca and Lorenzo, specifying the type of evaluation you need (e.g., pass / ranking / mark ...).
Prerequisites
Multivariate Calculus, Basic Probability Theory, Matlab.
Short reading list

General references are

  • O. Bousquet, S. Boucheron, and G. Lugosi. Introduction to Statistical Learning Theory. In Advanced Lectures on Machine Learning, Lecture Notes in Artificial Intelligence 3176, 169-207, Heidelberg, Germany, 2004.
  • F. Cucker and S. Smale. On the Mathematical Foundations of Learning. Bulletin of the American Mathematical Society, 2002.
  • T. Evgeniou, M. Pontil, and T. Poggio. Regularization Networks and Support Vector Machines. Advances in Computational Mathematics, 2000.
  • T. Poggio and S. Smale. The Mathematics of Learning: Dealing with Data. Notices of the AMS, 2003.
  • L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1997.
  • V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
  • T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning. Springer, 2001.
  • I. Steinwart and A. Christmann. Support Vector Machines. Springer, New York, 2008.
  • F. Cucker and D.-X. Zhou. Learning Theory: An Approximation Theory Viewpoint. With a foreword by Stephen Smale. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, 2007.
Photos ... spot the differences!
[photos: unige, iit]