Poggio T., Voinea S., Rosasco L.
Cornell University Library, 2011
arXiv:1105.4701v2 [cs.LG]
May 25, 2011, revised September 9, 2011
Abstract: In batch learning, stability together with existence and uniqueness of the solution corresponds to well-posedness of Empirical Risk Minimization (ERM) methods; recently, it was proved that CVloo stability is necessary and sufficient for generalization and consistency of ERM ([9]). In this note, we introduce CVon stability, which plays a similar role in online learning. We show that stochastic gradient descent (SGD) with the usual hypotheses is CVon stable, and we then discuss the implications of CVon stability for convergence of SGD.
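To fix notation for the reader, the online update analyzed here is the standard stochastic gradient step. The following is a minimal sketch in generic notation; the symbols $\gamma_n$ (step size) and $V$ (loss function) are our labels and may differ from the paper's own:

\[
f_{n+1} \;=\; f_n \;-\; \gamma_n \,\nabla V\!\left(f_n(x_n),\, y_n\right),
\]

where $(x_n, y_n)$ is the training example revealed at step $n$. "The usual hypotheses" typically refer to step-size conditions of Robbins-Monro type, e.g. $\sum_n \gamma_n = \infty$ and $\sum_n \gamma_n^2 < \infty$ (our reading of the standard setting, not a claim taken from the abstract itself).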
This report describes research done within the Center for Biological & Computational Learning in the Department of Brain & Cognitive Sciences and in the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research was sponsored by grants from AFOSR, DARPA, and NSF. Additional support was provided by Honda R&D Co., Ltd., Siemens Corporate Research, Inc., IIT, and the McDermott Chair.