This page is dedicated to starting discussions about the article "Complexity versus Agreement for Many Views".
Feel free to post any comment, suggestion, question, correction, extension... I will enjoy discussing it with you.
"The paper considers the problem of semi-supervised multi-view classification, where each view corresponds to a Reproducing Kernel Hilbert Space. An algorithm based on co-regularization methods with extra penalty terms reflecting smoothness and general agreement properties is proposed. We first provide explicit tight control on the Rademacher (L1) complexity of the corresponding class of learners for arbitrarily many views, then give the asymptotic behavior of the bounds when the co-regularization term increases, making explicit the relation between consistency of the views and reduction of the search space. Third, we provide an illustration through simulations on toy examples. With many views, a parameter selection procedure is based on the stability approach with clustering and localization arguments. The last new result is an explicit bound on the L2-diameter of the class of functions."
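To make the flavor of the co-regularization objective concrete, here is a minimal sketch for two linear views: a squared loss on labeled points, a norm (smoothness) penalty weighted by `lam`, and an agreement penalty weighted by `mu` that pulls the two views' predictions together on unlabeled points. Everything here (variable names, synthetic data, linear rather than general RKHS views) is an illustrative assumption, not the paper's exact algorithm or notation.

```python
import numpy as np

# Illustrative two-view co-regularized least squares (assumed setup,
# not the paper's algorithm): linear views stand in for RKHS views.
rng = np.random.default_rng(0)

n_lab, n_unlab, d = 20, 80, 5
X1 = rng.normal(size=(n_lab + n_unlab, d))   # view 1 features
X2 = X1 + 0.1 * rng.normal(size=X1.shape)    # view 2: a noisy copy, so views roughly agree
w_true = rng.normal(size=d)
y = X1[:n_lab] @ w_true                      # labels exist only for the first n_lab points

lam = 1e-2   # smoothness (norm) penalty weight
mu = 1.0     # co-regularization (agreement) penalty weight

def objective(w1, w2):
    fit = (np.sum((X1[:n_lab] @ w1 - y) ** 2)
           + np.sum((X2[:n_lab] @ w2 - y) ** 2))
    smooth = lam * (w1 @ w1 + w2 @ w2)
    agree = mu * np.sum((X1[n_lab:] @ w1 - X2[n_lab:] @ w2) ** 2)
    return fit + smooth + agree

# The objective is jointly quadratic in (w1, w2), so the minimizer
# solves the stacked normal equations.
A11 = X1[:n_lab].T @ X1[:n_lab] + lam * np.eye(d) + mu * X1[n_lab:].T @ X1[n_lab:]
A22 = X2[:n_lab].T @ X2[:n_lab] + lam * np.eye(d) + mu * X2[n_lab:].T @ X2[n_lab:]
A12 = -mu * X1[n_lab:].T @ X2[n_lab:]
A = np.block([[A11, A12], [A12.T, A22]])
b = np.concatenate([X1[:n_lab].T @ y, X2[:n_lab].T @ y])
w = np.linalg.solve(A, b)
w1, w2 = w[:d], w[d:]

# Increasing mu shrinks the disagreement between views on unlabeled
# data, which is the mechanism behind the reduced search space.
disagreement = np.mean((X1[n_lab:] @ w1 - X2[n_lab:] @ w2) ** 2)
print(disagreement)
```

As `mu` grows, the agreement term dominates and the effective hypothesis class shrinks toward functions on which the views are consistent, which is the trade-off the bounds in the paper quantify.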
- Future work:
It would be interesting to compare the selection method proposed here, which uses a data-driven penalty, with standard model selection methods (P. Massart, S. Arlot, etc.).