Deep Neural Nets with Randomized Learning Techniques

Description

Deep neural nets (DNNs) have undoubtedly established themselves as outstanding tools in the machine learning community. Training DNNs on millions of instances, however, imposes a huge computational burden, because gradient-descent algorithms need a large number of iterations to reach a local minimum, even with modern GPU/TPU hardware acceleration. Some researchers and practitioners have therefore turned to randomized learning techniques (RLTs) to develop fast algorithms for training DNNs (in a broad sense, the random weight initializations routinely used in DNN training can be viewed as RLTs). Others have paved the way for studying theoretically the capabilities of DNNs, especially modern over-parameterized ones, which exhibit intriguing phenomena such as the ability to fit random labels and double descent.

It is important, for both the academic and industrial communities, to investigate theoretically the principles behind the feasibility of randomness in DNNs (e.g., the representational capabilities of DNNs in which some weights are randomly assigned) and to develop advanced randomized learning techniques capable of building powerful DNNs while avoiding high computational cost. Equally important is the empirical demonstration that randomized learning techniques and strategies can speed up the training of DNNs without degrading performance, and the verification of their possible advantages in real-world applications with large-scale training datasets and/or real-time processing requirements.

This session calls for contributions that provide theoretical studies and fundamentals, algorithmic developments with advanced applications, and the implementation and design of user-friendly computing tools/platforms, in order to boost the use of randomized learning techniques in DNNs and to motivate new insights into, and interpretations of, the role of randomness in DNNs.
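To make the notion of randomized learning concrete, the following minimal sketch (an illustration added here, not part of the session description; the data, parameter choices, and variable names are our own assumptions) trains a single-hidden-layer network whose hidden weights are assigned at random and kept fixed, so that only the linear output layer is fit, in closed form, by regularized least squares instead of iterative gradient descent.

```python
# Minimal sketch of a "neural net with random weights" (NNRW): random, frozen
# hidden layer + closed-form ridge-regression readout. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (purely illustrative)
X = rng.uniform(-1.0, 1.0, size=(500, 4))
y = np.sin(X.sum(axis=1))

n_hidden, reg = 200, 1e-3

# Hidden-layer weights and biases are drawn at random and never updated
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.uniform(-1.0, 1.0, size=n_hidden)

H = np.tanh(X @ W + b)  # random hidden features, shape (500, n_hidden)

# Only the output weights are learned, via ridge regression in closed form
beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)

y_hat = H @ beta
print("training MSE:", float(np.mean((y - y_hat) ** 2)))
```

Deep randomized architectures extend this idea by stacking several randomly parameterized layers while keeping the trained components lightweight, which is the broader setting the session addresses.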

Topics of interest include, but are not limited to, the following:

      • Neural nets with random weights (NNRWs)

      • Recurrent and recursive neural nets with random features

      • Autoencoding with random features

      • Deep neural nets with randomness

      • Representation learning with random features

      • Kernel approximation with random features (see the sketch after this list)

      • Learning theory of NNRWs

      • Metric learning with random features

      • Randomized dimensionality reduction techniques

      • Interpretability/explainability of NNRWs

      • Federated learning with random features

      • Random learning methods for geometric deep learning

      • Randomized techniques with applications in CV and NLP
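As a concrete pointer for the kernel approximation topic above, the sketch below (an illustrative addition, not prescribed by the call; the function name and parameter choices are assumptions) uses the standard random Fourier feature construction of Rahimi and Recht to approximate the Gaussian (RBF) kernel with an explicit randomized feature map.

```python
# Random Fourier features approximating the RBF kernel
# k(x, z) = exp(-||x - z||^2 / (2 * sigma^2)). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def random_fourier_features(X, n_features=2000, sigma=1.0, rng=rng):
    """Map X (n_samples, d) to features whose inner products approximate
    the RBF kernel with bandwidth sigma."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(6, 3))
Z = random_fourier_features(X)

# Compare the randomized approximation Z Z^T with the exact RBF kernel (sigma = 1)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_exact = np.exp(-sq_dists / 2.0)
print("max approximation error:", float(np.abs(Z @ Z.T - K_exact).max()))
```

Increasing n_features tightens the approximation at the cost of more random projections, a typical accuracy/efficiency trade-off in randomized learning.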

Organizers:

Ming Li, Zhejiang Normal University, China.

Email: mingli@zjnu.edu.cn

Ming Li received his PhD degree from the Department of Computer Science and IT at La Trobe University, Australia. He is currently a “Shuang Long Scholar” Distinguished Professor with the Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, China. He has published in top-tier journals and conferences, including IEEE TCYB (one paper ranked as an ESI Highly Cited Paper), ACM TMOS, IEEE TII, Neural Networks, Information Sciences, NeurIPS, and ICML. He is a member of IEEE, the China Computer Federation (CCF), and the Chinese Association for Artificial Intelligence (CAAI), and an accredited member of the Australian Mathematical Society (AustMS). He is a regular reviewer for top journals including IEEE TNNLS, IEEE TCYB, IEEE TKDE, Neural Networks, Information Sciences, and Neurocomputing. His research interests include machine learning, neural networks for graphs, graph representation learning, randomized learning algorithms, educational data analytics, and approximation theory. As a leading guest editor, he is currently organizing the special issue “Deep Neural Networks for Graphs: Theory, Models, Algorithms and Applications” in IEEE TNNLS.

Giorgio Stefano Gnecco, IMT School for Advanced Studies, AXES Research Unit, Lucca, Italy

Email: giorgio.gnecco@imtlucca.it

Giorgio Gnecco obtained the Laurea (M.Sc.) degree cum laude in Telecommunications Engineering and the Ph.D. degree in Mathematics and Applications, both from the University of Genoa, Italy. Since 2020, he has been Associate Professor in Operations Research at IMT Lucca, Italy, and Action Editor of the international journal Neural Networks, having been previously Assistant Professor at IMT Lucca from 2013 to 2020, and Associate Editor of the international journal IEEE Transactions on Neural Networks and Learning Systems from 2013 to 2019. He is currently Guest Editor of the special issue “Deep Neural Networks for Graphs: Theory, Models, Algorithms and Applications” in IEEE Transactions on Neural Networks and Learning Systems. His scientific production includes the coauthored book “Neural Approximations for Optimal Control and Decision” (Springer, 2020), 80 papers in international journals, 16 international book chapters, and 70 international conference papers/abstracts. His current research interests include machine learning theory and applications, neural networks, big data, multi-agent control systems, game theory, graph theory, and optimization applied to telecommunications networks, to economics, and to civil engineering.

Marcello Sanguineti, University of Genoa, Italy

Email: marcello.sanguineti@unige.it

Marcello Sanguineti (Ph.D. in Electronic Engineering and Computer Science) is Full Professor in Operations Research at DIBRIS, University of Genoa and Research Associate at INM - National Research Council of Italy. He is also Visiting Professor at IMT - School for Advanced Studies, Lucca (Italy) and Research Associate at IIT - Italian Institute of Technology. He is Associate Editor of IEEE Transactions on Neural Networks and Learning Systems, Neural Networks, Neurocomputing, and Neural Processing Letters. He is currently Lead Guest Editor of the special issue “Optimization in Machine Learning” in the international journal Soft Computing and Guest Editor of the special issue “Deep Neural Networks for Graphs: Theory, Models, Algorithms and Applications” in IEEE Transactions on Neural Networks and Learning Systems. He served as a Guest Editor for the international journals Computers and Operations Research and Computational Management Science. He co-authored more than 200 research papers in archival journals, book chapters, and international conference proceedings and the book “Neural Approximations for Optimal Control and Decision” (Springer, 2020). He was the Chair of the Organizing Committee of the Conference ICNPAA2008 and member of the Organizing Committees of the conferences AIRO2007 and ODS2019. He coordinated several international research projects on mathematics of neural computation and approximate optimization. His main research interests are machine learning, neural networks for optimization, infinite-dimensional programming, network and team optimization, effective computing, and game-theoretical models.
