Optimization-based learning theory


Supervisor: Associate Professor Dr. Nitanda Atsushi

  • Programme: A*STAR Graduate Scholarship
  • Research Institute: Centre for Frontier Artificial Intelligence Research (CFAR)

Project Description

Optimization-based learning theory is a recent trend in the deep learning theory community because the choice of optimization method determines the generalization performance of overparameterized models. Stochastic gradient descent (SGD) is known to perform well for training neural networks and often outperforms other learning methodologies. A fundamental question, therefore, is why SGD performs so well for overparameterized models. To address it, we work theoretically on the following two questions:
(Global convergence): Why does SGD converge to an optimal solution of the empirical loss despite the nonconvex loss landscape?
(Implicit bias): Why does SGD converge to a solution that generalizes well, out of the many solutions that fit the training data?
Analyzing diffusion models and LLMs is also within the scope of this project; a minimal sketch illustrating the two questions follows below.
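
To make the questions concrete, here is a minimal sketch, not part of the project description: plain SGD training an overparameterized two-layer ReLU network in NumPy. All names, dimensions, and hyperparameters (n, d, m, eta) are illustrative assumptions. With far more parameters than samples, the empirical loss is nonconvex in the weights, yet SGD typically drives it close to zero, which is the global-convergence question; which of the many interpolating solutions it reaches is the implicit-bias question.

    # Illustrative sketch only; sizes and step size are assumptions, not project settings.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 20, 5, 200                  # samples, input dim, hidden width (m >> n)
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)            # arbitrary targets; m >> n permits interpolation

    W = rng.standard_normal((m, d)) / np.sqrt(d)  # first-layer weights
    a = rng.standard_normal(m) / np.sqrt(m)       # second-layer weights
    eta = 5e-3                                    # step size (illustrative)

    def forward(x):
        h = np.maximum(W @ x, 0.0)        # ReLU features
        return a @ h, h

    for step in range(5000):
        i = rng.integers(n)               # draw one example: the "stochastic" in SGD
        pred, h = forward(X[i])
        err = pred - y[i]                 # residual of the squared loss 0.5 * err**2
        grad_a = err * h                  # gradient w.r.t. second-layer weights
        grad_W = err * np.outer(a * (h > 0.0), X[i])  # gradient w.r.t. first layer
        a -= eta * grad_a                 # SGD update: theta <- theta - eta * grad
        W -= eta * grad_W

    loss = np.mean([(forward(x)[0] - t) ** 2 for x, t in zip(X, y)]) / 2
    print(f"empirical loss after SGD: {loss:.2e}")  # typically close to zero

Single-sample updates are used here deliberately so that the stochastic element of SGD is explicit; rerunning with different seeds reaches different near-zero-loss solutions, which is precisely the space of solutions the implicit-bias question concerns.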

FAQs

For NSS (BS): When can I start applying for the scholarship?

Application for the NSS (BS) commences on 1 July every year and closes on 1 March of the following year.

Shortlisted applicants will be interviewed between March and May.

Do I need to secure a place at an overseas university before applying for the scholarship?

No, you may apply for the scholarship even if you have not secured admission to any university yet.

Please note that you should only accept a university offer after obtaining A*STAR’s approval for your choice of university and course of study.