Focusing on the application of probabilistic and statistical principles to generative modeling, this talk explores, step by step, the connections between different generative models within a statistical learning framework, including score-based and flow-based generative models. The talk begins with basic concepts to ensure the audience has the required mathematical background, and provides intuitive explanations as it progresses to the key theories. Finally, based on an in-depth analysis of existing work on generative modeling, it suggests guidelines for theoretical machine learning research, aiming to offer practical takeaways for participants.

In recent years, we have witnessed the rapid development of AI-generated content (AIGC), which has been widely adopted across industries thanks to its impressive generative capabilities.

Closely related to this, the theoretical foundation behind AIGC, generative modeling, has received growing attention from the academic community. Much of this theoretical work has been translated into models that achieve state-of-the-art performance in applications: generative models are now widely used to generate high-fidelity images, synthesize realistic speech and music clips, produce high-quality videos, and more.

Although countless studies on generative models exist, they can all be described in the same statistical and probabilistic language.

Some of these widely used models even have tight, mutually translatable relationships with each other, yet the mathematics behind these relationships is rarely presented in an intuitive, easy-to-understand way.
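One concrete instance of such a relationship (a standard result in the score-based generative modeling literature, sketched here for illustration; it is not spelled out in this abstract): a score-based model defined by a forward diffusion SDE admits an equivalent deterministic flow, the probability flow ODE, that shares the same marginal densities $p_t(x)$, directly linking score-based and flow-based models:

```latex
% Forward (noising) SDE with drift f and diffusion coefficient g:
\mathrm{d}x = f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w

% Equivalent probability flow ODE with the same marginals p_t(x):
\frac{\mathrm{d}x}{\mathrm{d}t} = f(x, t) - \tfrac{1}{2}\, g(t)^2\, \nabla_x \log p_t(x)
```

Once the score $\nabla_x \log p_t(x)$ is learned, the same trained network can therefore be sampled either stochastically (via the reverse-time SDE) or deterministically (via the ODE, as a continuous normalizing flow).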

Meanwhile, in industrial applications, engineers often treat different generative models as parallel, independent approaches and pay little attention to the connections between them; on the other hand, part of the theoretical research in machine learning focuses only on verifying experimental results, rather than letting theory guide the experiments.