This talk will be an introduction to the root concepts of machine learning, starting with simple statistics and working up through parameter estimation, regression, model selection, and basic classification. These methods underpin many machine learning techniques, yet clear and concise explanations of them are often hard to find.
Parameter estimation will cover Gaussian parameter estimation in three cases: known variance with unknown mean; known mean with unknown variance; and unknown mean with unknown variance.
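As a taste of the simplest case, the maximum-likelihood estimates for the unknown-mean, unknown-variance Gaussian are just the sample mean and the (biased) sample variance. This is an illustrative sketch only; the talk may well develop the Bayesian estimators instead:

```python
def gaussian_mle(samples):
    """Maximum-likelihood mean and variance of a 1-D Gaussian sample."""
    n = len(samples)
    mean = sum(samples) / n
    # The MLE variance divides by n; dividing by n - 1 would give
    # the unbiased sample variance instead.
    variance = sum((x - mean) ** 2 for x in samples) / n
    return mean, variance

data = [2.1, 1.9, 2.4, 2.0, 1.6]   # made-up example data
mu, var = gaussian_mle(data)
```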
Regression will cover linear regression, linear regression with alternate basis functions, Bayesian linear regression, and Bayesian linear regression with model selection.
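Ordinary least squares, the starting point for all of these, has a closed form in one dimension. A minimal sketch (the variable names and toy data here are my own, not the talk's):

```python
def linfit(xs, ys):
    """Closed-form least-squares fit of y ≈ w0 + w1 * x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    w1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    # Intercept: the fitted line passes through the mean point.
    w0 = my - w1 * mx
    return w0, w1

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 1 + 2x
w0, w1 = linfit(xs, ys)
```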
Classification will build on regression, covering k-means clustering, linear discriminants, logistic regression, and support vector machines, with some discussion of relevance vector machines for “soft” decision making.
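Of the methods above, k-means is the simplest to state: alternate between assigning points to their nearest center and moving each center to the mean of its points (Lloyd's algorithm). A one-dimensional sketch under my own toy data, not drawn from the talk:

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm on 1-D data: assign, then re-average."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # Assign each point to its nearest center.
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster
        # (keep the old center if the cluster is empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

pts = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers = kmeans_1d(pts, [0.0, 10.0])
```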
Starting from simple statistics and working upward, I hope to provide a clear grounding in how basic machine learning works mathematically. Understanding the math behind parameter estimation, regression, and classification makes the more complicated methods easier to grasp. This should help demystify modern approaches to machine learning, leading to better technique selection in real-world applications.