Mathematics for ML
This post breaks down the key mathematical concepts behind machine learning: linear algebra, optimization, and probability theory.
Linear Algebra:
* Vectors: These are arrays of numbers arranged in a column or row. They represent quantities with both magnitude and direction.
* Matrices: These are rectangular arrays of numbers. They are used to represent linear transformations and systems of equations.
* Norms: Norms measure the size or magnitude of a vector. Common norms include the L1 norm (sum of absolute values), L2 norm (Euclidean distance), and infinity norm (maximum absolute value).
* Subspaces: These are subsets of a vector space that are closed under addition and scalar multiplication. They are fundamental to understanding the structure of vector spaces.
* Projections: Projections are operations that map vectors onto a subspace. They are used in dimensionality reduction techniques like Principal Component Analysis (PCA).
* Singular Value Decomposition (SVD) and Eigenvalue Decomposition (EVD): These are matrix factorization techniques that decompose a matrix into simpler components. SVD applies to any rectangular matrix and is used in data compression and dimensionality reduction, while EVD applies to square matrices and is used in analyzing linear transformations and dynamical systems.
* Derivatives of Matrices and Vector Derivative Identities: These concepts extend the idea of derivatives from scalar functions to matrices and vectors. They are essential in optimization problems and machine learning algorithms.
* Least Squares: This is a method for finding the best-fitting line or curve to a set of data points. It minimizes the sum of the squared errors between the predicted values and the actual data points.
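Several of the linear algebra ideas above can be tried directly in NumPy. The sketch below (with a small made-up matrix, purely for illustration) computes vector norms, an orthogonal projection onto a column space, an SVD, and a least-squares fit:

```python
import numpy as np

# A small data matrix with linearly independent columns.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
v = np.array([3.0, -4.0])

# Norms measure the size of a vector in different ways.
l1 = np.linalg.norm(v, 1)         # L1: sum of absolute values -> 7.0
l2 = np.linalg.norm(v)            # L2: Euclidean length -> 5.0
linf = np.linalg.norm(v, np.inf)  # infinity norm: max absolute value -> 4.0

# Orthogonal projection of b onto the column space of A:
# P = A (A^T A)^{-1} A^T, so P @ b lies in that subspace.
b = np.array([1.0, 0.0, 1.0])
P = A @ np.linalg.inv(A.T @ A) @ A.T
b_proj = P @ b

# SVD: A = U diag(s) V^T.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Least squares: x minimizing ||Ax - b||_2.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
```

Note that `lstsq` and the projection formula solve the same problem: `A @ x` equals the projection of `b` onto the column space of `A`.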
Optimization:
* Constrained and Unconstrained Optimization: Constrained optimization involves finding the optimal solution while satisfying certain constraints, while unconstrained optimization does not have any constraints.
* Maxima and Minima: These are points where a function reaches its highest or lowest value.
* Convex and Non-Convex: For a convex function, every local minimum is also a global minimum, while a non-convex function can have multiple local minima that are not global.
* Gradient and Hessian: The gradient is a vector that points in the direction of steepest ascent, while the Hessian is a matrix that describes the curvature of the function.
* Positive Definite and Semi-Definite: These are properties of matrices that relate to their eigenvalues. They are important in determining the convexity of a function.
* Second Derivative Test: This test uses the Hessian matrix to determine whether a critical point is a maximum, minimum, or saddle point.
* Steepest Descent, Adam, AdaGrad, RMSProp, and KKT: These are optimization algorithms used to find the minimum or maximum of a function. Steepest descent takes steps in the direction of the negative gradient, while Adam, AdaGrad, and RMSProp are adaptive optimization algorithms that adjust their step size based on the gradient. KKT conditions are necessary conditions for optimality in constrained optimization problems.
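Steepest descent and the second-derivative test can be illustrated on a convex quadratic, where everything is checkable in closed form. A minimal sketch (the matrix and step size are arbitrary choices for the example):

```python
import numpy as np

# Convex quadratic: f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric
b = np.array([1.0, 1.0])

# Second-derivative test: the Hessian of f is Q everywhere.
# All eigenvalues positive -> Q is positive definite -> f is
# convex and its unique critical point is the global minimum.
assert np.all(np.linalg.eigvalsh(Q) > 0)

def grad(x):
    # Gradient of f: points in the direction of steepest ascent.
    return Q @ x - b

# Steepest descent: repeatedly step along the negative gradient.
x = np.zeros(2)
lr = 0.1  # fixed step size, small enough for convergence here
for _ in range(500):
    x = x - lr * grad(x)

# Compare against the closed-form minimizer, where grad(x) = 0.
x_star = np.linalg.solve(Q, b)
```

Adam, AdaGrad, and RMSProp follow the same loop structure but rescale the step per coordinate using running statistics of past gradients.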
Probability Theory:
* Discrete and Continuous Random Variables: Discrete random variables take on a finite or countable number of values, while continuous random variables can take on any value within a given range.
* Conditional Probability: This is the probability of an event occurring given that another event has already occurred.
* Joint Probability Distribution: This describes the probability of two or more events occurring simultaneously.
* Multivariate Distributions: These describe the joint behavior of several random variables at once; the multivariate Gaussian is the most common example.
* MAP Criterion and ML Criterion: These are criteria for estimating parameters from probabilistic models. ML (Maximum Likelihood) chooses the parameters that maximize the likelihood of the observed data, while MAP (Maximum A Posteriori) maximizes the posterior, i.e., the likelihood weighted by a prior over the parameters.
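The difference between the ML and MAP criteria shows up clearly when estimating a coin's bias. A small sketch (the counts and Beta prior parameters are made up for illustration):

```python
# Observed coin flips: 7 heads out of 10.
heads, n = 7, 10

# ML estimate: the theta maximizing the binomial likelihood
# is simply the observed frequency of heads.
theta_ml = heads / n  # 0.7

# MAP estimate with a Beta(a, b) prior on theta:
# the posterior is Beta(a + heads, b + tails), and its mode is
# (a + heads - 1) / (a + b + n - 2) for a, b > 1.
a, b = 2.0, 2.0  # mild prior pulling the estimate toward 0.5
theta_map = (a + heads - 1) / (a + b + n - 2)  # 8/12 = 0.666...
```

The prior shrinks the MAP estimate toward 0.5; with more data, the likelihood dominates and the two estimates converge.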