LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes
Learning Machines 101
by Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.
2y ago
This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in the space-time continuum that characterizes our physical world. Such a set is called an “environmental event”. The machine learning algorithm uses information about the frequency of environmental events to support learning. If we want to study statistical machine learning, then we must be able to discuss how to represent and compute the probability of an environmental event. It is essential that we have methods for communicating probability concepts to other researchers…
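The summary leaves the mechanics implicit. As an illustrative sketch (my example, not the episode's; the function names are assumptions), the probability of a continuum-valued environmental event such as X ∈ [−1, 1] can be estimated from the relative frequency of that event across many sampled outcomes:

```python
import random

def event_probability(sample, event, n=100_000, seed=0):
    """Estimate P(event) by the relative frequency of the event
    across n simulated outcomes drawn by `sample`."""
    rng = random.Random(seed)
    hits = sum(event(sample(rng)) for _ in range(n))
    return hits / n

# Outcome space: a continuum (the real line). Individual outcomes have
# probability zero, but events (sets of outcomes) do not.
sample = lambda rng: rng.gauss(0.0, 1.0)    # standard normal outcome
in_interval = lambda x: -1.0 <= x <= 1.0    # the event "X lands in [-1, 1]"

p = event_probability(sample, in_interval)
# true value is about 0.683 for a standard normal
```

This is exactly why probabilities are assigned to events rather than to single points of a continuum: each individual outcome has probability zero, yet the frequency of an event is well defined.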
LM101-085: Ch7: How to Guarantee Your Batch Learning Algorithm Converges
Learning Machines 101
by Richard M. Golden
3y ago
This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. In particular, we consider a broad class of unsupervised, supervised, and reinforcement machine learning algorithms that iteratively update their parameter vector by adding a perturbation based upon all of the training data. This process is repeated, making a perturbation of the parameter vector based upon all of the training data, until a parameter vector is generated which exhibits improved…
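As a loose sketch of the batch learning pattern being described (the quadratic objective and all names here are my illustrative assumptions, not the book's notation), each iteration perturbs the parameter vector using all of the training data and stops once the objective no longer improves:

```python
def batch_gradient_descent(xs, ys, w0=0.0, lr=0.1, tol=1e-8, max_iter=10_000):
    """Fit y ≈ w*x by minimizing the smooth objective
    f(w) = (1/n) * sum((w*x - y)^2) with full-batch updates."""
    n = len(xs)
    w = w0
    f = lambda w: sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / n
    prev = f(w)
    for _ in range(max_iter):
        # the perturbation is computed from ALL training examples (batch learning)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
        cur = f(w)
        if prev - cur < tol:   # stop once the objective stops improving
            break
        prev = cur
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # exact fit exists at w = 2
w = batch_gradient_descent(xs, ys)
```

The convergence guarantees discussed in the episode concern exactly this kind of loop: conditions on the step size and the objective under which the iterates approach a critical point.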
LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems
Learning Machines 101
by Richard M. Golden
3y ago
In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning”, which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when dynamical systems can be viewed as special types of optimization algorithms, their behavior can be analyzed even when they are highly nonlinear and high-dimensional. Learn more by visiting: www.learningmachines101.com and www.statisticalmachinelearning.com
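One standard device for this kind of analysis (a textbook technique, not necessarily the chapter's exact treatment) is to exhibit a function that never increases along the system's trajectories; for a gradient system, the objective itself plays that Lyapunov-function role, even when the dynamics are nonlinear:

```python
def iterate(grad_f, x0, step=0.02, n=300):
    """Trajectory of the discrete-time dynamical system
    x[t+1] = x[t] - step * grad_f(x[t])."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - step * grad_f(xs[-1]))
    return xs

# A nonlinear objective: f(x) = x^4 - 3x^2, with minima at x = ±sqrt(1.5)
f = lambda x: x ** 4 - 3 * x ** 2
grad_f = lambda x: 4 * x ** 3 - 6 * x

traj = iterate(grad_f, x0=2.0)
values = [f(x) for x in traj]
# With this small step size, f never increases along the trajectory,
# so f certifies that the system settles into a minimum.
```

The point of the Lyapunov-style argument is that one can certify where the trajectory ends up without solving the nonlinear dynamics in closed form.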
LM101-083: Ch5: How to Use Calculus to Design Learning Machines
Learning Machines 101
by Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.
3y ago
This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework”, which is now available! The book chapter shows how matrix calculus is very useful for the analysis and design of both linear and nonlinear learning machines, with many examples. We discuss how to use the matrix chain rule for deriving deep learning descent algorithms and how it is relevant to software implementations of deep learning algorithms. We also discuss how matrix Taylor series expansions are relevant to machine learning algorithm design and the analysis…
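As a hand-sized illustration (mine, not the book's) of how the chain rule yields the gradients that deep learning software computes, here is a scalar two-layer network; in the matrix case the same pattern applies with Jacobians in place of scalar derivatives:

```python
import math

def forward_and_grad(w1, w2, x, y):
    """Two-layer net yhat = w2 * tanh(w1 * x); loss = (yhat - y)^2.
    Gradients are derived by the chain rule, layer by layer (the scalar
    analogue of the matrix chain rule used in deep learning software)."""
    h = math.tanh(w1 * x)          # hidden activation
    yhat = w2 * h                  # network output
    loss = (yhat - y) ** 2

    dloss_dyhat = 2 * (yhat - y)               # d(loss)/d(yhat)
    dyhat_dw2 = h                              # d(yhat)/d(w2)
    dyhat_dh = w2                              # d(yhat)/d(h)
    dh_dw1 = (1 - h ** 2) * x                  # d(tanh(w1*x))/d(w1)

    grad_w2 = dloss_dyhat * dyhat_dw2          # chain rule, output layer
    grad_w1 = dloss_dyhat * dyhat_dh * dh_dw1  # chain rule through both layers
    return loss, grad_w1, grad_w2
```

A quick sanity check is to compare these chain-rule gradients against finite differences of the loss, which is exactly how automatic-differentiation implementations are typically validated.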
LM101-082: Ch4: How to Analyze and Design Linear Machines
Learning Machines 101
by Richard M. Golden
3y ago
The main focus of this particular episode covers the material in Chapter 4 of my forthcoming book titled “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning.” Many important and widely used machine learning algorithms may be interpreted as linear machines, and this chapter shows how to use linear algebra to analyze and design such machines. In addition, these same techniques are fundamentally important for the development of techniques for the analysis and design of nonlinear machines. This podcast provides a brief overview…
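For one concrete instance of designing a linear machine with linear algebra (an illustration I am adding, not the chapter's own example), a least-squares line fit solves the normal equations (XᵀX)w = Xᵀy, here written out in closed form for the 2×2 case:

```python
def fit_line(xs, ys):
    """Least-squares fit of y ≈ a*x + b, a minimal 'linear machine'.
    Solves the 2x2 normal equations (X^T X) w = X^T y in closed form."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # determinant of X^T X
    a = (n * sxy - sx * sy) / det    # slope
    b = (sy * sxx - sx * sxy) / det  # intercept
    return a, b

a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
# the data lie exactly on y = 2x + 1
```

For higher-dimensional inputs the same design principle holds; one simply solves the normal equations with a general linear solver instead of by hand.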
LM101-081: Ch3: How to Define Machine Learning (or at Least Try)
Learning Machines 101
by Richard M. Golden
3y ago
This particular podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework”, with expected publication date May 2020. Chapter 3 discusses how to formally define machine learning algorithms. Briefly, a learning machine is viewed as a dynamical system that is minimizing an objective function, and the knowledge structure of the learning machine is interpreted as a preference relation graph which is implicitly specified by the objective function. In addition, this week we include in our book review…
LM101-080: Ch2: How to Represent Knowledge using Set Theory
Learning Machines 101
by Richard M. Golden
3y ago
This particular podcast covers the material in Chapter 2 of my new book “Statistical Machine Learning: A unified framework”, with expected publication date May 2020. Chapter 2, titled “Set Theory for Concept Modeling”, discusses how to represent knowledge using set theory notation…
LM101-079: Ch1: How to View Learning as Risk Minimization
Learning Machines 101
by Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.
3y ago
This particular podcast covers the material in Chapter 1 of my new (unpublished) book “Statistical Machine Learning: A unified framework”. Chapter 1 shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framework. This is useful because it provides a framework not only for understanding existing algorithms but also for suggesting new algorithms for specific applications.
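The unifying idea can be sketched in a few lines (my sketch, under the assumption of a squared-error loss; not the book's code): every learner minimizes an empirical risk, the average of a per-example loss over the training data, and swapping the loss changes the learning paradigm while the framework stays fixed:

```python
def empirical_risk(loss, params, data):
    """Empirical risk: the average of a per-example loss over the
    training data. Different choices of `loss` yield supervised,
    unsupervised, or reinforcement-style learning objectives."""
    return sum(loss(params, example) for example in data) / len(data)

# Supervised instance: squared error for the model yhat = w * x
squared = lambda w, ex: (w * ex[0] - ex[1]) ** 2
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# Minimize by brute-force grid search, just to make the framework concrete
best_w = min((w / 100 for w in range(-500, 501)),
             key=lambda w: empirical_risk(squared, w, data))
```

Any minimization procedure could replace the grid search; the framework separates *what* is minimized (the risk) from *how* it is minimized (the algorithm).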
LM101-078: Ch0: How to Become a Machine Learning Expert
Learning Machines 101
by Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.
3y ago
This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, check out: www.learningmachines101.com
LM101-077: How to Choose the Best Model using BIC
Learning Machines 101
by Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.
3y ago
In this 77th episode of www.learningmachines101.com, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability model, while AIC is used to estimate out-of-sample prediction error. The probability of the training data given the model is called the “marginal likelihood”. Using the marginal likelihood, one can calculate the probability…
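The standard formulas make the contrast concrete (these are the usual textbook definitions; the example numbers are mine, not from the episode): BIC = k ln n − 2 ln L̂ penalizes parameters more heavily than AIC = 2k − 2 ln L̂ whenever ln n > 2, i.e. for more than about 7 observations:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L-hat).
    Lower is better; -BIC/2 approximates the log marginal likelihood."""
    return k * math.log(n) - 2 * log_likelihood

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*ln(L-hat).
    Lower is better; estimates out-of-sample prediction error."""
    return 2 * k - 2 * log_likelihood

# Two candidate models fit to n = 100 observations (illustrative numbers):
n = 100
simple = {"loglik": -120.0, "k": 2}    # fewer parameters, slightly worse fit
complex_ = {"loglik": -118.5, "k": 6}  # more parameters, slightly better fit

# BIC's heavier complexity penalty (k*ln(n) versus 2k) favors the simple model
assert bic(simple["loglik"], simple["k"], n) < bic(complex_["loglik"], complex_["k"], n)
```

Because the two criteria estimate different quantities, they can legitimately disagree about which model is "best", which is exactly the semantic point the episode emphasizes.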
