The Prose of Proteins - A Lesson in Taste and Vision through the Work of Brian Hie
by Machine Learning at Berkeley
4M ago
In research, there is something captivating about witnessing a scholar's purposeful stride. Instead of their papers stumbling into arXiv, their work carries a story. Over the past decades, we've been fortunate enough to see individuals pen their own cohesive message into a body of work. In modern machine learning, Lucas Beyer and crew unified the modeling language spoken by different modalities. Transformers aren't just for machine translation: they have a place in vision too. That unification got scientists speaking the same language. Frances Arnold's work on directed evolution gave more than novel catalys ..read more
Empowering the Next Generation: Machine Learning at Berkeley's High School Workshop Initiative
by Machine Learning at Berkeley
4M ago
In the fast-paced world of technology, the realm of machine learning (ML) can seem intimidating, and for students who lack coding experience, the journey into this niche field is especially daunting. This issue resonated with our team at Machine Learning @ Berkeley, so we created a new high school workshops division, GREP (Guided Resource and Education Program), and launched a program dedicated to creating greater opportunities for students of all backgrounds. At GREP, we're on a mission to make machine learning more accessible and inclusive, one workshop at a time. Our First (of ..read more
Measuring AI Freedom
by Machine Learning at Berkeley
4M ago
By Evan Ellis The debate over human freedom and free will has raged for all of recorded history. At its heart is the dispute between Determinism and Libertarianism: the notions that either only one course of events is possible, or that many are possible and the future is a product of our “will,” whatever that may be. How we answer this question affects our notions of moral responsibility and achievement: if the future is predetermined, how can we hold anyone accountable for their actions? We can draw many similarities between the human mind and computer “agents” in Reinforcement Learning (RL). A ..read more
MuZero: Checkmate For Software 1.0?
by Machine Learning at Berkeley
4M ago
By Ashwin Reddy Deep learning differs from mainstream software so starkly that Andrej Karpathy calls it Software 2.0. The name points to deep learning’s superiority in domains as complex as protein folding prediction. But I want to argue that deep learning, though it surpasses Software 1.0, still relies on classical techniques. MuZero, an algorithm developed by Google DeepMind, serves as an excellent example of Software 2.0’s advancement. Consider its applications. MuZero’s predecessor AlphaGo defeated champion Lee Sedol in a five-game Go match (Silver et al., 2016). YouTube found promising resul ..read more
How Maximum Entropy makes Reinforcement Learning Robust
by Machine Learning at Berkeley
4M ago
By Ashwin Reddy This post explains what Shannon entropy is and why adding it to the classic reinforcement learning (RL) formulation creates robust agents, ones that tolerate unexpected, even adversarial, changes. I’m assuming you’re familiar with general ideas in reinforcement learning so we can focus on intuition for why agents with high entropy are robust, in particular to changes in reward or changes in dynamics. A Brief Review of RL Recall that reinforcement learning aims to solve decision-making problems in games where a reward function r(s, a) assigns numerical points to action a in sta ..read more
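For reference, the standard maximum-entropy objective this kind of post builds toward augments expected return with the policy's entropy (the temperature weight \alpha is my notation; the post itself may write it differently):

J(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[\sum_t r(s_t, a_t) + \alpha\,\mathcal{H}\big(\pi(\cdot \mid s_t)\big)\Big], \qquad \mathcal{H}\big(\pi(\cdot \mid s)\big) = -\mathbb{E}_{a \sim \pi(\cdot \mid s)}\big[\log \pi(a \mid s)\big].

Maximizing entropy alongside reward keeps the policy deliberately stochastic, which is what later buys robustness to perturbed rewards or dynamics.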
Alien Dreams: An Emerging Art Scene
by Machine Learning at Berkeley
4M ago
By Charlie Snell In recent months there has been a bit of an explosion in the AI-generated art scene. Ever since OpenAI released the weights and code for their CLIP model, various hackers, artists, researchers, and deep learning enthusiasts have figured out how to use CLIP as an effective “natural language steering wheel” for various generative models, allowing artists to create all sorts of interesting visual art merely by inputting some text – a caption, a poem, a lyric, a word – to one of these models. For instance, inputting “a cityscape at night” produces this cool, abstract-looking ..read more
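The basic recipe behind most of these pieces is to treat CLIP as a frozen, differentiable critic: optimize a generator's latent code so the generated image's CLIP embedding lines up with the prompt's. A minimal sketch, assuming OpenAI's clip package (pip install git+https://github.com/openai/CLIP.git); the tiny linear generator is a hypothetical stand-in for the BigGAN/VQGAN/SIREN generators artists actually steer:

import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep fp32 so gradients flow cleanly

# Hypothetical stand-in generator: any differentiable latent -> image map works here.
generator = torch.nn.Sequential(
    torch.nn.Linear(128, 3 * 224 * 224),
    torch.nn.Sigmoid(),  # pixels in [0, 1]; a real run would also apply CLIP's normalization
).to(device)

with torch.no_grad():
    text_emb = model.encode_text(clip.tokenize(["a cityscape at night"]).to(device))
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

z = torch.randn(1, 128, device=device, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    image = generator(z).view(1, 3, 224, 224)
    img_emb = model.encode_image(image)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    loss = -(img_emb * text_emb).sum()  # maximize cosine similarity to the prompt
    opt.zero_grad()
    loss.backward()
    opt.step()

Swapping in a stronger generator (and adding image augmentations before scoring) is essentially the difference between this toy and the art the post shows.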
Imitation Learning: How well does it perform?
by Machine Learning at Berkeley
4M ago
By Surya Vengadesan Imitation learning (IL) broadly encompasses any algorithm (e.g., behavioral cloning, or BC) that attempts to learn a policy for an MDP given expert demonstrations. In this blog post, we will discuss how one can divide this task into three subclasses of IL algorithms with some mathematical rigor. In particular, we will be describing ideas presented in this recent paper, which provides a taxonomic framework for imitation learning algorithms. A key idea behind this paper is to define a metric that generalizes the goal of IL algorithms at large. Simply put, find a policy that minimizes the ..read more
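One common way to formalize that goal (the notation below is mine, not necessarily the paper's) is divergence minimization between occupancy measures:

\pi^{*} = \arg\min_{\pi} \; D\big(\rho_{\pi}, \rho_{\pi_E}\big),

where \rho_\pi is the state-action occupancy measure induced by policy \pi, \pi_E is the expert policy, and D is some divergence or distance (e.g., a KL divergence or an integral probability metric). Different choices of D recover different families of IL algorithms, which is the kind of distinction a taxonomy like this can draw with rigor.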
Vokenization: Multimodal Learning for Vision and Language
by Machine Learning at Berkeley
4M ago
By Aryia Dattamajumdar Computer Vision meets Natural Language Processing Vokenization is the bridge between visually supervised language models and their related images. In this blog post we explore the vokenization procedure, the inner workings of the model, and its classification task in two parts: The first section of this post is beginner friendly, giving an overview of vokenization, NLP, and its ties to CV. The second section, starting from the procedure, dives deep into the details of the model using weak supervision (more of a summary of the paper). Introduction: Human Learning How do hum ..read more
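At its core, the retrieval step pairs each contextual token embedding with its nearest image embedding, the token's "voken." A toy sketch of just that step, with random features standing in for the learned token-image matching model the paper actually trains:

import torch

def vokenize(token_embs: torch.Tensor, image_embs: torch.Tensor) -> torch.Tensor:
    # token_embs: (T, d) contextual token features; image_embs: (N, d) image bank.
    t = token_embs / token_embs.norm(dim=-1, keepdim=True)
    v = image_embs / image_embs.norm(dim=-1, keepdim=True)
    sims = t @ v.T                 # (T, N) cosine similarities
    return sims.argmax(dim=-1)     # index of the nearest image ("voken") per token

tokens = torch.randn(12, 512)      # toy stand-in for one sentence's BERT features
images = torch.randn(10_000, 512)  # toy embedding bank for a captioned-image corpus
voken_ids = vokenize(tokens, images)  # (12,) one voken id per token

The retrieved voken ids then serve as extra, visually grounded supervision targets when pretraining the language model.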
How is it so good? (DALL-E Explained Pt. 2)
by Machine Learning at Berkeley
4M ago
By Charlie Snell DALL-E consists of two main components: a discrete autoencoder that learns to accurately represent images in a compressed latent space, and a transformer that learns the correlations between language and this discrete image representation. In part one of this series, we focused on understanding the autoencoder. Specifically, we looked at a particularly powerful technique for this called VQ-VAE. According to the now published paper though, DALL-E uses a slightly different method to learn its discrete representations; they call it dVAE. While the exact techniques are a bit dif ..read more
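As a refresher on the VQ-VAE half of that story (part one's topic), the quantization step snaps each encoder output to its nearest codebook vector and uses a straight-through estimator so gradients can pass the non-differentiable lookup; dVAE instead relaxes the lookup with a gumbel-softmax. A minimal sketch (dimensions are toy values, except the 8192-entry codebook, which matches DALL-E's image vocabulary size):

import torch

def vector_quantize(z_e: torch.Tensor, codebook: torch.Tensor):
    # z_e: (B, d) encoder outputs; codebook: (K, d) learned code vectors.
    dists = torch.cdist(z_e, codebook)  # (B, K) pairwise L2 distances
    codes = dists.argmin(dim=-1)        # nearest code index per latent
    z_q = codebook[codes]               # quantized latents
    # Straight-through estimator: forward pass uses z_q, backward copies grads to z_e.
    z_q = z_e + (z_q - z_e).detach()
    return z_q, codes

codebook = torch.randn(8192, 64)        # 8192 codes, as in DALL-E; 64 dims is a toy value
z_e = torch.randn(16, 64, requires_grad=True)
z_q, codes = vector_quantize(z_e, codebook)

The codes tensor is the sequence of discrete image tokens the transformer then models jointly with the text tokens.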
Teaching the Brain to Discover Itself
by Machine Learning at Berkeley
4M ago
By Ruchir Baronia In this blog post, we take a deep dive into the intersection between Machine Learning (ML) and Neuroscience. In doing so, we will familiarize ourselves with a multivariate approach to classifying neuroimaging data to identify behavior, predict neurodegenerative diseases, and more. What follows is a discussion of the applications, practicality, and ethical basis of neuroimaging analysis through Machine Learning. Introduction The human brain consists of 86 billion neurons [1], all collaborating to form our thoughts, emotions, and memories. It can learn languages, design machines, an ..read more
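The "multivariate approach" in question is typically multivariate pattern analysis: treat each scan's voxels as one feature vector and train a cross-validated classifier to decode the condition. A minimal scikit-learn sketch on synthetic data (a real pipeline would extract fMRI features with a tool like nilearn instead):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for neuroimaging data: 200 "scans" x 5000 voxel features,
# labeled by a binary condition (e.g., task A vs. task B).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))
y = rng.integers(0, 2, size=200)
X[y == 1, :50] += 0.5  # plant a weak multivariate signal in a few "voxels"

clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01))
scores = cross_val_score(clf, X, y, cv=5)  # decoding accuracy per fold
print(scores.mean())

Above-chance cross-validated accuracy is the evidence that the voxel pattern carries information about the behavior or disease label.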
