Continual learning for Recurrent Neural Networks
Pillow Lab Blog · by aditijha350381956 · 1y ago
A few weeks ago in lab meeting we discussed ‘Organizing recurrent network dynamics by task-computation to enable continual learning’ by Duncker, Driscoll, et al. (2020). This paper seeks to understand how a neural population maintains the flexibility to learn …
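
The excerpt is truncated above, but the paper's central move, as we discussed it, is to protect previously learned computations by projecting new weight updates off of the directions an earlier task relies on. Below is a loose NumPy sketch of that kind of gradient projection (my simplified illustration, not the authors' code; the construction of the protected subspace here is hypothetical):

import numpy as np

def project_out(grad, basis):
    # Remove from `grad` its components along the (orthonormal) columns of
    # `basis`, so the update cannot interfere with protected directions.
    return grad - basis @ (basis.T @ grad)

rng = np.random.default_rng(0)
d = 10
# Hypothetical subspace used by an earlier task (orthonormalized via QR).
protected = np.linalg.qr(rng.normal(size=(d, 3)))[0]
raw_grad = rng.normal(size=d)
safe_grad = project_out(raw_grad, protected)
print(np.allclose(protected.T @ safe_grad, 0.0))  # True: protected directions untouched
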
Neural Network Poisson Models for Behavioural and Neural Spike Train Data
Pillow Lab Blog · by anuththararupasinghe · 1y ago
In this week’s lab meeting, we discussed the paper Neural Network Poisson Models for Behavioural and Neural Spike Train Data, presented by Khajehnejad, Habibollahi, Nock, Arabzadeh, Dayan, and Dezfouli at ICML 2022. This work aimed to introduce …
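
The excerpt cuts off, but the common ingredient in models of this family is a Poisson likelihood whose rate is produced by a network. As a rough sketch (my toy setup, not the paper's architecture), the negative log-likelihood over time bins looks like:

import numpy as np
from scipy.special import gammaln

def poisson_nll(counts, log_rate):
    # Poisson NLL per bin: lambda - y * log(lambda) + log(y!),
    # parameterized by log-rates for numerical stability.
    return np.sum(np.exp(log_rate) - counts * log_rate + gammaln(counts + 1))

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 5))        # 100 time bins, 5 stimulus covariates
w = 0.1 * rng.normal(size=5)
log_rate = np.tanh(x @ w)            # stand-in for a network's log-rate output
y = rng.poisson(np.exp(log_rate))    # simulated spike counts
print(poisson_nll(y, log_rate))
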
Estimating the similarity between neural tuning curves
Pillow Lab Blog · by deanpospisil · 1y ago
TL;DR: we suggest that an existing metric, r2ER, should be preferred over the representational drift index (RDI; Marks and Goard, 2021) as an estimator of single-neuron representational drift, because it isn’t clear what intermediate values of RDI mean …
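
A toy illustration of the underlying concern (this is not the r2ER estimator itself, just a demonstration that naive curve-to-curve r² conflates trial noise with drift): with identical expected tuning on two days, naive r² still drops well below 1 at low trial counts, so intermediate values are hard to interpret without a noise correction.

import numpy as np

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, np.pi, 20))   # identical expected tuning both days (zero drift)
for n_trials in (2, 10, 100):
    day1 = truth + rng.normal(0, 1, (n_trials, 20)).mean(axis=0)
    day2 = truth + rng.normal(0, 1, (n_trials, 20)).mean(axis=0)
    r2 = np.corrcoef(day1, day2)[0, 1] ** 2
    print(n_trials, round(r2, 3))   # naive r^2 << 1 at low trial counts despite zero drift
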
Estimating learnability
Pillow Lab Blog · by deanpospisil · 1y ago
TL;DR: you can accurately estimate the hypothetical performance of multivariate linear regression and classification models trained on infinite data with surprisingly little data, even when the number of samples (n) is less than the number of features or dimensions …
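
To make the setup concrete (a toy example, not the paper's estimator): with n < d, a naive fit badly understates the infinite-data performance that is the actual quantity of interest, and that the post's method aims to estimate directly from the small sample.

import numpy as np

rng = np.random.default_rng(3)
d, noise = 50, 1.0
beta = rng.normal(size=d) / np.sqrt(d)
# With infinite data, the best linear predictor achieves this R^2:
r2_inf = beta @ beta / (beta @ beta + noise**2)

def test_r2(n):
    X = rng.normal(size=(n, d))
    y = X @ beta + rng.normal(0, noise, n)
    coef = np.linalg.lstsq(X, y, rcond=None)[0]   # min-norm fit when n < d
    Xt = rng.normal(size=(20_000, d))
    yt = Xt @ beta + rng.normal(0, noise, 20_000)
    return 1 - np.var(yt - Xt @ coef) / np.var(yt)

print("asymptotic R^2:", round(r2_inf, 3))
print("fit with n=25 (< d):", round(test_r2(25), 3))      # far below the asymptote
print("fit with n=100000:", round(test_r2(100_000), 3))   # approaches it
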
Monte Carlo gradient estimators
Pillow Lab Blog · by Yoel Sanchez Araujo · 1y ago
A couple of weeks ago during lab meeting we discussed parts of Monte Carlo Gradient Estimation in Machine Learning. This is a lovely survey of the topic, and unfortunately we only covered a small part of it. The things we did …
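
To give the flavor of the two main estimator families the survey covers, here is a NumPy comparison of the score-function (REINFORCE) and pathwise (reparameterization) estimators of the gradient of E[x²] with respect to μ, for x ~ N(μ, σ²); the true value is 2μ:

import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n = 1.5, 1.0, 200_000

x = rng.normal(mu, sigma, n)
score = (x - mu) / sigma**2               # d/dmu of log N(x; mu, sigma)
grad_score = np.mean(x**2 * score)        # score-function (REINFORCE) estimator

eps = rng.normal(size=n)                  # pathwise: reparameterize x = mu + sigma*eps
grad_path = np.mean(2 * (mu + sigma * eps))

print(grad_score, grad_path, "true:", 2 * mu)  # pathwise has far lower variance
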
Tutorial on Normalizing Flows
Pillow Lab Blog · by zashwood · 1y ago
Today in lab meeting, we continued our discussion of deep unsupervised learning with a tutorial on Normalizing Flows. Similar to VAEs, which we have discussed previously, flow-based models are used to learn a generative distribution, p(x), when this is arbitrarily …
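
The workhorse identity is the change-of-variables formula, log p(x) = log p_z(f(x)) + log |det ∂f/∂x|; stacking invertible layers just sums their log-determinants. A one-dimensional affine-flow sketch:

import numpy as np

def flow_logpdf(x, log_scale, shift):
    # 1-D affine flow z = exp(log_scale) * (x + shift) with N(0, 1) base density:
    # log p(x) = log N(z; 0, 1) + log|dz/dx|, and log|dz/dx| = log_scale here.
    z = np.exp(log_scale) * (x + shift)
    return -0.5 * (z**2 + np.log(2 * np.pi)) + log_scale

# Sanity check: log_scale = shift = 0 recovers the standard normal exactly.
xs = np.linspace(-3, 3, 7)
print(np.allclose(flow_logpdf(xs, 0.0, 0.0), -0.5 * (xs**2 + np.log(2 * np.pi))))
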
Step-by-step procedure for choosing a learning rate (and other optimization hyperparameters)
Pillow Lab Blog · by benjocowley · 1y ago
I’ve been using deep neural networks (DNNs) in my research. DNNs, as is often preached, are powerful models, capable of approximating almost any function. However, after the sermon is over, someone starting to train a DNN in the wild can …
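
The full procedure is cut off above; a common first pass (my hedged sketch, with a placeholder train_and_eval standing in for whatever short training run you can afford) is a coarse-to-fine sweep over log-spaced learning rates:

import numpy as np

def train_and_eval(lr):
    # Placeholder for a short training run returning validation loss;
    # this fake loss just pretends 1e-3 is optimal.
    return (np.log10(lr) + 3) ** 2 + 0.1

coarse = 10.0 ** np.arange(-6.0, 1.0)             # 1e-6 ... 1e0
best = coarse[int(np.argmin([train_and_eval(lr) for lr in coarse]))]
fine = best * 10.0 ** np.linspace(-0.5, 0.5, 5)   # refine around the winner
best = min(fine, key=train_and_eval)
print(f"chosen learning rate: {best:.1e}")
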
Attention is all you need. (aka the Transformer network)
Pillow Lab Blog · by benjocowley · 1y ago
No matter how we frame it, in the end, studying the brain is equivalent to trying to predict one sequence from another sequence. We want to predict complicated movements from neural activity. We want to predict neural activity from time-varying …
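
The core operation of the paper is scaled dot-product attention, softmax(QKᵀ/√d_k)V, which re-weights every position of one sequence by its similarity to every other. A self-contained NumPy version:

import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(5)
T, d = 6, 8                                   # sequence length, model dimension
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(attention(X @ Wq, X @ Wk, X @ Wv).shape)   # (6, 8): one output per position
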
Sutton & Barto Mini-Bootcamp
Pillow Lab Blog · by nicholasroy42 · 1y ago
This week in lab meeting, I presented a mini-bootcamp covering Richard Sutton & Andrew Barto’s Reinforcement Learning: An Introduction, Part I: Tabular Solution Methods (Chapters 2–8). My attempt to get through 170 pages of material in 75 minutes can be found below …
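
As a taste of the tabular methods in those chapters, here is one-step Q-learning (Chapter 6) on a toy chain environment (the environment is my own invention, not from the book):

import numpy as np

# One-step Q-learning on a 5-state chain: actions 0/1 move left/right,
# reward 1 on reaching the rightmost (terminal) state.
n_states, n_actions = 5, 2
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(6)

for _ in range(2000):
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = float(s2 == n_states - 1)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # TD(0) update
        s = s2

print(np.argmax(Q, axis=1)[:-1])   # greedy policy in non-terminal states: all "right"
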
Lapses in perceptual judgments reflect exploration
Pillow Lab Blog · by zashwood · 1y ago
In lab meeting this week, we discussed Lapses in perceptual judgments reflect exploration by Pisupati*, Chartarifsky-Lynn*, Khanal and Churchland. This paper proposes that, rather than corresponding to inattention (Wichmann and Hill, 2001), motor error, or ε-greedy exploration, as has previously …
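
For context, the standard model writes the psychometric function as ψ(x) = γ + (1 − γ − λ)F(x), where the lapse rates γ and λ set the asymptotes; the paper's proposal, as we discussed it, is to reinterpret those lapses as value-guided exploratory choices rather than a fixed inattention rate. A small sketch of the standard form:

import numpy as np

def psychometric(x, gamma, lam, slope=1.0, bias=0.0):
    # psi(x) = gamma + (1 - gamma - lam) * F(x), with a logistic F;
    # gamma and lam set the lower/upper asymptotes (the "lapse" rates).
    F = 1.0 / (1.0 + np.exp(-slope * (x - bias)))
    return gamma + (1.0 - gamma - lam) * F

x = np.linspace(-5, 5, 5)
print(psychometric(x, gamma=0.1, lam=0.1))   # asymptotes near 0.1 and 0.9, not 0 and 1
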
