What’s wrong with modeling?
Computational Biology Blog
by Gabriele Scheler
1y ago
A typical task in theoretical neuroscience is “modeling”, for instance, building and analysing network models of cerebellar cortex that incorporate the diversity of cell types and synaptic connections observed in this structure. The goal is to better understand the functions of these diverse cell types and connections. Approaches from statistical physics, nonlinear dynamics and machine learning can be used, and models should be “constrained” by electrophysiological, transcriptomic and connectomic data. This all sounds very well and quite innocuous. It’s “state of the art”. So what is wrong…
Language in the brain
1y ago
We need quite different functionality than statistics to model language. We may need this functionality in other areas of intelligence as well, but with language it is obvious. Or, for the DL community: we can model anything with statistics, of course, it will just not be a very good model. What follows is that the synaptic plasticity that produces statistical learning does not allow us to build a human language model. Weight adjustment of connections in a graph is simply not sufficient, under any possible model of language, to capture language competence. This is where it becomes interesting. …
The next paradigm: AI, Neural Networks, the Symbolic Brain
1y ago
I don’t think neural networks are still in their infancy; I think they are moribund. Some say they have hit a roadblock. AI used to be based on rules, symbols and logic. Then probability and statistics came along, and neural networks were created as an efficient statistical paradigm. But unfortunately, the old AI (GOFAI) was discontinued; it was replaced by statistics. Since then, ever more powerful computers and ever more data have come along, so statistics, neural networks (NN) and machine learning seemed to create value, even though they were based on simple concepts from the late eighties, early…
Learning in the Brain: Difference learning vs. Associative learning
2y ago
The feedforward/feedback learning and interaction in the visual system has been analysed as a case of “predictive coding”, the “free energy principle” or “Bayesian perception”. The general principle is very simple, so I will call it “difference learning”. I believe that this is directly comparable (biology doesn’t invent, it re-invents) to what is happening at the cell membrane between external (membrane) and internal (signaling) parameters. It is about difference modulation: an existing or quiet state, and then new signaling (at the membrane) or by perception (in the case of vision). …
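The core update behind “difference learning” can be sketched in a few lines. This is an illustrative toy, not the author’s actual model: a single scalar prediction (standing in for an internal parameter or expected state) is nudged by the difference between the incoming signal and the current expectation, so the error shrinks as the internal state adapts.

```python
# Toy sketch of "difference learning": an internal prediction is updated
# from the mismatch (error) between incoming signal and expectation.
# The function name and learning rate are illustrative assumptions.

def difference_learning(signal, prediction=0.0, rate=0.1):
    """Track a signal by repeatedly moving the prediction toward it."""
    errors = []
    for s in signal:
        error = s - prediction        # mismatch: new input vs. existing state
        prediction += rate * error    # adjust internal state to reduce it
        errors.append(error)
    return prediction, errors

# A constant input drives the error toward zero as the prediction adapts.
pred, errs = difference_learning([1.0] * 50)
```

The point of the sketch is only the shape of the computation: what is propagated and learned from is the difference between states, not the raw signal itself.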
Functional Principles of Neural Plasticity
2y ago
DOI: 10.13140/RG.2.2.20967.37282   It is really difficult to conceptualize things in a novel way when one has been conditioned for decades to believe in the synaptic plasticity theory of memory. I offer a new type of theory, first outlined in the vision statement linked above, which is called a horizontal-vertical integration theory. There could be several such theories. All such theories would agree that each neuron has internal and external parameters, and that only external parameters influence its horizontal interactions with other neurons (mostly by electrophysiology). The exchange of…
Two parts of the brain
4y ago
Here is an image from the 10th Art of Neuroscience contest. It is striking in that it gives a view of the brain in which there are two major parts: the cerebrum and the cerebellum. Of course there are a lot of interesting structures hidden within the cerebrum, but still: is this a von Neumann architecture, memory and processor? No; we already know that memory is a ubiquitous phenomenon and not localized to any brain structure (notwithstanding the function of the hippocampus). How about a CPU and a GPU? Closer, since the cerebellum is (a) structurally remarkably homogeneous, suitable to…
Cocaine Dependency and restricted learning
5y ago
Substantial work (NasifFJetal2005a, NasifFJetal2005b, DongYetal2005, HuXT2004, Marinellietal2006) has shown that repeated cocaine administration changes the intrinsic excitability of prefrontal cortical (PFC) neurons (in rats) by altering the expression of ion channels. It downregulates voltage-gated K+ channels and increases membrane excitability in principal (excitatory) PFC neurons. An important consequence of this result is the following: by restricting the expression levels of major ion channels, the capacity of the neuron to undergo intrinsic plasticity (IP) is limited, and therefore its…
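The direction of the excitability effect can be illustrated with a toy leaky integrate-and-fire sketch. All parameter values here are invented for illustration, not fitted to the PFC data cited above: lowering a K+-like leak conductance raises the effective input resistance, so the same input current reaches threshold more often.

```python
# Toy integrate-and-fire sketch: downregulating a K+ (leak) conductance
# increases firing for the same input current. Parameters are illustrative.

def count_spikes(g_leak, i_input=1.5, v_rest=0.0, v_thresh=1.0,
                 dt=0.1, steps=2000):
    v, spikes = v_rest, 0
    for _ in range(steps):
        v += (-g_leak * (v - v_rest) + i_input) * dt  # leak vs. drive
        if v >= v_thresh:
            spikes += 1
            v = v_rest                                # reset after spike
    return spikes

control = count_spikes(g_leak=1.0)   # normal K+ channel expression
cocaine = count_spikes(g_leak=0.5)   # downregulated K+ channels
```

With the reduced leak, the membrane charges faster between resets, so the second condition fires more spikes over the same interval.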
Why a large cortex?
5y ago
If we compare a small mouse cortex with a large human cortex, the connectivity per neuron is approximately the same (about 10^4 synapses per neuron; SchuezPalm1989). So why did humans add so many neurons, and why did the connectivity remain constant? For the latter question we may conjecture that a maximal size is already reached in the mouse. Our superior human cognitive skills thus rest on the increased number of neurons in cortex, which means the number of modules (cortical microcolumns) went up, not the synaptic connectivity as such…
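The arithmetic behind this scaling argument can be made explicit. The neuron counts below are rough literature values supplied here for illustration; only the ~10^4 synapses-per-neuron figure comes from the post (SchuezPalm1989).

```python
# Back-of-the-envelope check: total synapse count scales with neuron
# (module) count, while per-neuron connectivity stays fixed.
# Neuron counts are approximate literature values (assumptions).

SYNAPSES_PER_NEURON = 1e4    # roughly constant across mouse and human

mouse_neurons = 1.4e7        # ~14 million cortical neurons (approx.)
human_neurons = 1.6e10       # ~16 billion cortical neurons (approx.)

mouse_synapses = mouse_neurons * SYNAPSES_PER_NEURON
human_synapses = human_neurons * SYNAPSES_PER_NEURON

# The ~1000x gap comes entirely from neuron count, not connectivity.
ratio = human_neurons / mouse_neurons
```

Under these rough numbers, the human cortex has about three orders of magnitude more neurons, and hence more microcolumn-scale modules, at unchanged per-neuron wiring.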
Soft coded Synapses
5y ago
A new preprint by Filipovicetal2009* shows that striatal projection neurons (MSNs) receive different amounts of input, depending on whether they are D2-modulated and part of the indirect pathway, or D1-modulated and part of the direct pathway. In particular, membrane fluctuations are higher in the D1-modulated neurons (mostly at higher frequencies): they receive both more inhibitory and more excitatory input. This also means that they are activated faster. The open question is: what drives the difference in input? Do they have stronger synapses or more synapses? If the distribution of synaptic…
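One way to see how “stronger synapses” and “more synapses” could in principle be distinguished: for independent Poisson-like inputs, the mean drive scales with N·w while the fluctuation variance scales with N·w², so two configurations with the same mean input produce different fluctuation sizes. A small simulation sketch, with illustrative numbers that are not fitted to MSN data:

```python
# Sketch: matched mean input, different fluctuations. With independent
# random inputs, mean ~ N*rate*w but variance ~ N*rate*w^2, so fewer,
# stronger synapses fluctuate more. All parameters are illustrative.
import random

def simulated_input(n_syn, weight, rate=0.1, steps=5000, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(steps):
        # each synapse independently delivers `weight` with prob. `rate`
        total = sum(weight for _ in range(n_syn) if rng.random() < rate)
        samples.append(total)
    mean = sum(samples) / steps
    var = sum((s - mean) ** 2 for s in samples) / steps
    return mean, var

# Same mean drive: 200 weak synapses vs. 50 synapses of 4x the weight.
mean_many, var_many = simulated_input(n_syn=200, weight=1.0)
mean_strong, var_strong = simulated_input(n_syn=50, weight=4.0)
```

On this toy account, larger membrane fluctuations at matched mean drive would point toward fewer-but-stronger synapses; the preprint’s actual analysis may of course differ.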
Antagonistic regulation for cellular intelligence
6y ago
Cellular intelligence refers to information processing in single cells, i.e. genetic regulation, protein signaling and metabolic processing, all tightly integrated with each other. The goal is to uncover general ‘rules of life’ with respect to, e.g., the transmission of information, homeostatic and multistable regulation, and learning and memory (habituation, sensitization, etc.). These principles extend from unicellular organisms like bacteria to specialized cells that are parts of a multicellular organism. A prominent example is the ubiquitous role of feedback cycles in cellular information processing. These…
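A minimal feedback cycle of the kind mentioned can be sketched as a self-repressing production process: antagonism between production and its own product gives a homeostatic set point that is reached from any starting level. All rate constants below are illustrative assumptions, not measurements.

```python
# Toy negative-feedback (antagonistic) regulation: production of X is
# repressed by X itself, degradation opposes production, and the system
# settles to the same set point from any initial condition.

def simulate(x0, k_prod=1.0, k_deg=0.1, K=1.0, dt=0.1, steps=1000):
    x = x0
    for _ in range(steps):
        production = k_prod / (1.0 + x / K)   # repressed by x (antagonism)
        x += (production - k_deg * x) * dt    # net change per time step
    return x

# Different starting points converge to the same homeostatic level.
low = simulate(x0=0.0)
high = simulate(x0=20.0)
```

The convergence to a shared steady state, regardless of perturbation, is the minimal signature of the homeostatic regulation the paragraph describes.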