Hole 2021 Paper (Monty)
Sebas, 04 June 2025
The paper argues that while much recent research has focused on developing narrow AI, the path to general artificial intelligence needs another direction. It also states that this recent research focuses on mathematical and logical approaches rather than biologically constrained approaches. That part actually confuses me, because how are mathematical and logical approaches not biologically constrained? Perhaps they are referring to variability not found in hard-defined mathematical models? I will ask Dr. Sherif about that.
The other parts are actually pretty straightforward: the paper talks about the pitfalls of narrow AI (brittle, rigid, greedy). General AI would need less training data, would manage surprises well, and would be able to continually adapt to changes after training.
The paper argues that to achieve general AI we need to view the brain as a hierarchical computing system, with different levels on top of one another. It makes sense because the brain has hierarchy to it, and different levels have different functions.
The surprise for me (which might be due to my lack of experience in neuroscience) was when they stated that the neocortex generates body movements to change sensory inputs to learn about the world (like Monty!). It was also cool to read that HTM builds models not only of objects but also of conceptual ideas. That sentence lit a bulb in my head about the possibilities of HTM. Once I get to the 2023 Sherif paper, I think I will see HTM applied to conceptual ideas.
They talk about how all these past years of computational neuroscience research have not produced a cohesive framework for integrating experimental neuroscience findings, and how the Thousand Brains Theory may be the solution.
One question I have is: how well do HTM and Monty map to neural dynamics?
They argue that today’s AI systems are missing biological aspects required for general AI; however, it still isn’t clear to me how Monty or HTM is necessarily a biologically constrained model. The biological aspects the paper argues are missing and required for general AI are:
- Realistic Neuron Models: Neuron models in AI systems based on the neocortex need more brain-like connections to other neurons.
- Sparse Data Representations: Deep learning systems use dense vectors with high numbers of non-zero elements, which is not characteristic of how data is thought to be represented in the neocortex (see the sketch after this list).
- Reference Frames: They state that the neocortex implements reference frames to store knowledge, and maps movements onto locations in these frames. This point was interesting to me; I didn’t know that the neocortex could implement reference frames, so I will have to look into it.
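To make the sparsity point concrete for myself, here is a toy sketch I wrote with numpy (my own example, not code from the paper or from any HTM library). It contrasts the sparsity of a binary representation with a dense vector and uses overlap as the similarity measure, which is my understanding of how SDRs are compared:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2048   # total number of bits in the representation
w = 40     # number of active bits, about 2% sparsity

def random_sdr():
    """Return a binary vector with exactly w of n bits active (a toy SDR)."""
    sdr = np.zeros(n, dtype=np.uint8)
    sdr[rng.choice(n, size=w, replace=False)] = 1
    return sdr

a = random_sdr()
b = random_sdr()

# Overlap (count of shared active bits) is the similarity measure: two
# unrelated random SDRs share almost no bits, so accidental matches are rare.
print("sparsity of a:", a.mean())  # ~0.02, vs ~1.0 for a dense vector
print("overlap of unrelated SDRs:", np.count_nonzero(a & b))

# A noisy copy of a (5 of its 40 active bits moved) still overlaps heavily.
noisy = a.copy()
noisy[rng.choice(np.flatnonzero(noisy == 1), size=5, replace=False)] = 0
noisy[rng.choice(np.flatnonzero(noisy == 0), size=5, replace=False)] = 1
print("overlap with noisy copy:", np.count_nonzero(a & noisy))
```

If I understand it right, the high dimensionality plus the near-zero overlap between random patterns is what makes these representations robust to noise.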
The paper also argues that three fundamental properties of the neocortex are required for general AI:
- Continuous Online Learning: learning in the neocortex is unsupervised and occurs continuously in real time.
- Sensorimotor Integration: Body movements are generated to change sensory inputs and to build models that make predictions and detect anomalies (see the loop sketch after this list).
- Single General Purpose Algorithm: all neocortical regions are fundamentally the same and contain a repeating biological circuit (cortical column) that forms the common cortical algorithm.
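Since the reference-frames point above and these properties all come together in the sensorimotor loop, I wrote a toy version to check my understanding (purely my own sketch in plain Python, not Monty’s actual API; the locations and features are made up). A sensor moves over an object, stores the feature sensed at each location in an object-centric reference frame, and flags an anomaly when a later observation disagrees with the stored model:

```python
# Toy sensorimotor loop: my own sketch of the idea, not Monty's real API.
# The "object" is ground truth mapping locations -> features; the learned
# model is a reference frame built up by moving a sensor and observing.

object_on_table = {                    # ground truth the sensor explores
    (0, 0): "edge", (1, 0): "flat",
    (0, 1): "flat", (1, 1): "corner",
}

model = {}                             # learned reference frame: loc -> feature
location = (0, 0)                      # current sensor location on the object

movements = [(1, 0), (-1, 1), (1, 0), (-1, -1), (1, 1)]

for move in movements:
    # Sensorimotor integration: the movement itself changes the next input.
    location = (location[0] + move[0], location[1] + move[1])
    feature = object_on_table[location]

    if location in model:
        # Prediction: the model expects the stored feature at this location.
        if model[location] != feature:
            print(f"anomaly at {location}: expected {model[location]}, got {feature}")
        else:
            print(f"{location}: prediction confirmed ({feature})")
    else:
        # Continuous online learning: unsupervised, one observation at a time.
        model[location] = feature
        print(f"{location}: learned new feature ({feature})")
```

Because the model is keyed by locations relative to the object rather than to the world, the same learned model should apply wherever the object sits, which I think is the point of reference frames.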
I will stop reading here for today and start looking at the tutorials which Mohamed suggested I run through.
I’m gonna send an email to Santiago and Mostafa about potentially meeting with them to gain additional support.
Tags: Brown, Computational-Neuroscience, Models, Monty