By Charles Simon, FutureAI founder.
The coming of Artificial General Intelligence (AGI) will require new algorithms. The backpropagation algorithm and its many cousins have made tremendous inroads and will continue to be extremely powerful. But we’re beginning to see that, like the Expert Systems of twenty years ago, backpropagation cannot program (or learn) enough cases to make intelligent decisions across the broad spectrum of situations that true intelligence requires. Symbolic AI algorithms have likewise not generalized well. An alternative approach to AGI, full brain emulation, looks to be many decades away. Brain Simulator II combines facets of all three approaches and eases the development of new ideas.
What is Brain Simulator II?
Neurons: Brain Simulator II bridges the gap by providing a basis for trying out new algorithms and learning about the possibilities and limitations for various AGI approaches. At its lowest level, the simulator has an array of neurons using an “Integrate and Fire” model connected with any number of synapses. This is a biologically plausible model, and a powerful desktop computer can support millions of such neurons. Within the Brain Simulator, other models can be selected for groups of neurons, and adding new neuron models usually requires adding only a few lines of code.
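To make the “Integrate and Fire” idea concrete, here is a minimal sketch in Python. It is purely illustrative: the class names, the leak parameter, and the reset-to-zero behavior are assumptions for this example, not the simulator’s actual implementation.

```python
# Minimal integrate-and-fire neuron sketch (illustrative assumptions only).
# Each neuron accumulates charge from weighted synapses and fires when
# its charge reaches a threshold.

class Neuron:
    def __init__(self, threshold=1.0, leak=0.1):
        self.charge = 0.0
        self.threshold = threshold
        self.leak = leak          # charge lost per quiet time step
        self.synapses = []        # list of (target_neuron, weight)

    def connect(self, target, weight):
        self.synapses.append((target, weight))

    def step(self):
        """Fire if at/over threshold, propagating charge along synapses."""
        fired = self.charge >= self.threshold
        if fired:
            for target, weight in self.synapses:
                target.charge += weight   # a negative weight is inhibitory
            self.charge = 0.0             # reset after firing
        else:
            self.charge = max(0.0, self.charge - self.leak)
        return fired

# Two neurons: a fires, which pushes b over threshold on the next step.
a, b = Neuron(), Neuron()
a.connect(b, 1.0)
a.charge = 1.0
a.step()            # a fires and delivers charge 1.0 to b
print(b.step())     # True
```

Because the per-neuron state is just a charge value and a synapse list, millions of such neurons fit comfortably in desktop memory, which is what makes the biologically plausible model practical at scale.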
Illustrating the use of Brain Simulator II at the neuron/synapse level. This circuit can detect differential firing rates in the “A” and “B” neurons. Such a circuit could detect a boundary between areas of different color in the brain’s visual cortex. The colors of the various neurons indicate the state of charge of each. The colors of the synapses indicate their weight with black synapses being inhibitory. While not necessarily optimal, this sort of experimentation can give an idea of the neural complexity of mental processes that we take for granted.
Modules: Modules give the Brain Simulator its real power. Any cluster of neurons can be designated a module, which is then backed by any desired custom code. Some modules correspond to senses, providing inputs from vision, hearing, touch, and (simulated) taste and smell. Other modules handle actions such as speech and motion, either robotic or simulated. There are also modules that simulate two- or three-dimensional environments for development and testing purposes. Two important internal modules are the Universal Knowledge Store (a generalized knowledge graph) and the Internal Mental Model, both described in more detail below.
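The idea of backing a neuron cluster with custom code could be sketched as follows. The class and method names here are hypothetical stand-ins for illustration, not the actual Brain Simulator II module API:

```python
# Hypothetical sketch: a module is custom code attached to a neuron
# cluster, invoked once per engine cycle. Names are illustrative only.

class ModuleBase:
    def __init__(self, cluster):
        self.cluster = cluster    # charges of the neurons this module backs

    def fire(self):
        """Called every engine cycle; subclasses supply the behavior."""
        raise NotImplementedError

class BoundaryDetector(ModuleBase):
    """Reports when the first two neurons differ markedly in charge,
    akin to the boundary-detection circuit described earlier."""
    def fire(self):
        return abs(self.cluster[0] - self.cluster[1]) > 0.5

detector = BoundaryDetector([0.9, 0.1, 0.5])
print(detector.fire())    # True
```

The point of the pattern is that sensory, motor, and environment modules all share one tiny interface, so mixing symbolic code with spiking neurons requires no special plumbing.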
At the neuron level, users can experiment with neuron clusters that perform specific functions. For example, before the current Universal Knowledge Store module was created, a simple knowledge graph was built directly in neurons. The neural complexity of even the simplest graphs suggests that an equivalent structure in the human brain would be limited to about a hundred million nodes. Some portion of your brain is likely devoted to such a knowledge-graph structure, because your brain can answer similar types of questions (“What is the object you see?” “Can you name other similar objects?”).
Within the Brain Simulator, development is targeted at a digital entity we’ll call “Sallie.” In one application, Sallie can navigate mazes the way a child might. She remembers landmarks, the actions she took, and the outcomes that followed. All this information resides in the Universal Knowledge Store, so Sallie can subsequently recall how to revisit any goal she has previously reached.
In another application, Sallie learns a language the way a child might. This application is interesting in that the Universal Knowledge Store contains no specific language-related information. Sallie learns to speak by experimenting with semi-random syllables akin to baby-talk. Her hearing input is a continuous stream of syllables. Over time, she can learn sequences of syllables that form words and sequences of words that form phrases.
In this application, Sallie receives the sequence of phonemes “mɑmiændædi.” Based on her learning, she recognizes the phrase “ph6” in the Universal Knowledge Store. ph6 has links to the words she has learned: w0, w2, and w1. No data is stored in the nodes themselves; the knowledge consists entirely of links. The labels are for programming convenience (so one can see that w0 is “mɑmi”) but are not used for computation.
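The link-only representation can be sketched as follows. The node names (w0, w2, w1, ph6) come from the example above; the class and helper names are illustrative assumptions, not the actual Knowledge Store code:

```python
# Sketch of link-only knowledge: nodes store no content, and meaning
# comes entirely from the link structure. Labels exist only for the
# programmer's benefit and play no part in computation.

class Node:
    def __init__(self, label):
        self.label = label     # convenience only
        self.links = []        # ordered child nodes

def learn_sequence(label, children):
    node = Node(label)
    node.links = children
    return node

# Syllable nodes combine into words; words combine into a phrase.
ma, mi = Node("mɑ"), Node("mi")
aen = Node("æn")
dae, di = Node("dæ"), Node("di")
w0 = learn_sequence("w0", [ma, mi])    # "mɑmi"
w2 = learn_sequence("w2", [aen])       # "æn"
w1 = learn_sequence("w1", [dae, di])   # "dædi"
ph6 = learn_sequence("ph6", [w0, w2, w1])

def spell(node):
    """Recover the phoneme string purely by following links."""
    if not node.links:
        return node.label
    return "".join(spell(child) for child in node.links)

print(spell(ph6))   # mɑmiændædi
```

Recognition runs the same structure in the other direction: an incoming phoneme stream is matched against stored sequences, so hearing “mɑmiændædi” activates ph6.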
These two applications can be combined. Sallie can learn to relate words to things she sees and touches in the environment and can answer some questions about what she sees.
Sallie can navigate mazes using a technique that is similar to a child’s. The Universal Knowledge Store captures landmarks, and Sallie continuously matches her surroundings to landmarks in a rotation-independent manner so she can recall previous decisions and outcomes. At each recognized landmark, Sallie adjusts her internal mental model, so small errors do not accumulate. Because the Knowledge Store is universal, Sallie can also learn names for specific locations and then can be directed to return to them.
The Cognitive Model
Every AGI endeavor has some sort of Cognitive Model describing mental activities that lead to intelligence. The model currently under development in the Brain Simulator includes facets that are necessary components of general intelligence. Many of narrow AI’s shortfalls relate to mental abilities common to any three-year-old: object persistence, cause and effect, the passage of time, etc. The cognitive model requires multiple senses, an internal mental model that integrates them, and the Universal Knowledge Store, which saves and searches for information in a biologically plausible way. The overall model is detailed in the author’s book “Will Computers Revolt?” which is available free to anyone downloading the software.
Biological plausibility is a guiding principle. For example, it’s obvious that your brain doesn’t store words as strings of Unicode characters, so the Knowledge Store uses alternative, more plausible methods. At the same time, there is no need to exclude features that may make computer intelligence vastly more efficient than its biological counterpart. For example, you know the relative directions and distances to the objects around you. When you move or turn, your brain keeps track of the objects behind you with vast arrays of “grid cells.” A computer can do the same job more efficiently with a few lines of trigonometry.
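As an illustration of those few lines of trigonometry, the sketch below updates agent-centered object positions after the agent moves and turns. The function name and the coordinate conventions (x forward, rotation applied after the move) are assumptions made for this example:

```python
# Track object positions relative to a moving agent with plain
# trigonometry -- an illustrative stand-in for arrays of grid cells.
import math

def update_egocentric(objects, forward, turn_deg):
    """objects: (x, y) pairs in agent-centered coordinates, x = ahead.
    Apply the agent's motion: move `forward` units, then turn by turn_deg."""
    t = math.radians(-turn_deg)   # the world appears to rotate opposite the turn
    cos_t, sin_t = math.cos(t), math.sin(t)
    updated = []
    for x, y in objects:
        x -= forward              # moving forward brings objects closer
        updated.append((x * cos_t - y * sin_t, x * sin_t + y * cos_t))
    return updated

# An object 2 units straight ahead; after a 90-degree turn it is off to
# one side, at roughly (0, -2) in these conventions.
x, y = update_egocentric([(2.0, 0.0)], forward=0.0, turn_deg=90)[0]
print(round(x, 6), round(y, 6))   # 0.0 -2.0
```

A handful of multiplications per object replaces the enormous neural machinery the brain uses for the same bookkeeping, which is exactly the kind of efficiency shortcut the text argues a computer intelligence should keep.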
The overall development roadmap for the Brain Simulator II leads to an end-to-end AGI system. But even if the cognitive model falls short, significant algorithmic advances are possible. For example, many instances within Brain Simulator modules include loops of the form:
foreach (Neuron n in SomeCluster) DoSomething(n);
foreach (Neuron n in SomeCluster) SearchForSomething(n);
foreach (Neuron n in SomeCluster) FindBestWithNoExactMatch(n);
In a neural network environment, these loops can be parallelized for speed and to mimic the operation of brain areas. These loop internals can be generalized into a group of algorithms that can form a basis for future AGI development.
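One way such loops might be generalized and parallelized is sketched below in Python (the simulator’s own modules are not written this way; the helper names are hypothetical):

```python
# Sketch: generalize "foreach neuron in cluster" into a parallel map,
# then build higher-level operations such as best-match on top of it.
from concurrent.futures import ThreadPoolExecutor

def parallel_foreach(cluster, action, workers=4):
    """Apply `action` to every neuron in the cluster, in parallel,
    returning the results in cluster order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(action, cluster))

def best_match(cluster, score):
    """'Find the best with no exact match': a parallel scoring pass
    followed by an argmax over the scores."""
    scores = parallel_foreach(cluster, score)
    return max(range(len(cluster)), key=scores.__getitem__)

# Example: find the neuron whose charge is closest to 0.5.
charges = [0.2, 0.9, 0.4]
print(best_match(charges, lambda c: -abs(c - 0.5)))   # 2
```

Because each neuron is scored independently, the same pattern maps directly onto GPU-style hardware, which is where the speed and brain-area mimicry mentioned above would come from.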
Anyone interested in Brain Simulator II can follow along or participate in the development process by downloading the software, suggesting new features, and (for advanced developers) even adding custom modules. Visit http://brainsim.org for download, introductory videos, and links to the GitHub project site.
Bio: Charles Simon, BSEE, MSCS (@futureai3) is a nationally recognized entrepreneur and software developer with many years of industry computing experience, including pioneering work in AI. His technical experience includes the creation of two unique Artificial Intelligence systems along with software for successful neurological test equipment. Combining AI development with biomedical nerve-signal testing gives him a singular insight into the field. He is also the author of “Will Computers Revolt? Preparing for the Future of Artificial Intelligence” and the developer of Brain Simulator II, an AGI research software platform that combines a neural-network model with the ability to write code for any neuron cluster, making it easy to mix neural and symbolic AI code.