The Quest for Better Interconnect
As machine intelligence compute architectures mimic the cortex, the fundamentals of a planar manufacturing process (used for both semiconductors and today's solid-state quantum computers) bring the interconnect constraint into sharp focus. A cutting-edge chip today has 10-13 layers of metal and 30 miles of wires (SemiEngineering). These are the interconnect lines, and the problem of mapping a 3D construct like the cortex onto an essentially flat chip becomes apparent when you consider that the average adult neuron connects to 1,000 others (and 10,000 as an infant). That 1,000x synapse-to-neuron fanout means pure biomimicry of the brain implies ~1,000 interconnect lines for each compute element.
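The fanout arithmetic above can be put as a back-of-envelope sketch. This is purely illustrative: the fanout figures come from the text, while the ~86 billion neuron count is a commonly cited estimate, not a number from this post.

```python
# Back-of-envelope wiring demand for cortex-like fanout.
# Illustrative only: ~86B neurons is a common estimate (an assumption here);
# fanout figures are from the text (1,000 adult, 10,000 infant).

NEURONS = 86_000_000_000
FANOUT_ADULT = 1_000
FANOUT_INFANT = 10_000

def interconnect_lines(elements: int, fanout: int) -> int:
    """Interconnect lines needed if every compute element
    mirrors the synaptic fanout of a neuron."""
    return elements * fanout

print(f"adult-fanout lines:  {interconnect_lines(NEURONS, FANOUT_ADULT):.2e}")
print(f"infant-fanout lines: {interconnect_lines(NEURONS, FANOUT_INFANT):.2e}")
```

Even at adult fanout, the line count is orders of magnitude beyond what any planar metal stack can route, which is the heart of the interconnect constraint.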
Pictured here is the interconnect topology of D-Wave's latest quantum computer, called Pegasus (best seen in this animated GIF). They have evolved from nearest-neighbor connectivity to the most-connected commercial system in the world, scaling to 5,000 qubits, as unveiled today (TechCrunch, HPCwire).
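To see how topology drives wiring, here is a rough sketch of coupler counts. The per-qubit degrees are D-Wave's published connectivity figures (6 for the earlier Chimera topology, 15 for Pegasus); the 2048-qubit count for the Chimera-based 2000Q is an assumption drawn from outside this post, and the totals are approximations that ignore boundary effects.

```python
# Rough coupler-count comparison across D-Wave interconnect topologies.
# Degrees are D-Wave's published per-qubit connectivity (Chimera 6, Pegasus 15);
# qubit counts are approximate, and edge effects are ignored.

TOPOLOGIES = {
    "Chimera (2000Q era)": {"qubits": 2048, "degree": 6},
    "Pegasus (5,000-qubit system)": {"qubits": 5000, "degree": 15},
}

def total_couplers(qubits: int, degree: int) -> int:
    """Each coupler joins two qubits, so total ~= qubits * degree / 2."""
    return qubits * degree // 2

for name, t in TOPOLOGIES.items():
    print(f"{name}: ~{total_couplers(t['qubits'], t['degree'])} couplers")
```

The point of the comparison: more qubits at 2.5x the degree compounds into roughly a 6x jump in couplers, which is why richer connectivity, not just qubit count, is the hard scaling axis.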
In-memory compute from Mythic and quantum machine learning from D-Wave are already based on massively distributed, memory-centric architectures, much like the brain. I am still searching for a disruptive breakthrough in interconnect, having first blogged about it as the conclusion here, 14 years ago, when D-Wave was just starting to scale from their 2-qubit to 4-qubit processor.
From my 2005 post:
“As a former chip designer, I kept thinking of comparisons between the different “memories” – those in our head and those in our computers. It seems that the developmental trajectory of electronics is recapitulating the evolutionary history of the brain. Specifically, both are saturating with a memory-centric architecture. Is this a fundamental attractor in computation and cognition? Might a conceptual focus on speedy computation be blinding us to a memory-centric approach to artificial intelligence? ….
Weaving these brain and semi industry threads together, the potential for intelligence in artificial systems is ripe for a Renaissance. Hawkins ends his book with a call to action: “now is the time to start building cortex-like memory systems. The human brain is not even close to the limit” of possibility.
Hawkins estimates that the memory size of the human brain is 8 terabytes, which is no longer beyond the reach of commercial technology. The issue, though, is not the amount of memory, but the need for massive and dynamic interconnect. I would be interested to hear from anyone with solutions to the interconnect scaling problem. Biomimicry of the synapse, from sprouting to pruning, may be the missing link for the Renaissance.”