Substrate-Independent Minds

Randal A. Koene, Ph.D.
Carboncopies.org

Please note: This is the unreviewed and unedited preprint version of the article, as submitted for publication in the Australian & New Zealand edition of Issues Magazine. The likely publication date of the authoritative version of the article – in print only – is April 2012. If you wish to cite the article, please reference that authoritative published version. Also, a.) this online version is somewhat more complete, as the printed version had to be trimmed down, and b.) a scan of the print version is attached below this page for your convenience.

At carboncopies.org, we pull together the expertise and the projects that are needed to achieve what we call substrate-independent minds, SIM for short. Let me introduce SIM, what it means, what it is about, and in particular, its feasibility.

The predictions of futurists, the expectations of technologists and scientists, the anticipation and concerns of philosophers, and the epics of science fiction authors and cinema all have much to say about life-extension, life-expansion, augmentation, brain-computer interfaces, artificial minds, artificial intelligence, and artificial life. What may those things come to mean for the life that we know?

After all, in the end we must come full circle. Far from any objective universal purpose, it all begins with us, with our wishes and our drives. It is in that context that we introduce SIM. What is it? Why would you want it? And how can we do it?

For a moment therefore, let us take a step back and look at ourselves as a species and as individuals.

What are You?

You are a collection of things. You are the result of your experiences. You are your body's sensations and actions. And you are a unique expression of characteristic responses. But all of those exist in only one place. For you, the universe outside does not exist directly. You cannot even touch it.

When you believe you are touching the smooth surface of a table, the protons and neutrons of the atoms that make up your fingers never collide with those of the table. Almost everything is empty space. Instead, forces initiate electric signals in your fingertips, which are carried by nerves into your brain. Even then, you remain completely unaware of that signal until the information it carries has been processed within your mind.

Everything that you are, and everything that the universe is, exists to you only through that processing. What is processed in your mind is all that you can be aware of, and for you, all that exists. So, when we say that we want to extend or expand life, what we really mean is that we want to extend or expand that processing in your mind. It is that which we seek to safeguard.

One way to safeguard the mind is to maintain that which implements it and enables it to function right now: the body that contains the brain, and the environment upon which that body depends. The whole system in which the mind exists needs to be sustained, with no significant changes and no accidents. If the atmosphere suddenly disappeared, the system would fail and the mind would end.

The other way is to directly address the processes that are all that we are. Access them. Treat them like we treat valuable information, like mission-critical programs. Keep them safe by making backups. With access, correct problems and offer updates. Run the processes in a fault-tolerant implementation. Run them in implementations suited to new environments and new challenges.

This approach is SIM. Unlike the maintenance approach, SIM is about access. It depends on data acquisition. To be, to exist, the functions of the mind need to carry out their processing on a processing platform, a substrate. But when those functions can be implemented on a variety of different processing substrates, then we have achieved a substrate-independent mind. By analogy, think of programs that are written in platform-independent code.
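
To make the analogy concrete, here is a minimal sketch (in Python, with all names invented purely for illustration) of one "function of mind" defined against an abstract substrate interface, so that the same processing can run on more than one platform:

```python
# Illustrative sketch only: nothing below models a real system.

class Substrate:
    """Abstract processing platform; concrete substrates implement step()."""
    def step(self, state, inputs):
        raise NotImplementedError

class BiologicalSubstrate(Substrate):
    def step(self, state, inputs):
        # stand-in for neurophysiological dynamics
        return [s + i for s, i in zip(state, inputs)]

class SiliconSubstrate(Substrate):
    def step(self, state, inputs):
        # the same transition function, implemented differently
        return [s + i for s, i in zip(state, inputs)]

def run_mind(substrate, state, input_stream):
    # The "mind" is its processing; run_mind is indifferent to which
    # substrate carries that processing out.
    for inputs in input_stream:
        state = substrate.step(state, inputs)
    return state

print(run_mind(SiliconSubstrate(), [0, 0], [[1, 2], [3, 4]]))  # [4, 6]
```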

Maintenance or access, both approaches have value. For strategic purposes, to objectively compare their pros and cons, and especially to compare their feasibility in the foreseeable future, we need to look at concrete steps in explicit road maps.

Concrete Steps

The term “mind uploading” has been used to describe a transition from the brain's implementation of mind functions to SIM. Ideally, we would always re-compile functions of mind to make optimal use of a new target substrate. But at present, we do not understand enough about the hierarchy of interacting strategies employed at different cognitive levels of the mind to carry out such optimization. We do understand a great deal more about the principles of the fundamental biophysical components from which functions of mind emerge.

In neuroscience, we have experience identifying mechanistic aspects of neurophysiology, measuring functional responses and determining modulating contributors. While we may not have a complete descriptive catalog of all types of neurons, synaptic channels, and so forth, we do know how to obtain that information in a specific case when we need it. By analogy, it is as if we know how to read out the assembly language instructions of a program from its executable file, even though we do not have an adequate high level description to write an alternative implementation of the same program.

This is why the vast majority of actual research and development towards SIM is focused on the most conservative route, which we call whole brain emulation (WBE). In whole brain emulation, we aim to replicate the functions of neurophysiology and the structure of neuroanatomy that determines the interactions of basic components. The same general method, brain emulation at increasing resolution and scale, is adopted by pioneers on the advanced frontiers of computational neuroscience and neuroinformatics, frequently with previously unimaginable results.

We emphasize once more that the objective of Substrate-Independent Minds may be achieved in a number of different ways. Carboncopies.org is a priori technology-agnostic, and we have identified several conceptually distinct approaches, although the following focuses on the WBE approach.

Basic Elements

Every time we describe something, make a representation, or create a model, we have to choose the basic elements and the scope of the representation. We may consider components at the element level "black boxes" that we need to characterize. Different approaches to SIM choose different black-box levels.

A set of approaches known as Loosely-Coupled Off-Loading (LCOL) attempts to characterize whole-person or body behavior. Re-creations depend on sources such as self-reports, life-logs, video recordings, artificial intelligence that attempts to learn about an individual, and so on.

Another black-box choice is at the level of the brain or parts of a brain. That is a typical choice for approaches that are based on customized tuning of a general cognitive architecture, or that rely on partial neuroprostheses and brain-computer interfacing.

Figure 1: Morphologically detailed neurons (white cell bodies), extending axons (green) and dendrites (red), generated by NETMORPH (Koene et al., Neuroinformatics, vol. 7(3), 2009).

Bottom-up, there are the approaches that opt for black-box levels at the resolution of neurons or even the specific morphology of neurons (Fig.1). Those are the levels used in representations that are based on work in computational neuroscience and neuroinformatics. Presently, representations of that kind are the most concrete and usable in reconstructions for feasible whole brain emulation.

The times at which neurons produce action potential responses, the spike times of neurons, are the currency of the brain. It is the timing of those spikes that determines whether a synapse will be strengthened, weakened or remain unchanged. The timing therefore determines what is learned, which memories are encoded, how the system evolves, and how we change from moment to moment. Neural spikes are also what drives muscle cells, actuating our ability to move, to react, to speak, to live in interaction with our environment. In effect, and within an acceptable margin of error, it is the timing of those spikes that a whole brain emulation must replicate.
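
The way spike timing decides between strengthening and weakening can be illustrated with a standard pair-based spike-timing-dependent plasticity (STDP) rule from the computational neuroscience literature. This is a minimal sketch with illustrative constants, not a rule proposed in this article:

```python
import math

# Pair-based STDP: the sign and size of a synaptic weight change
# depend only on the relative timing of pre- and postsynaptic spikes.
A_PLUS, A_MINUS = 0.01, 0.012     # learning rates (illustrative values)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in milliseconds

def stdp_weight_change(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiation (strengthen synapse)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre: depression (weaken synapse)
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# A few milliseconds of timing difference decide strengthening
# versus weakening:
print(stdp_weight_change(10.0, 15.0))  # pre->post: positive change
print(stdp_weight_change(15.0, 10.0))  # post->pre: negative change
```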

Four Requirements

To achieve WBE, there are four main requirements (Fig. 2). 1.) We need to validate our hypotheses about the data resolution and scope that are needed for a successful reimplementation. 2.) We need to obtain the structural information about what is now known as the brain's connectome. 3.) We need to obtain the functional characteristics of the active components that are linked within the connectome. 4.) And we need a suitable platform on which to re-implement and emulate the functions of mind.

With regard to the first requirement, a modeling resolution is chosen. At and below that level, the functional characterization of elements is key: the elements need to be simple enough that we can capture all of their relevant behavior. Above the chosen level, structural characterization is key: it defines the interactions enabled by the connectome, which lead to emergent behavior.
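
A minimal sketch may help to picture that cut. With the black-box level set at the single neuron, each element below the cut is characterized functionally (here by an illustrative leaky integrate-and-fire model with invented parameters), while everything above the cut is pure structure:

```python
class LIFNeuron:
    """Functional characterization below the chosen resolution."""
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau, self.threshold, self.v = tau, threshold, 0.0

    def step(self, input_current, dt=1.0):
        # leaky integration of input current
        self.v += dt * (-self.v / self.tau + input_current)
        if self.v >= self.threshold:
            self.v = 0.0
            return True   # spike
        return False

# Structural characterization above the cut: the connectome as a
# weighted adjacency list (who connects to whom, and how strongly).
connectome = {"n1": [("n2", 0.8)], "n2": [("n3", 0.5)], "n3": []}
neurons = {name: LIFNeuron() for name in connectome}

print(neurons["n1"].step(1.2))  # strong input: immediate spike (True)
```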

Figure 2: The four requirements of whole brain emulation (yellow boxes), and six categories of projects (light blue), each of which can suffice to solve a requirement.

In principle, it is possible to use concurrent functional recordings from many neurons to deduce a functional connectivity map without directly measuring structural details at a higher resolution. In practice, that becomes very difficult in a large system with many possible combinations of input to each neuron. This deductive method treats the connectivity of each neuron as residing within the black-box, thereby representing a much more complex element. Fully characterizing the transition functions of such a black-box requires a very long period of observation or perturbation for sensitivity analysis. Latent functions are easily missed.
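
A toy version of that deductive method shows both the idea and why it scales poorly: a naive inference declares a functional connection whenever one neuron's spikes reliably follow another's, and must in principle test every ordered pair of neurons (and, worse, every combination of inputs). The thresholds and spike trains below are invented:

```python
import itertools

def follows_within(spikes_a, spikes_b, window=5.0):
    """Fraction of A's spikes followed by a B spike within `window` ms."""
    hits = sum(any(0.0 < tb - ta <= window for tb in spikes_b)
               for ta in spikes_a)
    return hits / len(spikes_a) if spikes_a else 0.0

def infer_connections(spike_trains, threshold=0.6):
    """Naive pairwise functional-connectivity estimate."""
    edges = []
    for a, b in itertools.permutations(spike_trains, 2):
        if follows_within(spike_trains[a], spike_trains[b]) >= threshold:
            edges.append((a, b))
    return edges

trains = {"n1": [1.0, 20.0, 40.0],
          "n2": [3.0, 22.5, 43.0],   # reliably follows n1
          "n3": [11.0, 35.0]}
print(infer_connections(trains))     # [('n1', 'n2')]
```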

Similarly, it is in principle possible to reduce the number of different types of measurement to just morphological measurements, a three-dimensional reconstruction. Hypothetically, the morphological features of neurons allow us to map them to functional categories and to estimate their parameter values within such categorized representations. In practice, that relies crucially on a procedure and a catalog for one-to-one category mapping, and it may be highly susceptible to systematic errors. Neural networks are famously robust to random errors, damage and noise. That robustness does not extend to systematic errors, such as spatial measurement errors caused by deviations or wear of instruments. Having no means of verification via a different type of measurement greatly complicates error correction.
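
A toy example (with invented numbers) makes the systematic-error concern concrete: a catalog lookup that maps a morphological feature to a functional category shrugs off random noise, but a consistent instrument bias silently misfiles an entire population:

```python
import random

# Neurons are assigned to catalog categories by a single morphological
# feature, here total dendritic length in micrometres.
CATALOG = {"type_A": 800.0, "type_B": 1200.0}  # prototype feature values

def categorize(dendritic_length):
    return min(CATALOG, key=lambda t: abs(CATALOG[t] - dendritic_length))

true_lengths = [790, 810, 805, 795]             # all genuinely type_A

random.seed(0)
noisy = [x + random.gauss(0, 20) for x in true_lengths]
biased = [x * 1.3 for x in true_lengths]        # 30% systematic scaling error

print([categorize(x) for x in noisy])   # random noise tolerated: still type_A
print([categorize(x) for x in biased])  # every neuron misfiled as type_B
```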

If we attempted to fine-tune parameters or correct errors in a network of 86 billion neurons, each represented by up to 10 thousand electrical equivalence compartments with 10 parameters each, just by observing the gross behavior of the completed system, then the sheer combinatorial size of the optimization problem would easily exceed the capabilities of any computational system, classical or quantum. It is therefore essential to shrink that problem, to isolate and tune small sub-systems, so that they reproduce reference responses, as measured during functional characterization at that resolution.
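
The figures in the text already put the size of that tuning problem out of reach, as a one-line calculation shows:

```python
# Back-of-the-envelope count of free parameters, using the figures
# from the text above:
neurons = 86e9       # neurons in a human brain
compartments = 10e3  # electrical compartments per neuron (upper bound)
params = 10          # parameters per compartment

print(f"{neurons * compartments * params:.1e} free parameters")  # 8.6e+15
```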

In any case, good engineering practice tells us that it is unwise, unless unavoidable, to rely on a one-step process in which there is no provision for the verification of partial reconstructions. Acquiring data, then building a full reimplementation and pressing “go” is risky. Having to correct problems in a very complex system, without carrying out smaller steps, greatly increases the degree of difficulty. It is for these reasons, and more, that a practical method for successful whole brain emulation should combine structural and functional measurements at large scale and high resolution.

Solution Projects

The four requirements for whole brain emulation are very concrete and there are solutions that are feasible by applying the capabilities of science and engineering today. Right now, several projects are in stages of preparation or execution. (For details, see http://carboncopies.org and my upcoming article on “Experimental Research in Whole Brain Emulation” in the 2012 special issue of the International Journal of Machine Consciousness.)

The obvious way to acquire a structural connectome is to look at the spatial morphology of cells and fibers in the brain. Electron microscopy provides the resolution that is needed. Automated sectioning and imaging of a brain gives us the scope. Such volume microscopy is actively developed by several groups (e.g. the ATLUM project at Harvard University).

An entirely different solution to the acquisition of the structural connectome is tagged connection inference. There, biological bar codes (e.g., distinct artificial sequences of DNA or RNA) are used to mark pre- and post-synaptic sites throughout the brain. The tags form bidirectional pointers between neurons. After extracting tags at all sites, the sets of pointers provide the structural connectome in terms of synapses between neurons. This biological tool is being developed in the laboratories of Dr. Anthony Zador and Dr. Ed Callaway.
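
In outline (and with invented sequences and names), the read-out step amounts to translating the recovered tag pairs into a directed connectivity graph:

```python
from collections import defaultdict

# Each tag uniquely identifies the neuron in which it was expressed.
tag_to_neuron = {"ACGT": "neuron_1", "GGTA": "neuron_2", "TTAC": "neuron_3"}

# (presynaptic tag, postsynaptic tag) pairs recovered from synaptic sites:
synapse_tags = [("ACGT", "GGTA"), ("ACGT", "TTAC"), ("GGTA", "TTAC"),
                ("ACGT", "GGTA")]  # repeated pair: second synapse, same cells

connectome = defaultdict(int)  # (pre, post) -> synapse count
for pre_tag, post_tag in synapse_tags:
    pre, post = tag_to_neuron[pre_tag], tag_to_neuron[post_tag]
    connectome[(pre, post)] += 1

for (pre, post), n in sorted(connectome.items()):
    print(f"{pre} -> {post}: {n} synapse(s)")
```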

To satisfy the resolution requirements of in-vivo functional characterization of the elements of the connectome, we look primarily to the development of new tools that can take these measurements from within. One strategy to manage scale and resolution is to establish a hierarchy of interfaces, reminiscent of the de-multiplexing of signals. Dr. Suzanne Gildert named this category the Demux-Tree approach. An example was introduced by Dr. Rodolfo Llinas, where the edges between nodes of the tree are formed by nanowires delivered through the vasculature of the brain. Flexible nanowires with a diameter of 500 nanometers have been developed at the New York University School of Medicine. Directing the wires into a Demux-Tree remains to be achieved, and a large number of nanowires still displaces significant brain volume.
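
The Demux-Tree idea can be sketched abstractly: recording sites sit at the leaves, and every internal node multiplexes its children's signals upward, so that a single root channel carries all traffic. The sketch below ignores the real constraints of bandwidth, addressing and power:

```python
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def collect(self):
        """Multiplex samples from all recording sites below this node."""
        if not self.children:                 # leaf: one recording site
            return [(self.name, read_site(self.name))]
        samples = []
        for child in self.children:
            samples.extend(child.collect())   # de-multiplexing in reverse
        return samples

def read_site(name):
    return 0.0  # placeholder for an electrical or optical measurement

# A two-level tree: one root channel fans out to 2 hubs and 4 sites.
root = Node("root", [Node("hub0", [Node("site00"), Node("site01")]),
                     Node("hub1", [Node("site10"), Node("site11")])])
print(root.collect())
```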

Here too, there are projects aimed at developing biological tools. These have the advantage that they readily operate at cellular and sub-cellular resolutions, and can do so in vast numbers throughout the neural tissue. A collaboration of laboratories at MIT, Harvard and Northwestern University, with contributions by affiliates of Halcyon Molecular, is preparing the development of such a tool, a Molecular Ticker-Tape (Kording, K.P., PLoS Computational Biology, 2011). Functional events, such as the activation of voltage-dependent receptors, will be recorded on biological media, such as DNA. The recordings may then be retrieved from the cells in which they reside.
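
The readout concept can be sketched in software terms, loosely inspired by the proposal in Kording (2011), in which neural activity modulates the error rate of a polymerase copying a known template: the local misincorporation frequency along the recorded strand then serves as a record of activity over time. All sequences and numbers below are invented:

```python
TEMPLATE = "ACGT" * 20  # known reference sequence
WINDOW = 10             # bases per time bin

def activity_trace(recorded):
    """Per-window mismatch rate between recorded strand and template."""
    trace = []
    for start in range(0, len(TEMPLATE), WINDOW):
        t = TEMPLATE[start:start + WINDOW]
        r = recorded[start:start + WINDOW]
        mismatches = sum(a != b for a, b in zip(t, r))
        trace.append(mismatches / len(t))
    return trace

# Simulated read-out: errors concentrated in the third time bin,
# as if the cell was most active then.
recorded = list(TEMPLATE)
for i in (21, 24, 27):
    recorded[i] = "A" if TEMPLATE[i] != "A" else "C"
print(activity_trace("".join(recorded)))  # peak of 0.3 in bin 3
```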

Explicitly designing processes and systems in biology, while avoiding undesired interactions and downstream effects, is still difficult. Finding the biophysical components for signal detection, achieving the incorporation of those channels, and introducing reliable strategies for molecular recording are exploratory and time-consuming efforts. The resolution and scale of these biological tools are extremely promising, although an in-vivo method of read-out is a desirable addition.

If we combine the benefits of both approaches, then we operate at sub-cellular scales while recording in-vivo, retaining only the nodes and not the physical edges of the Demux-Tree. An optimal implementation of that approach was conceived several decades ago by Dr. Eric Drexler, Dr. Ralph Merkle, Dr. Robert Freitas and others, in the form of nanoscopic robots. Nanotechnology is in its early stages, however, and we are not even very good at building macroscopic robots. What we are good at is developing and implementing integrated circuit technology. Shortly, we will describe a project to develop such a solution, a Micro-Neuro-Interface of sorts.

In the brain, the functions of mind are carried out by a highly parallel network of mostly silent, low-power processors – the neurons. Emulation of those functions will be more efficient on a similar computing substrate. That is why the development of neuromorphic computing platforms is of great interest. Examples are the hardware developed in the DARPA SyNAPSE project, the vastly extensible microchip architectures by Dr. Guy Paillet, and results of the European CAVIAR and FACETS projects.

Highlight 1: Volume Microscopy

Taking a closer look at the acquisition of structure data by electron microscopy, we note very promising results. In a 2011 Nature publication by Briggman et al., reconstruction based on serial block-face scanning electron microscopy (SBFSEM) is compared directly with prior functional recording in the same tissue. A second Nature publication, by Bock et al., applied a related volume electron microscopy approach to analyze neuronal circuitry in visual cortex. Those are strong demonstrations, but the serial block-face scanning technique is severely limited in terms of the brain volume that it can handle.

Over the past years, Dr. Ken Hayworth has focused on devising tools to solve that problem. The first devices, called Automatic Tape-Collecting Lathe Ultramicrotomes (ATLUMs), constructed in the laboratory of Dr. Jeff Lichtman, could losslessly section large volumes of brain tissue, collect the sections on tape and store them in a library that provides random access for imaging. The data resolution obtained is sufficient to see the individual vesicles that carry neurotransmitter within synapses, which are an indication of the strength of a synaptic connection (Fig. 3).

Figure 3: Electron micrograph of a section of cerebellum, clearly displaying the locations of synapses (e.g. the dark joining line within the red rectangle) and even neurotransmitter-carrying vesicles within the synapse (e.g. circles as identified by the white arrow). (Image courtesy of Dr. Ken Hayworth.)

Three-dimensional reconstruction is possible without loss of structure information. The reconstructions clearly show the cell bodies of individual neurons, and the detailed morphology of axons and dendrites, situated within neuronal circuitry with visible synaptic connections. It is important to realize that this is not merely a concept for futuristic developments, but a class of existing tools that can directly solve one of the main requirements for whole brain emulation (see Briggman et al., Nature, vol. 471, 2011). Those tools should be improved to effectively cope with the tissue volumes of a human brain, but that is a manageable technical hurdle. There are no significant unknowns awaiting unpredictable scientific insight.

Highlight 2: Micro-Neuro-Interfaces

We also understand integrated circuit technology, and with it we can readily build complex hierarchical systems, signal processing, communication and computing networks. Event-triggered operations consume little power, which can be delivered by pulsed infrared transmissions, glucose biofuel cells (Cinquin et al., 2010) or magnetic induction.

An agent with an 8 micron diameter, the size of a red blood cell, would be composed of functional circuitry (Fig. 4), infrared power delivery and communications, and an antenna for passive communications as in RFIDs. The passive infrared communication device, a so-called micro-OPID, is being developed by Dr. Yael Maguire. Using 32 nanometer IC technology, the agent, which can fit into the capillaries of the brain vasculature that supply every neuron, can hold 2,300 transistors – as many as the original, Turing-complete Intel 4004 microprocessor. With the 22 nanometer technology of 2011, the agent can hold four times as many transistors, as many as were used in the guidance systems of cruise missiles.


Figure 4: Concept schematic of Micro-Neuro-Interface circuit technology within a bio-compatible casing.

The integrated circuit has to be encased in a bio-compatible or a functionalized bio-active packaging. Although encasing in silicon may suffice, we note that Doshi et al. produced the protein shells for artificial red blood cells in 2009. Once we combine the artificial agent circuitry with a bio-compatible casing, we have a Micro-Neuro-Interface at the scale of a red blood cell.

Even eight micrometres is not ideal. A chip does not deform the way a red blood cell does. And we want to be able to operate outside the vasculature, in the interstitial spaces between cells. We can make smaller agents, such as the ones Dr. Gomez-Martinez and collaborators used to implant and operate in cells. Some of the agents can detect specific signals such as current. Some can stimulate or guide others. Some may cooperate to map structure from within. It is a hierarchical team, composed of nodes at cellular scale.

Such agents may operate within and outside the vasculature. Effectively, a hierarchical cloud of measurement computers carries out recordings. Agents with functionalized casings, such as nanovelcro produced by Duran et al., may interface with cell membranes. The larger hubs can distribute power and aggregate collected data for delivery. With such tools for access we can expand to many types of measurement.

If we introduce Micro-Neuro-Interfaces at a ratio of one 2 micrometer agent for each of the 86-100 billion neurons in a human brain, and one 8 micrometer hub for every ten smaller agents, then the agents will occupy about one cubic centimeter, less than 1/1700 of the volume of the brain. In effect, the edgeless Demux-Tree of the hierarchical cloud becomes an artificial processing network that co-resides and operates concurrently with the neuronal network of the brain.

Putting Together Whole Brain Emulation Tools

Clearly, there are beneficial ways to combine technologies developed in different projects. For example, the application of protein-based or microbial rhodopsin-based voltage indicators, as developed by the Cohen lab, can be a way for Micro-Neuro-Interfaces to optically register voltage changes. Or, high-resolution recordings on Molecular Ticker-Tape may be delivered in-vivo through agents.

To combine function and structure measurements, co-registration can be achieved in a number of ways. We may use local agent-to-agent topologies together with samples of morphological mapping carried out in-vivo by agents. We may also leave the Micro-Neuro-Interfaces in place, then carry out a volume microscopy in which the sectioned agents will show up at their locations within the tissue.
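
The second route can be pictured as a simple join between two data sets: agents found in the sectioned volume anchor their own functional recordings to structural coordinates. All identifiers and coordinates below are invented for illustration:

```python
# Functional recordings collected by each agent before sectioning:
functional_records = {"agent_7": [12.1, 15.4, 19.9],   # spike times (ms)
                      "agent_9": [13.0, 21.2]}

# Agent casings identified during volume microscopy, with the (x, y, z)
# voxel coordinates at which each was sectioned:
em_locations = {"agent_7": (1021, 344, 88), "agent_9": (1040, 361, 92)}

# Co-registration: pin each agent's recordings to its structural location.
coregistered = {aid: {"where": em_locations[aid], "spikes": spikes}
                for aid, spikes in functional_records.items()
                if aid in em_locations}
print(coregistered)
```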

All of these concrete projects that can solve the requirements for whole brain emulation are based on the combination of present-day technologies. We can plan phases of development and estimate resources.

Of course, there is more to achieving SIM than the emulation of mind functions. A crucial matter is that the mind, as in its original biological implementation, must have a full and rich experience within its surroundings. This is called embodiment. In a sense, we extend beyond our brains, beyond our bodies and into the universe that communicates with us through sensation and interaction. Those input and output transactions must also be provided, but that is a topic that goes beyond the core steps to SIM that are presented here.

In past years, I have made it my responsibility to seek out and bring together the pioneers and investigators, and to identify the technologies. With carboncopies.org, I put together, maintain and update road maps for WBE and SIM. An essential task has been to spot key pieces of the puzzle that require urgent attention. Now, we are directly involved with the projects, providing objective-oriented coordination and communication between them, and ensuring that results will meet the requirements and come together to achieve substrate-independent minds. That accomplishment will give our species the adaptability to handle, and the ability to benefit directly from, our technological advances – capacities we will need in order to thrive through impending new challenges.

References

Koene, R.A., Tijms, B., van Hees, P., Postma, F., de Ridder, S., Ramakers, G., van Pelt, J. and van Ooyen, A. (2009). NETMORPH: A framework for the stochastic generation of large scale neuronal networks with realistic neuron morphologies. Neuroinformatics. Vol.7(3), pp.195-210.

Koene, R.A. (2012). Experimental Research in Whole Brain Emulation: The Need for Innovative In-Vivo Measurement Techniques. International Journal of Machine Consciousness. Special Issue 2012, Accepted for Publication.

Kording, K.P. (2011). Of Toasters and Molecular Ticker Tapes. PLoS Computational Biology. Vol. 7(12), e1002291.

Briggman, K.L., Helmstaedter, M. and Denk, W. (2011). Wiring specificity in the direction-selectivity circuit of the retina. Nature. Vol. 471, pp. 183-188.

Cinquin, P., Gondran, C., Giroud, F., Mazabrard, S., Pellissier, A., Boucher, F., Alcaraz, J.-P., Gorgy, K., Lenouvel, F., Mathe, S., Porcu, P. and Cosnier, S. (2010). A Glucose BioFuel Cell Implanted in Rats. PLoS ONE. Vol. 5(5), e10476.

Doshi, N., Zahr, A.S., Bhaskar, S., Lahann, J. and Mitragotri, S. (2009). Red blood cell-mimicking synthetic biomaterial particles. Proceedings of the National Academy of Sciences of the USA. Vol. 106(51), pp. 21495-21499.

