Solid State Intelligence?

Q: I saw a quote of John Lilly’s in Allan Combs’ book Synchronicity. I had run across Lilly’s name before (probably when I was reading Deep Spirit and trying to figure out what research had been done on interspecies communication).

Anyway, considering that I had just immersed myself in Ray Kurzweil’s ideas and was thinking about panpsychism and the mind/body problem, I wondered whether machines could ever become conscious. Then, on Wikipedia, I found this interesting note on John Lilly, and I wondered what you think of it:

Solid State Intelligence, or SSI, is a malevolent entity described by John C. Lilly (see The Scientist). According to Lilly, the network of computation-capable solid state systems (electronics) engineered by humans will eventually develop (or has already developed) into an autonomous life-form. Since the optimal survival conditions for this life-form (low-temperature vacuum) are drastically different from those needed by humans (room temperature aerial atmosphere and adequate water supply), Lilly predicted (or “prophesied,” based on his ketamine-induced visions) a dramatic conflict between the two forms of intelligence.

CdeQ: In principle, the idea of machine intelligence is coherent, provided we accept two fundamental assumptions or conditions. (1) If we start with panpsychism as a given, then all matter is inherently sentient and conscious, and it is conceivable that a complex machine might rise in consciousness to the level of awareness we would consider “intelligent.”

However, for machines to achieve that, a second condition would also be crucial: (2) the interrelations between the parts of the machine would have to be more than merely physical. The parts would also have to be internally related (nonphysically and intersubjectively) so that the whole would exhibit or express a unified consciousness that transcends and includes all its parts. So far, this degree of internal relatedness has been achieved only by organisms, through millions, if not billions, of years of evolution.

While it is conceivable in principle that human-designed (or sufficiently programmed and self-learning) machines could achieve a level of internal relatedness and coherence matching or even surpassing that of higher life forms (e.g., humans, dolphins, whales, elephants, parrots, octopuses), so far I see little or no evidence that machines come anywhere close to matching even a single-celled bacterium in internal complexity and coherence.

The problem with Kurzweil, and with many other AI enthusiasts (Lilly, it seems, may fall into this camp), is a fundamental confusion between intelligence and computation. Indeed, machines can (and already do) compute much faster and more efficiently than human brains, and they will likely accelerate that capacity further in the coming years and decades. However, computation is a physical process (involving digital circuits), whereas intelligence is a nonphysical conscious process involving meaning. An information processor (a computer) is not a meaning processor (a sentient being). There is a world of difference between the two.

No Miracle of ‘Emergence’

The fallacy in the idea of “machine intelligence” is, typically, a re-run of the mind-body problem and the notion of “emergence.” The assumption is that when computation becomes sufficiently fast and complex, it will, miraculously, transform into “intelligence.” The ontological gap involved in that assumption is exactly the same, and just as impossible to bridge, as getting mind from matter: the idea that the material complexity of insentient neurons and brains could produce the “miracle” of consciousness. That “miracle” is inconceivable (as I discuss in Radical Nature, especially in the Epilogue).