Other Brains, Other Beings

"Without consciousness the mind-body problem would be much less interesting. With consciousness it seems hopeless." -Thomas Nagel, What Is It Like To Be A Bat?

After publishing the last post (on an apparently immortal jellyfish), I got an email from a friend who suggested a book: Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness. It's not a terribly long book, despite its complex subject, but it's of great depth, and moreover it's of a type rather rare these days: a combination of philosophy and science that looks at one of the hardest of hard problems, the problem of consciousness.

There are two threads running through it – the first is an open-ended narrative, which begins with the question of how life began, and how intelligence emerged gradually from the eons-long, very slow aggregation of different varieties of biological control systems.

The second thread, which runs parallel to the first, is an attempt to understand what consciousness is, and how we can understand it, by looking at an apparently intelligent animal with a brain so different from our own that it might as well have developed on an alien world: the octopus. The octopus is indisputably a smart animal, but its intelligence is hard to gauge, because octopuses are also (apparently) rather mischievous and playful, and in labs they have a tendency to do things like figure out how to break equipment (one captive octopus enjoyed blowing out light bulbs by squirting water at them) and escape from their tanks, rather than playing along with intelligence tests.

But how their minds make a picture of the world is hard to imagine, because their brains are so different from ours – the brain itself is a ring, set into the animal's body below its eyes, and the esophagus passes through the center of the ring (the author, Peter Godfrey-Smith, who is Professor of History and Philosophy of Science at the University of Sydney, points out that this seems like a design flaw, since it means that if the octopus swallows anything that happens to puncture the gullet, the result is a brain injury).

Moreover, the majority of the neurons in the animal's nervous system (the common octopus has around half a billion neurons, approaching the number found in a cat) aren't in the brain at all, but rather in the animal's arms, which are capable of a high degree of independent action. The lack of a skeleton means that the body can assume almost any shape and squeeze through tiny apertures, so the octopus's internal model of itself must be very different from the relatively well-ordered one that vertebrates have. Yet its behavior – octopuses are aggressive hunters, and complex, sharply alert creatures, albeit generally very antisocial – shows every sign of an eerily, intuitively understandable mental life. An octopus, in the words of one researcher, seems to look back at you when you look at it, in a way that a fish (for example) does not. Not bad for a mollusk – the octopus belongs to a phylum that includes relatively lowbrow critters like snails, slugs, and clams, and it stands head and shoulders above its less neurologically sophisticated cousins.

The book got me wondering – again – just what it must feel like to be an animal, and moreover, at what point consciousness arises. This is taking the word "consciousness" in a particular sense – as Thomas Nagel describes it in his famous essay, it's the idea that there is a way in which being something (a bat, an octopus, a dog, a person) feels like something. It seems clear that there is not actually a specific level of complexity at which consciousness suddenly occurs, like the flicking on of a light switch – the idea that at 500,000,000 neurons there's no consciousness but at 500,000,001 there is (for example) seems absurd. But that means that there must be some sort of continuum of internal experience, leading from simple nervous systems to more complex ones – that, in fact, it does feel like something to be a honeybee, or a fly; that it does feel like something to be an octopus, whatever that something is.

So what's necessary for consciousness? At minimum, there seems to be a need for neurons, which are cells that generate electrical currents – an electrical current is a flow of charged particles, so it can mean the movement of ions (charged atoms or molecules) as well as electrons, and the flow of ions across the cell membranes of neurons is the basis of electrical signaling in nervous systems. This is complicated by the fact that while signals are conducted electrically along neurons, communication between neurons is chemical: a nerve impulse, or action potential, reaches the end of an axon and causes a burst of neurotransmitters to be released, which can trigger another action potential in the recipient neuron. Fundamentally, though, the suite of organized signaling activity that seems a prerequisite for consciousness is, as far as we can tell, produced by neurons, and not by other types of cells.
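
To make that cycle a little more concrete, here is a toy sketch in Python of the "integrate, cross a threshold, fire, hand off" pattern – a so-called leaky integrate-and-fire model, which is a standard cartoon of a neuron rather than a faithful one (real neurons call for far richer descriptions, like the Hodgkin-Huxley equations). Every number in it – thresholds, leak rates, connection weights – is invented purely for illustration.

```python
# A toy "integrate-and-fire" sketch of two connected neurons (illustrative only;
# all thresholds, decay rates, and weights are made-up numbers, not measured
# biology). Each neuron accumulates input; when it crosses its threshold it
# "fires", resets, and passes a chemical nudge (a weight) downstream.

class ToyNeuron:
    def __init__(self, name, threshold=1.0, leak=0.9):
        self.name = name
        self.potential = 0.0      # stand-in for membrane potential
        self.threshold = threshold
        self.leak = leak          # potential decays each step (a "leaky" neuron)
        self.downstream = []      # (neuron, synaptic weight) pairs

    def connect(self, other, weight):
        self.downstream.append((other, weight))

    def receive(self, amount):
        self.potential += amount

    def step(self):
        """Advance one time step; return True if this neuron fired."""
        self.potential *= self.leak
        if self.potential >= self.threshold:
            self.potential = 0.0              # reset after the spike
            for other, weight in self.downstream:
                other.receive(weight)         # the "neurotransmitter" hand-off
            return True
        return False

# Drive neuron A with a steady trickle of input and watch spikes propagate to B.
a, b = ToyNeuron("A"), ToyNeuron("B", threshold=0.8)
a.connect(b, weight=0.5)
for t in range(20):
    a.receive(0.3)                            # external stimulus each step
    fired = [n.name for n in (a, b) if n.step()]
    if fired:
        print(f"t={t}: fired {fired}")
```

Run it and A spikes every few steps; every so often its accumulated nudges push B over its own threshold and B spikes too – a caricature, nothing more, of signals propagating through a nervous system.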

It's hard to be sure of this, though. If we suppose that consciousness arises from the suite of signals that neurons produce, then it ought to be possible to duplicate the formal structure of those signals without the particular apparatus that neurons provide; you could reproduce that structure with, for instance, piles of stones that are moved around according to a certain set of rules. This seems like a reasonable extrapolation, and yet I struggle to imagine consciousness, let alone sentience, arising from moving pebbles around, irrespective of the sophistication of the rulebook; this may be an irrational prejudice on my part. Is there a certain threshold of speed in signaling that's necessary as well? Can consciousness only exist in a system of signals in which there is a temporal relationship between internal and external events? Maybe – maybe there has to be some connection between the rate at which events unfold in embodied nervous systems, and the rate at which external stimuli unfold, before an internal narrative that unifies both can arise.
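
To make the stones-and-rulebook thought experiment concrete, here is the same sort of firing pattern reproduced with nothing but piles of stones (plain integers) and one rule applied over and over. There is nothing electrical or chemical in it – which is exactly the point, and exactly the source of my unease: the formal pattern survives the change of medium, whatever else does or doesn't. The particular quantities are, again, invented for illustration.

```python
# A deliberately crude "stones and rulebook" version of the same signaling
# pattern: each pile is just a count of stones, and one rule is applied over
# and over. The point is only that the *pattern* of firing can be copied into
# an arbitrary medium. (All quantities are invented for illustration.)

piles = {"A": 0, "B": 0}          # stone piles standing in for two "neurons"
THRESHOLD = {"A": 10, "B": 8}     # a pile that reaches its threshold "fires"
SEND = {"A": [("B", 5)]}          # when A fires, move 5 stones onto B's pile

for step in range(30):
    piles["A"] += 3               # the "stimulus": add three stones to A each step
    fired = []
    for name in ("A", "B"):
        if piles[name] >= THRESHOLD[name]:
            piles[name] = 0                       # empty the pile (the "reset")
            for target, stones in SEND.get(name, []):
                piles[target] += stones           # move stones downstream
            fired.append(name)
    if fired:
        print(f"step {step}: fired {fired}")
```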

If the action potential is the basic prerequisite for consciousness, then there might be some very weird forms of consciousness out there. Cells that can create action potentials are found across a remarkable range of organisms, including plants (the action potential may have arisen very early in the evolution of cells, as a way of regulating osmotic pressure inside the earliest cells by controlling the movement of ions across the cell membrane).

We don't generally ascribe intelligence to jellyfish, which seem the very archetype of sybaritic aquatic passivity, but they do have neurons – is there a way that it feels to be a jellyfish? Most jellyfish don't exhibit very dynamic external behavior (which doesn't necessarily mean there is no inner life at all, just that we don't see much of the behavior we associate with one), but at least one group is almost shockingly alert in its behavior: the box jellyfish. Box jellies are among the most dangerously venomous animals in the world, but they're also geniuses among jellyfish. You may sniff that this is not setting the bar very high, but box jellyfish have a comparatively sophisticated nerve ring connected to sets of eyes – true eyes, with corneas, lenses, and retinas, not the simple eyespots of other jellyfish – and the nerve ring uses visual input to direct the animal's swimming, which, in contrast to the passive drifting of most jellyfish, has been described as very active and almost fish-like; one species even uses landmarks on shore to help it navigate.

And there are action potentials in some plants as well. Both sundews and Venus flytraps are capable of rapid movement when they capture prey (one wonders if some form of consciousness is a prerequisite for active predation; certainly it seems impossible to imagine successful hunting behavior without it). The Venus flytrap is fascinating. The "trap" is a bear-trap-like arrangement of two flattened leaf lobes with long, fine trigger hairs bristling from their inner surfaces, and when an insect touches them, action potentials are generated; a couple of touches in quick succession cause the two lobes to snap shut, trapping the prey. And then the plant does something really amazing: it counts.

If the trigger hairs don't register at least five stimulations, the flytrap won't start producing digestive enzymes (which are metabolically costly to produce, and are only "worth it" if the plant has successfully trapped a reasonably-sized bug). The idea of a plant counting to five is pretty mind-boggling – of course, it doesn't feel itself counting the way you or I would feel ourselves counting, but that merely raises the question of what it might actually feel like, to the Venus flytrap, to count to five. There doesn't seem to be anywhere near the necessary complexity for any sort of subjectivity to arise – but then, if there isn't a clearly definable minimum structural complexity for the feeling of being like something, we don't know for sure that there isn't, somehow, a way that it feels to be a Venus flytrap.
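
As a cartoon of what that counting amounts to, here is the logic sketched in Python – an event counter with a fading memory. The thresholds (a couple of touches to shut the trap, five in total to commit to digestion) follow the account above, but the timeout window and everything else about the code are illustrative stand-ins, not plant physiology.

```python
# A cartoon of the flytrap's "counting": each touch of a trigger hair is an
# event; a couple of touches in quick succession close the trap, and five in
# total commit the plant to making digestive enzymes. The thresholds and the
# timeout window are illustrative stand-ins, not measured plant physiology.

from dataclasses import dataclass, field

@dataclass
class ToyFlytrap:
    close_after: int = 2          # touches needed to snap shut
    digest_after: int = 5         # total touches needed to start digestion
    memory_window: float = 30.0   # seconds before the count "fades"
    touches: list = field(default_factory=list)
    closed: bool = False
    digesting: bool = False

    def touch(self, t):
        """Register a trigger-hair stimulation at time t (in seconds)."""
        # Forget touches that happened too long ago.
        self.touches = [x for x in self.touches if t - x <= self.memory_window]
        self.touches.append(t)
        count = len(self.touches)
        if not self.closed and count >= self.close_after:
            self.closed = True
        if self.closed and count >= self.digest_after:
            self.digesting = True
        return f"t={t:>5.1f}s touches={count} closed={self.closed} digesting={self.digesting}"

trap = ToyFlytrap()
for t in [0.0, 1.5, 3.0, 4.0, 6.0]:   # a struggling insect keeps hitting the hairs
    print(trap.touch(t))
# A single stray touch, long after the others, is not enough on its own:
print(ToyFlytrap().touch(100.0))
```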

I've practiced meditation (Zen/Ch'an) for many years, and about ten years ago something very interesting happened. I wasn't actually meditating at that moment, though it was a period when I was meditating fairly intensively – a couple of hours a day. I was folding laundry, and suddenly it was as if I wasn't there at all, in the conventional sense – there isn't really any way to describe it; it was as if laundry folding were simply occurring, not as if there were an I present, particularly, doing anything. I have often wondered whether that mightn't have been a hint of what it feels like to be something, on the most basic level. Perhaps to be a fly feels something like that: no constructed realities as such, just a bright, clear sensation of presence in the world, at least until the flyswatter descends.

Jack Forster