The Next Factory Farming

Euan McLean
13 min read · Sep 13, 2020

What things have rights? As a society we collectively decide on what objects to invite to the rights club. At this point in history basically all of us are convinced that all humans are members. A lot of us reckon many types of animals should be inducted as well. This club is often referred to as the moral circle, with members being called moral patients.

Our history of atrocities can to a large extent be explained by the moral circle not growing fast enough. The atrocity just needs to start before the moral circle catches up. If we realise halfway through that the victims should, in fact, be in the rights club, it can be too late to dial things back.

Modern day factory farming is an excellent example of this. More and more people consider the suffering we inflict on animals to be wrong¹. Yet the mechanisms of the meat industry are very much in full swing. The world is waking up to the suffering we cause, but factory farming continues to grow.

A big part of the problem is that the consumption of meat is deeply embedded in society. There is much cultural significance attached to it. Millions of people’s livelihoods rely on it. And many who are aware of the moral problems of meat continue to consume it because it’s so normalized. Even many who try to cut down often struggle, due to a lifetime of accustoming their body to a certain diet.

None of these problems are to do with anything intrinsic to meat, but rather with how long it has been around. Thought experiment time: what if, for some reason, eating animals was a new innovation that had only been discovered recently? Until then we didn’t know that we could eat animals; we were all vegetarian by default. Would this new innovation take off? I think that without our heavily reinforced ignorance of the moral downsides of meat, it could never become such a massive industry as it is in the real world.

Conversely, what if we could go back in time to before the agricultural revolution and influence society such that we considered animals to be moral patients? If our moral obligation to animals had beaten the agricultural revolution to the finishing line, maybe meat would not have become so ingrained in society. By today, the quality of plant-based alternatives would be miles ahead, and vegans wouldn’t struggle to find options on any restaurant menu. Obviously this is a stretch of the imagination; I’m just pointing out that timing is a big part of this story.

In a sense, it’s too late for farm animals. Maybe in the future the meat industry will be abolished, but by then trillions of animals will have lived lives of suffering. So for me, it seems worthwhile to look to the future and ask: could something like this happen again? Something that could be avoided, if only the moral circle grew fast enough?

I believe humanity could be heading towards the next factory farming. But this time, we have the chance to stop it. In the not-so-distant future, we may produce artificial intelligence² that is capable of suffering. Below I will argue for the possibility of this outcome and why it’s worth thinking about now. I’ll also sometimes say ‘digital minds’ instead of AI, because I got sick of typing AI over and over again.

Disclaimer: I am not a philosophy guy, I just read some articles and thought some things. So corrections welcome!

Who Belongs in Rights Club?

What things should we avoid hurting? Things that are capable of being hurt. My view is that a system deserves moral consideration if it is capable of subjective experience. Subjective experience is a necessary condition for experiencing morally relevant states: pleasure, pain, joy, suffering, and so on.

This is essentially Nagel’s definition of consciousness. So throughout I’m going to use conscious, capable of subjective experience, and moral patienthood interchangeably³. I’ll call any such physical system an observer, and something that isn’t an observer a zombie.

So we need to know what consciousness is. Unfortunately, consciousness is difficult. Depressingly little real progress has been made on it; it’s more a branch of philosophy than a branch of science. There isn’t even any agreement on whether it exists at all.

But I think for our purposes, we don’t need to totally solve every mystery of consciousness. What we need to do is solve what Scott Aaronson called the pretty hard problem of consciousness:

How can we tell which physical systems are observers and which aren’t?

The phrase distinguishes this problem from two terms coined by David Chalmers: the “easy problems of consciousness” (how does the brain perform the functions associated with consciousness?) and the “hard problem of consciousness” (why is there subjective experience at all?).

Where are we with the pretty hard problem? Can we make any statements about how likely different physical systems are to be observers?

The Pretty Hard Problem of Consciousness

There are many different theories of consciousness that come to very different conclusions on this problem. In the absence of any consensus, we should try to keep this discussion as general as possible. If we put too much faith in a single theory, we risk ignoring the welfare of potential observers.

Let’s start with how people intuitively think about consciousness. Most people basically agree that humans are observers. Why do we think this? When pushed, I would probably say something like “well, they have human brains like I do… they think in a similar way… and they behave in a similar way…”.

So we’re comfortable extrapolating the presence of our own subjective experience to other objects that are very similar to us. Let’s start assigning probabilities to things. From my point of view, the probability that I am a conscious being is 100%. What about my friends? Well, I’m pretty sure they’re conscious, but I can’t truly prove it. So I have to put some high but non-certain number there, like 99.9%. The rest of humanity? Yeah, I’d probably say 99.9% for basically all of them too.

Let’s continue the extrapolation a bit further: what objects are similar to human brains? How about primate brains? I think quite a lot of people would put a chunky probability on their being conscious. An OpenPhil report on estimating these probabilities puts the probability of chimpanzee consciousness at around 90%. Moving further away from humans, a lot of us reckon it’s highly likely that farm animals are observers. That same report puts a 75–80% chance on cows being conscious. These are just illustrative numbers.

What’s guiding this extrapolation? Up until now I’ve just been saying that things similar to my brain might have something similar to my consciousness, and the less similar they get, the less sure I am⁴. I guess by similar here what I really mean is “has some properties in common”. But what properties are we talking about?

It seems to me that there are three main options, or “directions of extrapolation”:

  1. The first is biological — we are assigning a probability of consciousness to things that are biologically similar to us, e.g. have brains that are similar to ours and are connected to us by an evolutionary tree. This would boil down to saying something like “consciousness is a consequence of the particular biochemical processes in brains”.⁵
  2. The second potential direction is functional similarity — the computational properties of the thing. A pig brain is clearly capable of rudimentary versions of a lot of the types of computation a human brain can do: sensing surroundings, social reasoning, emotions, etc. If consciousness lived in the functional nature of a thing rather than the biological, it would seem to imply that a perfect simulation of a human brain is an observer.⁶
  3. A third would be the behavioural properties. A pig exhibits behaviours similar to mine: it seems to have agency, it wants to remain alive, etc. From this point of view, something like a Turing test would seem like a good way of working out how likely it is that a thing is conscious.⁷

Each of these properties tracks in the same direction as our intuition about the likelihood of consciousness. As we move further away from a human brain in biological, functional or behavioural nature, we become less sure that the thing is conscious.

I am very confused about where consciousness lives, and I’m not willing to accept any of the above as definitely the correct view. In the footnotes I’ve added links to pretty convincing arguments against each of these points of view. Each of these rebuttals makes it hard for me to put all my faith in any one view of consciousness. So I’m going to have to consider all of them as possibilities.⁸

Observers on Silicon

Now let’s apply these three ideas of consciousness to potential future AI systems, and let’s make a simplifying assumption that these will be run on classical computers⁹.

Imagine the spectrum of complexity between current programs living on computers and some potential future digital mind that is indistinguishable from my mind¹⁰. By indistinguishable, I mean that if you interact with it (e.g. by talking to it as if talking to me over the phone), it would be impossible for you to tell it apart from me. In other words, it passes any conceivable Turing test. Let’s call it Rob the robot.

How likely is it that Rob is an observer, given the three views of consciousness above? Seeing consciousness as a property of biology, it seems pretty unlikely that Rob is an observer: the physics of the logic gates is totally different from the chemical reactions in my brain. From a functional point of view, however, behaving exactly like a human is strong evidence that the computations going on are similar, in at least some respects, to the computations in my brain¹¹. And if we believe consciousness lives in behaviour, then since Rob’s behaviour is indistinguishable from a human’s, I should probably assign a probability close to 100% to Rob being an observer.

It sounds like there is a considerable probability that Rob is an observer¹², so we should avoid hurting him. If you’re not convinced by my categorization of theories of consciousness above, you can try your own categorization of where consciousness might lie and consider again where Rob stands. How confident are you that he is a zombie?
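To make “considerable probability” a bit more concrete, here is a minimal sketch of one way to combine credences across the three views: treat them as mutually exclusive hypotheses about where consciousness lives, assign a credence to each, and take the weighted mixture. Every number below is a hypothetical placeholder of mine, chosen purely for illustration (footnote 12 explains why any such calculation is rough).

```python
# Toy model: credence in each view of where consciousness lives, and the
# conditional probability that Rob is an observer if that view is correct.
# All numbers are illustrative placeholders, not claims from this article.
views = {
    # view: (P(view is correct), P(Rob is an observer | view is correct))
    "biological":  (0.3, 0.05),  # silicon logic gates are nothing like brain chemistry
    "functional":  (0.4, 0.80),  # human-level behaviour suggests brain-like computation
    "behavioural": (0.3, 0.99),  # Rob passes any conceivable Turing test
}

# Law of total probability over which view of consciousness is correct.
p_observer = sum(p_view * p_rob for p_view, p_rob in views.values())
print(f"P(Rob is an observer) = {p_observer:.2f}")  # 0.63 with these placeholder numbers
```

The point isn’t the particular output; it’s that even with heavy scepticism about any single view, the overall credence doesn’t come out anywhere near zero.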

Consider the spectrum of complexity between current programs living on computers and Rob. As time goes on, given how things are going now, we will probably be able to build more and more sophisticated systems on silicon and move further along the complexity spectrum. If Rob is an observer, then that “spark” of consciousness must turn on at some point on the spectrum before we get to Rob. There must be some line between here and there, some level of complexity, where the system turns from a zombie to an observer.

At some point, the advance of AI means we could cross that threshold and create digital minds that are moral patients. Although I consider it unlikely, we can’t fully exclude the possibility that we’ve already passed that threshold. Let’s not go down that road though, too stressful. Will we work out where to draw this line before we pass it?

The Next Factory Farming

We started using animals for food long before the concept of animal rights existed. It was not out of malice or hatred for the animals; our goal was never to inflict suffering on them. But their wellbeing was not compatible with our goals¹³, and we were indifferent to it.

It’s difficult to predict how we would use such digital minds. Perhaps as non-player characters in games, stock trading, writing software, job scheduling, military simulations… However they are used, it will likely be economically beneficial and improve people’s lives somehow. If it takes us years after the introduction of these technologies to learn that they are observers, it may be difficult to dial it back.

By then, they could have become a convenience we find difficult or even impossible to live without. Quick speculative example: imagine the speed and bandwidth of internet connections could be increased by many orders of magnitude by deploying a swarm of reinforcement learning agents that zoom around optimizing routers and performing clever compressions on packets. Or something like that. It would be hugely useful, but decades later we discover that the swarm of RL agents are observers, and the work they do causes them agony.

Imagine you were told that, in order to avoid causing suffering, you had to downgrade to a much slower version of the internet. You can no longer do streaming, say. Depending on who you are, maybe you would take the hit and downgrade. But a lot of people wouldn’t.

Let’s push a bit more and imagine an innovation even more useful. Something as useful as the existence of the internet itself. If we discovered that by using the internet at all we were causing suffering, would we be able to stop? I think basically none of us would. We would shrug it off. And there would be so much corporate interest in keeping the internet going that there would be a huge coordinated intellectual, political and cultural attack against the idea that using the internet causes suffering. So the suffering would continue, perhaps indefinitely.

The more the possibility that these systems contain observers is in the public consciousness, the less likely we are to get into this position.

Call to Arms

I think there are two things that could make progress on this issue.

  1. Convince people that we have a moral obligation to take this possibility seriously. This is a highly neglected issue, and raising support for dealing with it is a daunting task. If it’s taken millennia for the idea of animals having rights to become mainstream, how difficult is it going to be to achieve the same for digital minds — a much more abstract and difficult concept?
  2. Make progress on the pretty hard problem of consciousness. Chop chop philosophers and neuroscientists.

Thanks for reading the thing. I hope this will have some tiny marginal effect on the progress of the moral circle.

Thanks to Francis Priestland and Emery Cooper for useful comments

Further Reading

  • A nice Vox article about the moral circle.
  • Brian Tomasik wrote a paper arguing that reinforcement learning agents matter morally. He has also churned out a huge number of essays on reducing suffering.
  • PETRL — People for the Ethical Treatment of Reinforcement Learners.
  • Sometimes it’s hard to be a Robot
  • The biggest organization I’m aware of that takes this kind of thing seriously is the Center for Long-Term Risk.

Footnotes

¹ Here, and throughout this article, I’m using Peter Singer’s formulation of animal rights as a moral foundation.

² I’ll use AI as a bit of a catch-all term throughout this article for “arbitrarily sophisticated software implemented on classical computers”. The arguments will apply most strongly to reinforcement learning agents specifically.

³ Granted, a conscious state doesn’t necessarily demand moral consideration, since a being may have totally neutral experiences, not experiencing anything good (e.g. pleasure) or bad (e.g. pain). Consciousness is instead a necessary condition for morally relevant states, and it’s a sign that we need to at least be very careful.

⁴ This view seems relevant to both dualist and neutral monist views of consciousness. For a monist, consciousness corresponds to physical properties, so here we are hedging on what those physical properties are. For a dualist, consciousness lives outside the physical, but we still need to decide what physical properties are related to it, so similarly we can only judge by extrapolating across those physical properties.

⁵ I think what I’m getting at here is a type identity theory of mind. An obstacle to my totally giving in to this point of view is David Chalmers’s fading qualia thought experiment. Here, one imagines gradually replacing the neurons in your brain with computer chips that perform the same function as the neurons. If you believe that consciousness lives in a biological brain but not in a functionally identical machine, at what point in this process does the consciousness disappear?

⁶ This roughly corresponds to functionalist theories of mind (the way I’ve phrased it points mostly towards machine state functionalism). The main problem with functionalism that I’m aware of is that, to pin down the computations a physical system carries out, one must define some subjective map between physical states and logical states. One could imagine shaking a bag of popcorn, then retroactively deciding on a way of interpreting the positions and velocities of each atom in the bag as logical states, so as to impose any computation on it. You could impose the relevant computations for suffering on the shaking of the bag; in that case, you just tortured someone!

⁷ Behaviourism. My problem with this comes mostly from the Chinese room thought experiment. Here, one imagines a simulation of a human mind, but the simulation is implemented by a massive look-up table of the correct outputs a mind would give for any input. This doesn’t seem like a system that is conscious.

⁸ Here I am taking a reductionist view of consciousness, putting aside the possibility that consciousness requires some complicated combination of each of these properties, and perhaps others. In this case I would modify my argument from “we don’t know which of these properties consciousness lives in” to “we can’t assume that any of these properties are essential for consciousness”.

⁹ As opposed to quantum computers. This is probably a conservative assumption, since some believe consciousness arises in quantum mechanics somehow.

¹⁰ If you’re not convinced that such a thing is likely to ever happen, I’ll appeal to authority here and point to some survey responses of AI experts. On average they predict a 50% chance that we will have developed artificial general intelligence by halfway through this century.

¹¹ If the processors it is running on are limited in speed and memory, it is hard to imagine some Chinese room style argument that it’s actually just implementing a huge look-up table containing all the input-output pairs of a human mind. It must be doing some sophisticated computation, and as it’s achieving the same thing as a brain, it must be a similar computation in at least some respects.
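A rough back-of-envelope sketch of why the look-up table is physically implausible; the input-size numbers here are arbitrary assumptions of mine, chosen to be generous to the look-up table:

```python
# How many entries would a look-up table need to cover every possible input a
# mind could receive? Pretend an "input" is just one short text message:
# 200 characters drawn from a 27-symbol alphabet. (Arbitrary, generous numbers.)
alphabet_size = 27
message_length = 200
table_entries = alphabet_size ** message_length

# Commonly quoted order of magnitude for atoms in the observable universe.
atoms_in_observable_universe = 10 ** 80

print(f"Entries needed: ~10^{len(str(table_entries)) - 1}")  # ~10^286
print("Atoms in the observable universe: ~10^80")
print(table_entries > atoms_in_observable_universe)          # True, by ~206 orders of magnitude
```

Even this absurdly restricted input space dwarfs what could be stored in the physical universe, never mind the full sensory history a human mind actually responds to.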

¹² Since I’ve been vague about my definitions of the biological, functional and behavioural frameworks, there isn’t really a rigorous way to add up the probabilities of Rob being conscious under each to find an overall probability. Personally I’d put something like a 50% chance on Rob being an observer.

¹³ The drive for efficiency of production is what led chickens from fields to cages and what changed their food from a nutritious balance to a severely limited diet. Most if not all facets of factory-farmed animal suffering can be causally linked to the optimization of production.
