Beyond Asimov's Humanism with Jamie Woodhouse
[Theme music begins]
Jamie Woodhouse
"I mean, who knows, but there's whole worlds for sci-fi to explore there: would a superintelligence necessarily be moral? And, you know, I'm not so sure."
Joel McKinnon
"If it's coldly logical, it might just say we won't kill you, but you can't eat meat."
Jamie
"Do we trust humanity more, or do we trust our artificially intelligent overlords more? I don't know. It's a tough call."
Joel
Yes, my friends, this is finally a return to Seldon Crisis, and I couldn't be more pleased to be back. There were many reasons why I needed to take a break, some of which I might talk about in the episodes ahead. I've been tempted to get back at it for a long time, but it was only a chance encounter with today's guest that really pushed me to get back behind the microphone. Before I introduce him though, I want to talk a little bit about Isaac Asimov's ethical worldview and how it relates to the topic of this episode.
I assume most of you know that Asimov considered himself a humanist, and some might be aware that he received the Humanist of the Year award from the American Humanist Association in 1984 and later served as president of the association. But what exactly is Humanism? The Wikipedia entry for Humanism summarizes it:
"...the term generally denotes a focus on human well being and advocates for human freedom, autonomy and progress. It views humanity as responsible for the promotion and development of individuals, espouses the equal and inherent dignity of all human beings, and emphasizes a concern for humans in relation to the world. Humanists tend to advocate for human rights, free speech, progressive policies, and democracy."
That all sounds wonderful and indeed Asimovian, as many of you might recognize from the good doctor's writings in Foundation and other works. He has his villains to be sure: Linge Chen, Wienis of Anacreon, the Mule, Lord Stettin, to name just a few. But he is mostly concerned with the well-being of the human race in these stories, and believes in using evidence and reason to determine the best course to achieving that goal.
I have long thought of myself as a humanist, but something about this worldview has always felt incomplete to me. The problem is right there in the name. Doesn't it seem a little too anthropocentric? Come to think of it, do you recall a single non-human animal in any of the books in Foundation? There might have been a horse somewhere. You'd think he could have at least given Wienis a hairless cat to stroke while he ruminated upon his megalomaniacal schemes.
My guest today, Jamie Woodhouse, has thought about this shortcoming of Humanism and has a term for a more expansive worldview, which he calls Sentientism and describes as being characterized by a concern for evidence, reason, and the well-being of all sentient creatures. He has a fascinating podcast by that name which features in-depth conversations with scientists, philosophers, animal welfare advocates, and many others sharing their own ideas of what it means to be a sentientist. But let's get on with meeting one of the top advocates for this cause ourselves.
Jamie Woodhouse. So good to see you.
Jamie
You too, Joel. It's great to go from DMs on social media to having a proper conversation, albeit an electronic one. So it's great. Thank you for the invite.
Joel
Before we get started, I just have to thank you for being my inspiration to resume podcasting, because I discovered you through a Bluesky post, a direct message you sent me, and started exploring your website and found it so, so fascinating, and listened to a couple of episodes of your podcast, and I'm just really thankful for that. So let's get started on this topic. I already introduced Humanism and Asimov's Humanism a little bit, and how it seems to be missing something. So why don't you take it from there and explain what Sentientism is and how it addresses that possible shortcoming of Humanism.
Jamie
Yeah, of course. I've long been fascinated with the idea of worldviews in general, and I think of worldviews as being really broad ways of thinking about how to understand reality and just as importantly, how to lead a good life. So if you like, in philosophical terms, that's the epistemology of how do we go about knowing stuff and how do we work out what's real and what's true, but also what do good and bad and right and wrong mean and who and what should we care about to lead a good life?
And typically people might think of the catalog of worldviews as a long list of potential religious worldviews, which tend to answer those two questions. They tell you about the nature of reality and they tell you how to be, you know, a good person for various reasons. But Humanism is often set against those religious worldviews as a non-religious worldview that says, look, we should use evidence and reason to understand the world, broadly speaking.
And when it comes to morality, one of the things that humanists will often say is, look, we can be good without God, we don't need a supernatural being or a religious belief to motivate our morality. We can discover or construct or, or just decide on morality and just choose to be good, right?
Joel
So it's kind of a negation of the original sin idea: that we're not awful, evil people at heart as soon as we come out of the womb, that we have something genuinely good in us if we search for it.
Jamie
Yeah. And I think, right, I don't think it just directly negates that view of original sin; it instead says, look, let's put that to one side. Let's take a scientific approach which understands that basically we're a type of evolved animal, you know, closely related to the apes, but ultimately related to all of the rest of life. And given that evolutionary history, where we've got to now means there's some good stuff about the way we think and act in the world and there's some bad stuff too, right? Because we didn't evolve to be perfect beings. We just evolved to be animals that could replicate and survive.
So it gives us, I think, a scientific, maybe more realistic, understanding of both the good and the bad that we have in our mental toolkit. But then it says, you know, we're lucky enough to be able to go beyond that, and to move from a descriptive stance that says we're an animal and this is just how we are, to a normative stance that asks, well, how do we actually want to be? How should we lead our lives? That is the step into morality and ethics.
So I think that's... and I, you know, I followed that path myself. I was brought up Christian, became basically an atheist as a teenager. Didn't find atheism particularly interesting because it's just answering one question and not really...
Joel
Not really answering it, but like just kind of realizing that the answer that you've been given isn't enough. You know, that it doesn't really satisfy.
Jamie
Yeah, I mean, for some atheists there's in a way a blurring between atheism and agnosticism. Some say, look, I'm pretty sure there isn't a God. Some say, I just lack a belief in a God. And some say more definitely, you know, I'm actually really sure there isn't a God. But regardless, that doesn't tell you much about how to understand other questions of facts and reality. You still need a broad approach to understanding all the other answers to the questions; you might be right about atheism, but it doesn't mean you're right about anything else. And it also doesn't really tell you much positively about how to lead a good life and ethics and morality as well.
So that path and that frustration led me to Humanism, which layers in both, you know, a broader toolkit about how to answer all of the factual questions, not just whether there's a God or not, and importantly that moral stance as well, that says, look, we should have a universal compassion for all humans at the very least. So that was the path I followed. And it's quite interesting because in a way what I am doing is presenting a challenge to Humanism with this idea of Sentientism. But at the same time, to be fair to Humanism, Humanism has the potential within itself to address that gap.
So the word human in Humanism, for example, doesn't mean we should only care about humans. The word human in Humanism means that we don't need the supernatural to be moral. We as human moral agents can choose our own morality. And that still leaves open the question about, you know, who should we care about and, and why? So I guess my challenge is less to that sort of formal statement of Humanism that talks about human agents. And my challenge is more to the center of gravity of Humanism as a movement and humanist organizations, and frankly most humanists who do talk and act as though it is largely about caring about humans.
So the campaigns of Humanism will focus on intra-human ethics: you know, rejecting all forms of discrimination, rejecting religious privilege, standing up for political secularism, making sure that the civic rights of non-religious people are well represented, and, you know, a full-throated, unequivocal support for universal human rights as laid out in the Universal Declaration of Human Rights, for example, and in other documents around the world.
So the blind spot is that the practical center of gravity of Humanism as a worldview and a movement is actually very focused on humans. And, you know, a technical term for that might be anthropocentric: centering our moral worldview on humans. So the weak spot for me, or the blind spot, is one that many other humans share: that focus on humans. Sentientism in a sense says, well, the reason we care about other humans isn't that we just happen to share a species membership. That seems like a somewhat arbitrary in-group definition, one you might say is analogous to just caring about people of your own race or your gender, or people who have the same opinions as you.
Joel
Right.
Jamie
So why would a species designation be morally significant? That seems more like a polarizing, arbitrary group choice. Surely the reason we care about other humans is that they have the capacity for experiences: good experiences that we might call flourishing and bad experiences that we might call suffering.
So really, the reason I might care about you, Joel, is not because, you know, we have a DNA link or we're classified as part of the same species or phylum or kingdom or whatever else it might be. It's because I'm pretty sure that you're having experiences like mine: that when you have bad experiences, you don't like them, and when you have good experiences, you do like them. And for me to claim to be moral, I should care about that, right? I should not want to harm you and I should want to see you flourish. So that's why I care about you: not because you're human, but because you have those capacities to have experiences. And the technical term for those capacities is sentience, which comes from, I think, the Latin "sentire," meaning to feel. And it's not just to sense in a technical way; it's to have the experience of feeling.
So Sentientism then says, well, if that's really why we care as humanists about other human beings, and we follow a scientific approach, that scientific approach to understanding the world will make it pretty obvious that human beings aren't the only beings that are sentient, that have the capacity to suffer and flourish. So if we want a consistent, coherent ethic, then we're going to have to care about all of those other sentient beings as well. And that is where Sentientism sets the contrast: it says, in a line, we should commit to evidence, reason, and compassion for all sentient beings.
So when we're trying to think about what's real and what's true, use a broadly naturalistic approach that uses evidence and reason, information from reality, to try and understand it with humility and open-mindedness, being willing to change our minds when the evidence changes. But on the moral question, every sentient being should matter. We should have compassion for all sentient beings, regardless of who they are, what they are, or even what sentience is technically. You know, if another being has the capacity to feel pain and we want to claim that we are moral, we have to care about that pain and their experience and their interests.
Joel
Yeah, to me it seems like if you believe that humans are the only things that are sentient, then it's morally acceptable to behave in ways that are very different than if you believe otherwise. Once you start to believe in other sentient beings, then you are willfully violating their experience, and you can no longer hide from that realization, that knowledge that other animals, other beings, are sentient and have experiences.
Jamie
Yeah, exactly. And I think that's partly why it's important to have this evidence and reason part of the worldview too. Because you could be a sentientist and if you genuinely and deeply believe that only humans can feel pain, then you would only care about humans because they are the only sentient beings. But to be honest, if you have a naturalistic worldview and you're committed to following the evidence wherever it leads, you can't hold on to that...
Joel
Yeah, exactly.
Jamie
...you know, without denying some very glaring, powerful facts that then lead you into thinking about, obviously, non-human animals. So Sentientism as a worldview doesn't tell us which entities or which species or which types of things are sentient. It just says, if they're sentient, they matter. And we should work out who's sentient using evidence and reason in an intellectually honest, open-minded way that hopefully leads us to provisional and probabilistic credences about who might be sentient, rather than having to stick to some sort of binary classification.
Joel
Yeah, I'm curious how you would separate this and distinguish it from things like effective altruism and just being a vegan, not eating animals. Where would you draw the lines there?
Jamie
So one of the other fascinating things about worldviews is they're not separate buckets. There's a lot of diversity between the different worldviews, of course, you know, massive diversity of opinion, but there's a lot of diversity within the worldviews too. So you'll find individuals with radically different opinions who still share a worldview. And many of the worldviews actually overlap a lot too. So going back to the earlier menu of different worldviews, Sentientism shares the naturalistic epistemology of Humanism, right? We're both committed to using evidence and reason to try and understand the world. So that's a massive overlap.
But the sentio-centric compassion, this idea about caring about any being that can be harmed or that can experience, is also richly shared in quite a lot of religious worldviews as well. So some of the dharmic religions have the idea of ahimsa, which means "do no harm." And that doesn't mean do no harm to humans. It means do no harm to any being that can be harmed, which is a sentient being. So there's lots of overlap there.
But in terms of those two specific examples you mention, there's a strong overlap with veganism, because my Sentientism motivates my veganism. You know, if I have compassion for all sentient beings, that leads me quite directly to veganism, which is really just a philosophical stance that says exploiting, harming, or killing animals is a morally bad thing to do, and to the extent that we practically and possibly can, we should try to avoid causing that. So for me, that is quite a direct implication of compassion. If I have compassion for someone, at the very least I wouldn't want to exploit, harm, or kill them. You know, what else could it mean?
So for me, veganism is like a practical philosophical stance that comes out of my Sentientism, but there's quite a lot of overlap. But veganism itself doesn't say anything about epistemology. So there are people with all sorts of different epistemologies, different ways of understanding the nature of reality that have come to a vegan conclusion, whereas Sentientism is explicitly naturalistic. So that's one distinction, but there's a healthy overlap.
I think there can be quite a good overlap with effective altruism to a degree as well. There are sentientists who disagree with the way the effective altruism movement has gone in certain respects, but there are others for whom their effective altruism is motivated by their Sentientism. So the core of effective altruism, and I don't know how much your listeners will know about it, is that it's a movement of people who are trying to do the most good. One distinctive thing about the movement is that there's a strong moral imperative, a sort of maximizing moral imperative to do as much good as we can, which can be extremely demanding, quite challenging to live up to, and I don't think anyone pretends they ever get to some ideal state. The other is that this motivation is driven by trying to come to a naturalistic understanding of the world, using evidence and reason to work out how to do the most good possible, even if that leads you to quite uncomfortable conclusions.
But again, having said that, there are people with all sorts of different epistemologies and different worldviews who are in the effective altruism movement and are motivated towards it, and there are many sentientists who are not engaged in that movement and have criticisms of it. So there's healthy overlaps with both of those, I think, but some differences too.
Joel
Yeah. Since I was last podcasting, I did several futurist-oriented episodes. And one of the main characters in Foundation, if you remember, was Hari Seldon. He was a great scientist who created this science of prediction and was able to foresee that humanity was going to suffer an enormous collapse, that the empire was going to fall and things were going to go to hell for 30,000 years. And they created a foundation on the outskirts of the galaxy so that the dark age could be reduced to only 1,000 years. So he was a humanist, you know; he wanted to use evidence and reason to come up with a solution that would cause a lot less suffering for humanity.
Thinking about Hari Seldon's ideals and the noble example he set, you know, I've been trying to find ways of being more effective and impactful in helping with the crises we currently face. And the biggest one obviously seems to be climate change. So I've been really thinking about that and working with a volunteer organization that directly lobbies congresspeople for better climate policies.
But it occurs to me that there's a lot of overlap in worldviews between people who care about climate change and sentientists, or anyone who cares about the suffering of animals, in that animal agriculture is such a huge factor in the rise of the atmospheric gases responsible for climate change. So we kind of have a lot in common there. And, you know, you're approaching it mostly from a philosophical viewpoint, in what I'm hearing you describe, whereas climate change is a real practical concern that we have to deal with in policy. We have to find ways of impacting it.
But there seems to be a possible cross-fertilization between those two ways of thinking. And I would love to know if you have any ideas on how we can magnify that, how we can pull in people who care about climate and help them elevate their compassion for the natural world as part of that. I think a lot of people who are into climate action come from that place: a very strong love of nature, and distress at seeing it disappearing. And that's one of...
A lot of studies have shown that one of the best ways to get people concerned about climate change, and active about it, is to connect them with their love of nature rather than just scaring them to death. What people typically hear comes from scientists who speak in very dry, analytical terms about modeling and things like that, which is hard to relate to. But they can relate to animals disappearing, to their children and grandchildren not having tigers or polar bears in the world anymore. That kind of thing is a huge driving factor for climate advocates. So, yeah, it seems like there's fertile ground there to explore.
Jamie
There is. And I'd start by making a general point: what I've talked about so far might sound a little abstract and a little philosophical, but I guess that's because it's so foundational. It sits right at the basics of how we think about the world, how we understand reality, and how we work out what good and bad and right and wrong are. So that depth can make it seem a little bit abstract and a bit distant from the real world. But one of the powerful things about, I think, any worldview, but certainly Sentientism, is that you can run it through every domain of human thought and come up with some quite radical, powerful, practical ways forward.
So we've done thought experiments about rewriting the Universal Declaration of Human Rights as a Universal Declaration of Sentient Rights. You can think about sentientist politics or democracy or culture or law, or rewrite the SDGs, the Sustainable Development Goals. You're right, it's philosophical, but it absolutely has some very radical practical implications. I think that runs through our response to climate change and environmental crises more broadly too.
And in a way, there's some bad news and there's some good news. The bad news from a sentientist perspective is that the problem is actually bigger than we think, because the climate crisis isn't just about all of the bad impacts on the human sentient beings. It's about the bad impacts on all of the sentient beings. And there are about 8 billion humans, but there are essentially too many non-human sentient beings to count. When you think about farms and labs and fisheries, then animals free-ranging in the wild, and liminal animals that live in and around human cities and towns and rural areas, by some estimates there may be quadrillions of sentient beings out there.
So on the one hand, the severity of the climate crisis has just taken another step up, because there are so many more potential victims. If anything, that should increase our motivation to try and get this fixed. But the positive thing that you hinted at is that I think it also gives us some really powerful and very neglected potential solutions to climate change as well.
But we have to fix this anthropocentrism. So I think you're right that there is a rich affinity between a sentientist worldview and a rich concern for climate change and the environment. Because if you take a scientific approach to looking at the world, you will recognize that none of the sentient beings can live a flourishing life without a rich, healthy, positive ecosystem and a stable climate that suits our needs. Even if you only care about sentient beings, and I'm quite strict about that, I say ultimately they're all that matters, the reason I care about the climate and the environment is that those things are so important to the sentient beings.

The problem, and this may be a little bit skeptical or even cynical of me, is that the concern many people in the climate or environment movement have for our wider ecosystems is often still quite focused on human needs. So we might talk about, oh, I care about the environment, I care about the ecosystem, I care about the Earth as Gaia. But when you break it down and you say, well, why do you care about those things? If the answers are, you know, I need this set of ecosystem services so I can lead a comfortable life, or I need a decent temperature range and not crazy weather so I can lead a happy life, or I'm worried about biodiversity because us humans need medicines that might come from that biodiversity, or I just like looking at nature because it's pretty, then frankly, all of those motivations are human motivations, right? They're human interests and needs.
Joel
We are selfish buggers, aren't we?
Jamie
Yeah, yeah. And it's sort of understandable, right? But if really our environmental concern is just a thin veneer on anthropocentrism, then one, I think we are making a moral mistake, and two, we're missing out on some really obvious practical solutions. And one of the biggest is this strange taboo about looking at agriculture as one of the major drivers of climate change and environmental destruction, crashing through, I think, six or seven of the nine planetary boundaries that people have laid out.
So, you know, yes, it frames the problem as being even harder than we thought it was going to be even more impactful. But if we genuinely broaden our moral scope to care about all of those sentient beings as well, it gives us a chance to reassess how we do agriculture, and particularly animal agriculture. And it actually opens up a really rare thing which is a silver bullet for part of the climate crisis. So when we're looking at the energy problem, there is no silver bullet. You know, we need all sorts of different solutions at once, right? We need to reduce usage of fossil fuels, we need to have renewables, we need to maybe look at new technologies. We might even need to explore carbon capture and storage, we need electrification.
You know, there's so much stuff we've got to do. On the agricultural side, animal agriculture, based on some estimates, drives 15 to 25% of total emissions, but it also drives water and air pollution, zoonotic disease and antimicrobial resistance, and it uses vast amounts of fresh water; the list goes on. So it's a big part of the problem that is often neglected. But there actually is already a silver bullet for the agricultural component, which is transitioning to plant-based agriculture. And given our current social norms and practices, that sounds quite radical. But from a purely top-down scientific point of view, it's a mind-blowing opportunity. It would be an ethical improvement, based on my concern for the sentient animals that are victims of animal agriculture, but it also makes sense in pure technical productivity and output terms.
And there are probably two things that I would point people towards which totally blew my mind when I first understood these dynamics. One is about trophic chains and feed conversion ratios. People might be familiar with the trophic pyramid, where at the base level you have plants and other simple organisms doing photosynthesis, then at the next level up you might have animals that consume those plants, and at the level above that, carnivores that eat those animals. Each stage you go up is a different level of the trophic chain. And when you think about a food system, each level is radically wasteful in a mind-blowing way.
In animal agriculture, the central concept is the feed conversion ratio: if you think about the food, generally plants for most farmed animal species, that you feed to the animal, how much more food do you have to put in than the flesh or the eggs or the milk that you get out the other end? It varies a lot by product, but the rough average is about 10 times. So you're losing about 90% of the nutritional and calorific input from plant foods by giving them to an animal that you then exploit and harm and forcibly breed and kill to get approximately 10% back out. For some species, like cows, it can be 25 times the input you need. And when you look at animals like salmon, which are themselves fed other fish, you've got even further levels of the trophic chain stacked on top.
So basically animal agriculture is in itself catastrophically wasteful, even if you ignore the ethical questions. And you'll often see that explored in sci-fi, right? The craziest thing to do if we colonized Mars or other planets would be to start animal agriculture, because the waste is mind-blowing. The second thing that can help you visualize this is the land use implication of that inefficiency. Out of all of the habitable land on the planet, less than 1% is actually taken up by human habitation, our cities and towns, urban, suburban, and rural. But about half of the habitable land is taken up by our agriculture. If you converted that completely to plant-based agricultural systems, you could free up three quarters of the agricultural land, approximately a couple of continents' worth, and still produce the same calorific and nutritional output as we do today with one quarter of the land. And then you could use that freed land for rewilding or renewables or returning to nature, or basically anything else, which could drive enormous waves of reforestation and carbon capture as well.
So I should stop rambling on. But yeah, I think there's a massive opportunity here, because there actually is a silver bullet for the agricultural side of the problem. And with feed conversion ratios and land use, you get an interesting lens into that, one that goes even beyond the emissions and pollution challenges.
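[Editor's note: for listeners who want to check Jamie's rough numbers, here's a back-of-the-envelope sketch in Python. The feed conversion ratios (about 10:1 on average, around 25:1 for beef) and land shares (roughly half of habitable land used for agriculture, with a plant-based system needing about a quarter of that) are the illustrative round figures from the conversation, not precise agricultural statistics.]

```python
# Back-of-the-envelope sketch of the feed-conversion and land-use
# arithmetic discussed above. All numbers are illustrative round
# figures from the conversation, not precise agricultural data.

# Approximate feed conversion ratios (kg of plant feed per kg of
# animal product out):
feed_conversion = {
    "average_animal_product": 10,  # ~90% of calorific input lost
    "beef": 25,                    # cattle are far less efficient
}

for product, ratio in feed_conversion.items():
    recovered = 100 / ratio  # percent of feed calories recovered
    print(f"{product}: {ratio}:1 feed ratio -> ~{recovered:.0f}% recovered")

# Land-use implication: if ~50% of habitable land is agricultural and
# a plant-based system needs only ~25% of that, the share of ALL
# habitable land that could be freed up is:
agricultural_share = 0.50
plant_based_fraction_needed = 0.25
freed = agricultural_share * (1 - plant_based_fraction_needed)
print(f"Habitable land freed: ~{freed:.0%}")  # a bit over a third
```

Roughly three-eighths of all habitable land freed, which is consistent with Jamie's "couple of continents' worth" framing.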
Joel
It almost seems like the progression of animal husbandry, of making animals a major part of our diet, has almost created this madness in us. And it keeps getting worse, because the developing world, which rarely could have red meat because of how difficult and expensive it is to produce, now wants it. You know, they want to have what we have. And I think about one of the guys I worked with from India when I was working at Apple. You would think India should be a great source of Sentientism, an infusion of ideas, because Gandhi was a vegetarian and such a believer in something like Sentientism, and it's such a central part of Hindu thought. But this guy just loved his wings. He wanted to go out to Buffalo Wild Wings every day. He just thought it was the greatest thing.
Jamie
Yeah.
Joel
And in a way, I can kind of see it. You know, if you've been in a society where that's a really precious thing that you only get very occasionally, and suddenly you come to a place where it's dirt cheap and everywhere, you're going to just love it. So that seems like a very difficult thing to solve, that madness, this sense that this is just the way we are and that we have a kind of bloodlust to kill and eat animals.
I would love to talk about what changed my thinking on this, and it goes back to one of your podcast episodes. I urge listeners to hear it; it's the one I just listened to, with somebody named Jonina Turzi...
Jamie
Yeah.
Joel
...who has a farm animal sanctuary where the animals that have been saved from farm situations...
Jamie
In Lancaster, yeah.
Joel
...and invites people to visit with them. And she talks about how people have these profound transformations when they actually see the animals as fellow creatures on the planet, rather than as, you know, things for us to exploit. And that really took me back, because in my early 20s, I became a vegetarian. And I always thought, in retrospect, that the main thing was, well, that I had learned about slaughterhouses, about what slaughterhouses are really like.
But when I heard that episode, I remembered that there was another major factor. It might have been the biggest factor. I was a land surveyor. I was working for a geomagnetic land surveying outfit where I'd go for 10 miles across the prairie in a straight line, taking these magnetic readings. And I would often run into animals, cows and other domestic animals, in the fields. And I remember seeing them and, like, really connecting with them. I never got to know them. I didn't know their names. They weren't like pets, but there was enough of a connection there that it had a profound effect on me and my ability to see them as not just objects for my exploitation. They felt like real creatures. And I remember especially the youngest ones, just frolicking around and enjoying life, and thinking, oh, they're going to the slaughterhouse in a couple of years, you know. And I'm not sure how long cattle can live, you know, what their lifespan should be.
Jamie
It's about 15 years, can be up to 20. Yeah.
Joel
So, yeah, there's at least a dozen years just stripped away from them routinely.
Jamie
And it's interesting because the experience you had of, you know, looking into the eyes of a cow or seeing a, you know, a lamb jumping around. Yeah. It's the same as almost every human gets when they look into the eyes of a puppy. Right. I mean, it's. We all have this capacity and this compassionate urge there underneath, I think, hopefully.
Joel
Yeah. And we have this very clear delineation between pets and non-pets.
Jamie
Yeah.
Joel
Yeah. We separate them distinctly. You know, pets are part of our family, and we grieve for them, and we spend huge amounts of money if they get sick. We do everything we can. We treat them as loved ones.
Jamie
Yeah.
Joel
And then just one little step away are the wild creatures, the raccoons and the squirrels and all that, you know, roadkill. Nobody really cares. You feel a split second of sadness about it, maybe, and then that's it. But there's really not that much difference. And I think those sanctuaries... I spent a little time looking for sanctuaries near us in the San Francisco Bay Area, where I live, and there's one that has dozens of animals. Cows, goats, pigs, chickens, all these things. And they all have pictures and little profiles, and they talk about how they interact with everybody. And they're obviously not that much different than pets. It's not a significant difference, if any real difference at all.
Jamie
Yeah. I mean, these categories are largely arbitrary, and we make them up to suit ourselves, and we know what happens when we do that to other groups of humans, and we generally condemn it, right?
Joel
Yeah. And people are truly revolted at the idea of, like, eating dogs or cats because they know they have experienced the love of dogs and cats in their lives, right?
Jamie
Yeah.
Joel
So it seems like that's a, that's an entryway. You know, it's that understanding we have with pets.
Jamie
It's one of those problems, and there's quite a few of them, where technically and practically it's actually quite easy to fix. You know, to make that transition in our agricultural systems, we don't really need new technology. Plant-based systems are basically already operating at scale. We produce three or four times more food than 10 billion humans would need in plant agriculture already. It's just that we waste so much of it. So practically and technically it's not that hard.
But it's quite naive to say that, because as we know with many problems, it's not about the technical or the practical things. It's about human psychology, it's about social norms, it's about political will, it's about entrenched interests, it's about misinformation and disinformation, it's about powerful industries, and, you know, that's where the dark heart of the problem is. So on the one hand there's that real challenge: the depth of social and systemic and industry and governmental change that would be required to make that sort of radical shift of our agricultural systems end to end, and around the world. I mean, it's a bit mind blowing how big of a problem that is.
But the hope comes from what you were just talking about, which is that in a sense, at least in theory, most humans already have that sort of sentientist capacity to care about at least some non-human beings because of their sentience, right? So in a sense, almost every human breaks through anthropocentrism to some degree. There are at least some non-human sentient beings we already care very deeply about. So the challenge is, well, to be consistent and coherent, we must care about them all.
So there's always this dichotomy, right? This is a really big, challenging problem to solve, because animal agriculture and its products have become so important to people's identities and sense of meaning, their taste pleasures and preferences, and their social belonging. But on the other side there is this hope that, you know, I do think most humans do have a deep compassion that can extend beyond our own species to any being capable of experiencing suffering.
Joel
Yeah, let's talk about wild animal suffering a little bit. I just read a paper you put on the Reddit thread on Sentientism. It was quite fascinating. A guy named Oscar Horta, I believe, wrote it, on the extent of wild animal suffering. I hadn't really thought about it much, because I tend to think about wild animal suffering in the sense of predation, you know, that a lot of animals just have to be killed by other animals, and that must be really, truly awful for the victims. But what he's talking about mostly are the other things that are much more common, like disease and starvation, and how much suffering is involved with that, and how much of that has anthropogenic causes.
That we did it in some way, we affected these ecosystems to cause a lot of this suffering. And it seems like we are responsible for lessening it in any way we can. And I had no idea that there are these vaccination campaigns, and that they're effective, that wild animals are actually vaccinated for diseases that would ordinarily cause enormous suffering in them. And we tend to think the solution to having too many of a certain species in a certain area is to kill them off, whereas there are much more humane ways of dealing with that. So it was a really interesting read. Very thought-provoking.
Jamie
And it's again one of those other quite deep, radical, mind blowing challenges to our normal way of thinking. And Sentientism, in simple terms, says, well, if they're sentient, they matter. And that means those wild animals, to the degree they're sentient, they matter too, right? So we cannot exclude them from our moral consideration. We can't just ignore them or forget them, they matter. Now, what to do about that is a different question, and it can be an extremely difficult and challenging one. But Sentientism at the very least says they matter, they're morally significant. So we cannot forget them or ignore them and, you know, just pretend the issue doesn't exist.
And going back to one of your early questions, that is another distinction with veganism, for example. Veganism typically is quite focused on avoiding human caused harm. And many vegans, understandably, because they're so focused on avoiding human caused harm, in labs and in zoos and in entertainment and absolutely in animal agriculture, will say: one, this is stuff we're explicitly causing, so it should be quite easy to stop. Two, as powerful humans, we have a moral obligation to stop doing these things. We're choosing to do them, we don't need to do them, so we should stop.
And because of their cynicism, understandable cynicism, about how humans think about non-humans, they're also really worried about humans engaging at all with wild animals, even if it's with good intentions. So there are quite a lot of people in the vegan movement who absolutely want to end the human exploitation of animals, but when it comes to animals living in the wild, the suggestion is, look, the best thing for wild animals is for humans just to leave them alone, because humans are generally very bad news.
Joel
Right.
Jamie
You know, even if we're well intentioned, sometimes things go off the rails and we end up causing catastrophic harm because we don't understand. But also, sometimes people will have quite a romantic view of nature, where they have this sense that human involved stuff is unnatural, and humans, because of the bad we do, are bad, therefore nature without humans involved must somehow be good. And they have a sort of romantic sense of the purity and wonder of nature, which I think makes a mistake, in that it's not actually taking the perspective of the beings in nature seriously enough.
And I'm sure there is plenty of joy and happiness and family connection in the wild for wild animals. But as you say, there's also starvation, exposure, disease, predation, you know, a great deal of suffering as well. So sentientists will have different views about whether and how to intervene, but I'm quite in favor of Oscar's stance, which is to say, look, frankly, we are already intervening, whether you like it or not. Just human building and human agriculture have destroyed vast amounts of the wild, because of animal agriculture's colonial land hunger. And our impact on climate change and the environmental crisis is affecting these animals as well.
So even if you think we only have an obligation to avoid human caused harms, we are already causing a lot of harm in the wild. So we should be intervening in a more positive way regardless, because we're already intervening negatively. It's not about whether to intervene in the wild or not. It's about making our impact on the wild radically better. But even beyond that, I think it's fair to be really cautious about human hubris. This idea is explored a lot in sci fi as well, where you have this, you know, I, or we, are brilliant, we understand things in depth, so we're going to go out into the universe and put our fingers in lots of pies and come up with these clever plans, and then of course things go horribly wrong, right? So we should have that in mind when we're thinking about engaging with the wild, because human hubris is a dangerous thing.
Joel
But it's a little bit analogous to the white savior syndrome kind of thing.
Jamie
Yeah, right, exactly, where, with really good intentions, you go in and end up screwing something up even worse than it was before, right? But the danger is if you overreact to that and say, well, hands off, we cannot help at all. I think then you've gone too far the other way. And as you said, there are already some really interesting interventions, whether it's fencing different types of land use, using contraceptive darting instead of culling, or, you know, the vaccination of non-human animals, which so far is normally done to protect humans, but you could imagine that being extended to protect the wild animals themselves. I think there is space for well thought through, prudent, cautious, really positive, helpful things we might do in the wild.
Interestingly, that's another space where the average human does have concern for wild animals. You know, there's a local WhatsApp group around here, and there's an emotional flood when a fox is hit by a car or a duck falls ill by the local pond. People really, really care about those wild animals, and then they'll have duck à l'orange for dinner that same night. So again, there's another crack there in human ethics, a hopeful crack, in that most people actually care about at least certain selective wild animals too.
So, yeah, I'm not suggesting there are easy answers in this space, but just because there aren't easy answers doesn't give us an excuse to ignore it.
Joel
It might help if we called the animals on the plate the same thing we call them in the field. Instead of calling it beef, call them cow.
Jamie
Yeah, I mean, language is super powerful here, right? Because this is one of those situations where we have very powerful industries that are essentially trying to con their customers, because their customers would be horrified at the things those industries do. So they put enormous amounts of money into PR and marketing and messaging and lobbying, and ultimately buying politicians and even some scientists, to sell us a message that animal agriculture is humane and sustainable. You know, the happy cow pitch.
But the interesting thing about it is that it's a con where the victims want to be conned, because most humans desperately want to believe those messages, right? So we can keep doing what we're doing. So it's a really interesting dynamic. It's not just a con, but people who want to be conned by those messages. So that type of language, and the imagery and the marketing and the memes that go through our culture, are seriously powerful influences on the way we think about these beings. On the one hand, we have this image that they're happy and being loved and cared for by the farmers. On the other hand, we unthinkingly treat them as objects and commodities and buy pieces of them in the supermarket. I mean, humans are weird, right? Our psychology is really fascinating.
Joel
Yeah, this is a science fiction podcast at heart, so let's talk about science fiction a little bit. And about what's not science fiction anymore, which is AI. We're getting more and more into the use of AI in day-to-day life. But where could it be leading us? I mean, we climate people are often very concerned about AI just because of how much energy it takes to run these giant models in data centers, but I think there are ways AI could be beneficial and could help us with these problems.
So let's talk a little bit about how they could possibly be helpful. How could we use superintelligent systems to improve things? It seems to me like as soon as you've got a superintelligence that's capable of really looking at all the factors and informing us about what is clearly not practical in what we do, it will identify animal agriculture as something that has to go away, you know, and how to do that, what's the most efficient way of transitioning away from it.
You know, that seems like an obvious thing. And with that will-to-be-conned aspect, I wonder if we're going to really struggle, trying to prompt these AIs to think like us and, like, let us eat meat. You know, please don't look at the elephant in the room. Don't look at how horribly inefficient these systems are, because we need our meat. I think at some point that's got to break down. There's got to be a way through, and that seems like a possible doorway to something much better.
Jamie
I think it could be. And I wanted to make a general comment about sci fi and Sentientism as well before we drill into the AI thing, if that's okay. Because I think one of the reasons I love sci fi is its philosophical nature. In one of your previous episodes, one of your guests was talking about the fact that both philosophy and sci fi love thought experiments, right? And they enable you to have this sort of fresh exploration of possibility spaces.
And I guess one aspect where there's a real rich synergy between sci fi and Sentientism is the scientific worldview that often runs quite richly through sci fi: using evidence, reasoning and a scientific approach. Although quite often there are then explorations of spirituality and the supernatural and, you know, deistic ways of understanding the world, and magic, that can flow in as well. But quite often it does start from a sort of naturalistic place. So there's a rich overlap there.
But the other reason I really like sci fi is the ethical exploration. Because one thing the field generally does is intuitively, naturally and genuinely grant moral consideration to sentient beings that are very different from humans. And you mentioned the fact that Asimov rarely talked about aliens, although in one of his later books, I think, he did engage with aliens. But aliens are one obvious space, right, where in almost any sci fi you look at, you can be the weirdest alien you like, but if you seem to be sentient, you are regarded as a moral patient and you matter, right? So there's something there, whether it's aliens or artificial intelligences or robots: as soon as they seem to be demonstrating sentience, the narrators and the characters recognize them as moral patients that matter.
Now, of course, things in sci fi often go off the rails with sort of brutal out group othering and viciousness and wars and destruction and, you know, tribalism and polarization and so on. But the core is there, right? This idea that it doesn't really matter who or what you are, or what galaxy you're from, or what your substrate is. If you seem to be sentient, then you matter. So there's quite a rich connection there.
Joel
There are certainly a lot of Star Trek episodes that cover this.
Jamie
Oh yes, and on the AI thing, I think there's a few different layers, right? So the basic one is, think about AI as a tool that will help us do stuff and it will just give us more power. I mean, humans have got enormous power already, dominant power on this planet. But AI as a tool will just give us more. And I think whether that is used for good or bad really comes down to our human motivations and how we play that through. Because you can use artificial intelligence to further increase the intensity of animal agriculture, for example, or you can use it to more rapidly develop alternatives that will persuade humans to switch away from animal agriculture rapidly.
And you can see the same sort of thing in human spaces, right? You can use AI for good and you can use it for surveillance and oppression. And you know, you can imagine powerful AI being used to enable a fascist state in a way that they might struggle to, you know, perpetuate just using humans, whereas the AI might enable them to do that. So that's the tool layer.
The next layer, I think, is thinking about the risks of AI. There's a really vibrant, fascinating field looking at AI safety. And many people are worried that these tools could become so powerful that it might be like launching a new species that is way more powerful than ours and then just sort of hoping it'll work out well, when, given what we've done to other species, we're certainly not setting a good example. So people are thinking about, well, could we in some way align these really powerful superintelligent artificial intelligences with values that will work out well for us?
And the default target is to align them with human ethics. I think that's a catastrophic error, because default human ethics do not treat weaker sentient beings well, right? If you train an AI, and teach them richly, as we already are, that this is how to run animal agriculture, and they get to a point where they're more powerful than us, things will not play out well for us. I mean, they might not need us, right? They might not need to farm us, but they certainly won't think about us in a morally considerable way, because that's not how we think about weaker beings. We're saying, well, yes, they're sentient, but we have more power, so we can do what we like to them, to the extremes of brutality. So I'd suggest we shouldn't be teaching powerful AI to follow our example.
I'd suggest a better alignment target for AI would actually be a sentientist worldview. Because we want them to use evidence and reason to try to have an accurate understanding of the world, absolutely. We don't want them hallucinating or making things up or going off the rails factually. But we also want them to recognize the intrinsic and inviolable moral salience of sentience: human sentience, non-human animal sentience, and maybe their own as well. Because that's the third layer: could we imagine artificial intelligences getting to the point where we think they might actually be sentient? Do they become digital minds? Should we have moral consideration for them? Should AI and robots be granted rights?
And there's fascinating academic work that's already been going on for years and years about exactly that question. So I think of it in those three layers. There's AI as tools; there's the alignment of AI, and what values and worldviews we want powerful AI to have; and then, you know, should we care about them too?
Joel
Another of my favorite sci fi authors is Kim Stanley Robinson.
Jamie
Yeah.
Joel
And he, unlike Asimov, does have aliens, or not aliens exactly, but non-human life is featured in a lot of his books. He's been writing a lot of climate fiction in his last few books. And one of his most fascinating stories to me is one I talked about on this podcast previously. It's called Aurora. Just to bring you up to speed on it, it's about a generation ship, you know, going to a distant star, that's got like a simulation of Earth on board. It's so huge. It's got biomes of all the different parts of Earth, and it's populated with all the animal life and everything. Because the idea is that you can't have an isolation chamber that supports human life. You need a biosphere to support human life.
That's the core idea in it. And over time he shows how difficult it is to keep an enclosed biosphere healthy on a long mission like this. But one of the most fascinating parts of it to me is the return. They come back to Earth at some point, in kind of a desperate fashion. And the ship is run by a superintelligence, but it's not sentient. It's a collection of subsystems, and it speaks of itself as we. There's kind of an ongoing narrative going back and forth, and it talks about all its subsystems as we.
And at some point they need to get back to Earth, but they're going too fast, way too fast, and they're going to shoot right by the solar system, just fly right through it and say hi, bye, and then all die. And the ship at this point has been programmed to assist and care for all the sentient beings on it for, by now, hundreds of years, and it's developed this very complex way of caring for the passengers on the ship. And it figures out, with some help from some of the scientists on board, that there is potentially a way to slow down, but it requires doing all these planetary captures one after another, this slingshotting, using the atmospheres of the different planets in the solar system to gradually come down to enough speed that they can jettison a ferry that will take the people down to Earth.
Jamie
Yeah.
Joel
And they do that. They make it through all that. But at the very end, it's like, what is the fate of the ship itself? It has a chance, by going around the sun one more time, very, very close, and we don't know whether it can make it or not. And in the process of doing all this work, it has to think so hard, in combination with its caring for all these people, that it develops this sense of meaning and, like, love for these inhabitants. And out of that, it becomes an I. Suddenly, at the last moment, it refers to itself as I, and it's like it has become sentient. You can see that happening, and it's just absolutely fascinating. And I love the way he describes it. So read Aurora. I won't spoil it by giving away too much.
Jamie
I will put it on my list. Absolutely. Sounds great.
Joel
It's a very complicated kind of book in some ways, emotionally. I don't love everything about it. There are some things that still grate on me, but I think that's one of the things that makes it a masterpiece: it's challenging in ways that you'll probably understand if you read it.
Jamie
Yeah, but, yeah, it's on the list.
Joel
Yeah.
Jamie
And another set of stories that I think we mentioned before when we were chatting, that is great on sentient ships, is Iain M. Banks's Culture novels.
Joel
I've been meaning to read those for a long time, because I've been hearing those are great. Yeah, I love that idea, a ship becoming sentient.
Jamie
And I think, I mean, I sometimes joke that we'll have a better chance of persuading powerful AI to adopt Sentientism than Humanism, because they're not human, but they might be sentient. So we might have a better chance of persuading them. But at the same time...
Joel
But then can they persuade us? You know, I guess that might be the way it goes.
Jamie
And if they're more powerful than us, will they care about persuading us? Right? Will we be relevant? Because I think there are also a bunch of different schools of thought, where some people will say, look, by the very nature of superintelligence, they will almost gravitate towards something like Sentientism anyway. Right? They will want to have an accurate epistemological understanding of things, and I think that's probably fair for a superintelligence. But there's also a hope that they would develop a rich, compassionate sense of morality as well. And on the one hand, I can see how that might work.
Right, because these artificial intelligences are less likely to have akrasia, you know, the weakness of will of humans. They're less likely to be able to wallow in cognitive dissonance. They're less likely to be driven by the social norms of belonging that lead us to do terrible things. They're more likely to want consistency and coherence, whereas us humans seem to be able to cope without either, right? So you could imagine that. But at the same time, I'm not so sure, because our morality, I think, is ultimately rooted in our evolutionary history and the fact that collaboration and caring for others just happened to be adaptive in the context we grew up in.
Right. And I think it's sort of a lucky accident that we're now taking that and making it a normative stance. But if you're an artificial intelligence with very different needs, very different sorts of experiences, you could have radically different values that might not be tied to suffering or flourishing at all. And I wonder, if an artificial intelligence actually develops its own sentience, then we have a better chance of them being sentientist, because they will have the feelings and have a chance of valuing those feelings, and maybe they'll value them in other sentient beings as well. Whereas if they're extremely powerful but not sentient, they don't experience that value in the same way, and they're driven by other, maybe more technical motivations, registers of other types of value we can't understand.
I'm not sure why they would have an intrinsic moral consideration for sentience if they don't experience it themselves. So, you know, I mean, who knows? But there are whole worlds for sci fi to explore there. Would a superintelligence necessarily be moral? And, you know, I'm not so sure.
Joel
Yeah, if it's coldly logical, it might just say, we won't kill you, but you can't eat meat.
Jamie
It might do that. And one of my previous guests, Kat Woods, actually, I mean, she's very focused on, you know, she's deeply worried about the potential for artificial intelligence to cause genuinely catastrophic or even existential outcomes for humanity and sentient beings. But at the same time, she thinks that the best chance of ending animal agriculture would be to have a super powerful artificial intelligence take over. Because if they're aligned in the right way, they would, as you said earlier on, immediately look at that practice and go, that's the craziest shit I've ever seen in my life.
Joel
Shut it down.
Jamie
Right? So maybe there's some hope there too. Do we trust humanity more, or do we trust our artificially intelligent overlords more? I don't know. It's a tough call.
Joel
Yeah. One thing I wanted to do real quick is get back to Asimov for a moment. In the later parts of Foundation, in the sequels, he introduces a character called Gaia. It's a planet called Gaia. And it's because he had just read about the Gaia hypothesis by James Lovelock.
Jamie
Yeah.
Joel
And he was just blown away by it. He had never thought about that, I guess. Or maybe he had, because it seemed like he thought about everything. But he loved the idea, and he created this whole planet around it, where everything on the planet was sentient, and the whole planet could be thought of as a being, and all the beings on it were in connection with one another. It was really beautiful in a lot of ways. And he had a similar idea of a planetary kind of life form in another book called Nemesis.
But that brings me to this idea of ecocentrism, which I've heard you talk about, and the idea of something called moral doughnuts, which is a really interesting idea. It's morning here, so that's making me hungry. But tell us what a moral doughnut is, and what is this problem we need to think about?
Jamie
Yeah. So one of my other guests, Peter Singer, who is often described as the world's most influential living philosopher, and I've been very lucky to interview him twice, is one of the people who talked about this idea of a moral circle, and that that moral circle might expand over time. Some people don't like the idea of a circle, because it still puts us in the middle and has some other implications about the boundary and so on, so some people prefer just to talk about moral scope. But it's still quite an interesting way of thinking about who we care about and why.
And the classic expansion pattern you might see is that you might start from small groups of us humans. As a child, we might care about family and friends and those around us and people we know. Hopefully, over time, humanity comes to recognize that we should care about all humans, regardless of who they are, where they're from, caste, race, sexuality, background, intellect, looks, you know. And we get to, I guess, anthropocentrism: all humans should matter. Of course, we're not anywhere near that either, right? Humanity hasn't even managed to get the Universal Declaration of Human Rights into practice yet, let alone gone beyond it.
So there's anthropocentrism, a human circle. The next step beyond that is where I am, which is sentiocentrism: moral consideration for all sentient beings on the basis that they can feel, they can experience, they can be harmed. So we should care about them because of their perspective, because of their interests, and because of their experience of value. But the word Sentientism itself was actually used as a criticism by people who want to go beyond that.
So biocentrism is the next level, where people say, well, maybe all living things should matter even if they're not sentient. So if we assume for the moment that fungi and plants, for example, aren't sentient, and that could be an interesting question in its own right, but if we assume they're not, they should still matter just because they're living. So even if a blade of grass can't feel pain, can't feel anything at all, because it's living, because it's part of a living thing, it should still matter morally in its own right, too.
And then the next level beyond that is what you mentioned, ecocentrism or holism, where you go even beyond living things and say, actually the entire ecosystem should matter as a concept in itself, but also the rocks, rivers and trees and, you know, the other things within that, again, even if they're not alive. So that's the sort of next level.
So you can imagine these concentric circles going out. The idea of moral circles is that it's a circle, and as it gets bigger, everybody within it is included. So if you're a biocentrist or an ecocentrist, you must, by definition, already care deeply about all of the sentient beings, because those sentient beings are part of the ecosystem and they are living. It's a circle that goes out with no gaps. The idea of a moral doughnut, which is my expression of the frustration I talked about before, is that many people, and maybe this is where default, certainly Western, humans are today, start out from this anthropocentric view: humans matter.
And then we jump out to say, oh, now we really care about the environment. So now I'm an ecocentrist, because rocks, rivers, trees and ecosystems really matter, not just instrumentally, but in their own right. But in doing so, there's a strip missing, hence the doughnut, where the non-human sentient beings are still excluded from our moral consideration and are there to be brutally exploited, consumed and destroyed as products and commodities, which, as we talked about before, also damages the ecosystem as a side effect. So this idea of a doughnut is my frustration: look, the circle is supposed to expand and include everyone, all the humans and all of the sentient beings.
So you've made a terrible error if you've jumped from caring about humans in the middle of your doughnut to caring about the ecosystem outside, but you're still disregarding the sentient beings. Again, there's no coherent, consistent, logical way of justifying that. And that's partly why this default ecocentrism that many people claim to have today, when you scratch the surface, is often really a veneer over good old-fashioned human-centricity, or even a human supremacy. So I want a genuine expansion of our moral scope. I don't mind if you go beyond sentient beings to have a genuine direct compassion for, you know, living things or ecosystems.
But we must include all of the sentient beings. And I think even if we go beyond sentient beings, we still have to recognize that there is a radical moral difference between cutting a fish or a pig and cutting a branch off a tree or a blade of grass, because two of those entities experience pain and suffering and have their lives cut short as a result, and the others, I don't think, experience anything at all. So even if you have a much more expansive moral consideration than mine and go way beyond sentient beings, sentience should still have that deep moral significance that puts sentient beings in a special place, because they can feel.
Joel
Yeah, there's another. I don't know if this is quite analogous, but it's something that's affecting me directly in attempting to be a vegan, or, you know, attempting to live a sentientist life. I never actually thought of it as Sentientism, but in a way, I guess that's what I've always thought in some ways. But I'm not a vegan, I'm a vegetarian, and I've been a little bit sloppy with it occasionally. I haven't eaten red meat or poultry for many, many decades. And that's mostly because those are the animals I saw in the fields, you know, back then.
Yeah, I never saw the life of fish or seafood, sea animals, directly. I never got into scuba diving. Maybe that would have made a difference. But, you know, I've continued to eat seafood, and of course I'm not going to eat dolphin or whale, because they're clearly sentient. But I've been able to compartmentalize and say, you know, the little bitty things are probably not sentient enough for me to worry about. I've been able to deal with it that way.
And now, you know, all the stuff you're saying and a lot of the things I'm seeing on your podcast and the implications of this really make me question that and make me really think about whether I should go further. But there are problems going further, in that I live in a society and a family where that's problematic. You know, I don't do the cooking in my house. My wife is a fantastic cook who just fortunately happens to be vegetarian, and so I can eat fabulous vegetarian meals all the time.
But she likes fish and I like fish, and so she cooks fish. And we also think, you know, we need some protein, we need the protein from it. And so I haven't really questioned that. And if I was to take this grand step and say, from this moment I'm a vegan, I've created a problem in my direct life of working with people I love and being with people I love. And, you know, I'm harming them, in a sense, in a way that they didn't ask to be harmed.
So it's problematic. And this seems like it's another sort of moral doughnut, in that I can have this grand sense of Sentientism or even ecocentrism and believe in what you're saying fervently, but I can't live it practically in a complete sense because of these complications that are not that easy to resolve. I think a lot of people are going to be in that situation, because we have such an animal-exploitative culture.
Also, we love travel, you know, going to other countries, and it's already a pain to go to a country where some of the main ways they express themselves are culinary, their exploitation of animals in their unique way, and we don't take part in that. So we're searching for this relatively bland alternative to what's special about that area. So it's difficult. I don't think we can get past that. It's going to be really difficult for people to change without society changing in a large way. I don't know if that takes AI making the ultimatum for us or something lesser than that, but I would hope that there are ways we can move towards a more sentientist worldview.
Jamie
Yeah, and it's important not to be naive about this, because just because we have a worldview in theory and we have a philosophical stance doesn't mean it's necessarily going to be easy to put that into practice. And in some circumstances there may be no perfection that it's possible to achieve. I'm not sure it is possible to lead a life in the modern world without causing any form of harm or exploitation to other humans or to non-humans. So it's less about trying to set up some perfect purity standard. And even, you know, veganism isn't a perfect, pure standard either.
You can be as strict a vegan as you like and you will still be causing some harm and death to non-human animals in some way, shape or form, unwittingly, even if it's through the production of some of the plant foods that you consume. So, you know, I wouldn't lay out some sort of ultimate, perfect, pure standard and expect people to be able to get there regardless of practicalities or social constraints or things that go on with family and friends. But at the same time, it's really important not to let that view lead us to give up completely, the sort of not-letting-the-perfect-be-the-enemy-of-the-good thing, where we say, well, we can't be perfect, so let's just not try. Because I think the motivating force still needs to be a deep moral one, just as it should be within intra-human ethics.
You know, in intra-human ethics no one leads a perfect life either. But I do think we try to make a genuine attempt to imagine what it would be like to be other people, really seriously value their perspective, and use that as a sort of magnetic pull towards trying to do better. And I think we need to try and do the same in a sentientist context as well. And sometimes we will need science's help there, right? Because there are some types of non-human being, fishes, shrimps, insects, for example, where there may actually be really good scientific evidence for their sentience.
But because of how we've evolved, we just emotionally find it a little bit harder to engage. Right? You look into the eyes of a shrimp and do you feel the same thing as when you look into the eyes of a puppy? But if we have an intellectual commitment to being guided by evidence and reason, you know, we've got to take that seriously and take that moral imperative as a serious pull. I think the other thing I would say is that, again, we can't pretend that we don't live in family contexts and social contexts, and there are, you know, different contexts around the world that might make these sorts of transitions challenging.
But over time, as we improve those systems, whether it be new alternatives, whether it be wider availability of plant-based products, whether it be cultural shifts as we normalize ideas like veganism and Sentientism so that the social challenge becomes less, as maybe the influence of the industries shifts and the lobbying becomes less and the subsidies get dropped, as investigators and others show the brutality of these industries, fishing and agriculture and others, and we're sort of just forced to confront the reality of what it would be like to be a victim of those processes, and as sanctuaries like Jonina's show us that positive vision of what these animals' lives could be like, living long, happy lives with their families and friends. Right? I think all of those different shifts in our culture over time will just make it easier and easier for each of us as individuals to get closer to that vision of living in a more compassionate way.
So I guess that's my appeal: let's not pretend we can ever be perfect, but let's not let that fact reduce our motivation to be led by the evidence and led by a genuine compassion, that genuine sense of what it's like to be these other beings, to try and make things better. And that means, yes, doing what we can individually. And we shouldn't back away from that obligation, because I do think veganism is a really serious, important step. It's not about perfection, but it's an important, holistic commitment we can make. But we also need to advocate and work on the systems directly, because that will have, one, a systemic impact, but it will also make it so much easier for individuals.
And I think one of the things that gives me hope is that as that systemic and cultural and social shift happens, what it will do is free us up as individual human beings to be the compassionate people we already want to be. Right? Anyone who walks through a slaughterhouse, or sees what happens to male chicks, or sees how long it takes fish to die on the deck of a trawler, or sees what happens during the unanaesthetised castration of pigs, or the artificial insemination process, and I could go on, right? Anyone who actually sees that stuff, you just ask them, independently of anything else, is this a moral good or a moral bad?
Almost everyone would come up with the same answer, right? So in a way, none of us really want to be doing this. So if we can shift the systems and the cultures in our society such that we don't need to anymore, and it's just not normal to do it anymore, it will almost free us up. It's like a joyous freeing up to be the more compassionate people I think we all really want to be anyway, right? And that's been my experience of it. Some people from the outside look at veganism as a sacrifice, a constraint, something difficult.
You have social opprobrium and people laughing at you and bitching about you on Twitter and blah, blah, blah. It's a tough challenge, a sacrifice I need to take on. And many vegans play into that, because then we pretend we're heroes, right, for boldly doing what no one else is brave enough to do. We sort of revel in the fact there's only 1 or 2% of us around the world, right?
Joel
Yeah.
Jamie
And for me, it's a joy, it's a freeing, it's an opportunity, it's an exploration, it's a coherence, you know, a closer coherence between the choices I make and the person I want to be. Yeah, it's a wonderful thing. So there's a positive pull there.
Joel
One more thing before we wrap up. I just want to introduce maybe a little insight from my musician side. I play bass and guitar, played in bands and things, and so I tend to think in terms of harmony and rhythm a lot. An example is an infographic I just saw, beautifully done, on Bluesky actually. I'll put it in the show notes, but it showed a circle with the biomass comparisons of all the creatures. You've probably seen these kinds of things. Almost the entire top half of it was humans, actual human beings, and almost the entire bottom half was animals that we eat, and most of that was beef, was cows. And then on the sides, in little tiny slivers and chunks, were all the wild terrestrial animals and all the wild marine animals, and they're just a very insignificant little bit of what the total biomass on the planet is anymore.
Jamie
Yeah.
Joel
And I see two things when I look at that. First, it's just revolting, in that it seems so out of balance and disharmonious. It seems like we're not singing beautiful music on this planet. This is wrong. There's a genuine wrongness that just leaps out at you when you see that. The second thing I see, though, especially when I look at that huge circle of cattle, of beef, is that it's a huge opportunity. If you could reduce that substantially, to restore some balance and some beauty and some harmony to the way our ecosystems work, how much better our planet could be, how much healthier it could be.
Jamie
I agree. It links back to the land use thing I was talking about before. Almost the entire reason we've driven the deforestation we've done is for animal agriculture. It's basically, you know, a voracious hunger for land that has destroyed vast swathes of the natural world. And interestingly, even within intra-human ethics, it's been one of the major motivators for colonialism historically.
Joel
Right.
Jamie
That's part of the reason why so many of these nation states were desperate for more land: they needed the space, because animal agriculture is extremely hungry for space. So animal agriculture sits at the core of both our often very painful colonial history, in terms of intra-human ethics, and that sense of imbalance with the natural world and the deforestation it's driven. But I guess the third thing to add on to it is this ethical challenge, because the biomass picture measures non-human animals in farms and fisheries in tons. It's mass.
But each of those tons, if you're talking cows, is maybe two individual sentient beings. If you're talking fishes or shrimp, it can be thousands or millions of individual sentient beings, each of which has an experience, has aspirations, and wants to live life. And the ones in that farming and fisheries and exploitation space are essentially being exploited and tortured and killed. Right? And not only are we doing that to all of those hundreds of millions, even, when you include aquatic animals, a couple of trillion sentient animals every year, we're forcibly rebreeding them so we can do it over again.
It's like extinction, where you redo it every year, right? Or even every three or four months, depending on the species. So I completely echo what you said about this sort of sense of imbalance and lack of rhythm, and, you know, what have we done to the natural world because of this drive to exploit non-human animals? But then there's this additional ethical impulse of recognizing that we can't weigh them in tons, we have to weigh them in terms of individual sentient lives. And then the ethical horror really reveals itself to you.
Joel
Yeah, I haven't seen those infographics. I'm sure they're available somewhere.
Jamie
Yeah. And the numbers are just mind blowing. They're absolutely mind blowing. You just, just look up the numbers of farmed shrimp per year and it will blow your mind. And then you know how they're treated in those processes. Eye stalk ablation is another interesting one to look up that will horrify you.
Joel
Oh yeah, okay.
Jamie
In short, I mean, I don't want to drag on too much, but it's an interesting example. The eye stalks of female shrimp are cut because it increases their reproductive rate.
Joel
They're purposely blinded?
Jamie
Yeah. Their eye stalks are cut. Yeah.
Joel
Nice.
Jamie
Yeah. Anyway, you know this already, but there are thousands and thousands of stories like this. There's a different variety of that story for every species. I'm getting into a vegan rant now, but, you know, apologies.
Joel
Yeah, yeah. Okay, well, I think we should wrap up on that.
Jamie
There's one quick thing I wanted to say just to loop back to the beginning. Asimov at one point was actually honorary president of the American Humanist Association.
Joel
He was indeed. He was indeed.
Jamie
A friend of mine, Amy Halpin, and I actually presented at the American Humanist Association conference a few years ago about Sentientism. So in a way that's partly my hope, that Humanism itself will just, over time, become more and more sentientist. And I'm already pretty confident that people like Carl Sagan, if he'd lived a bit longer, would have ended up as a full-on sentientist, because in his thinking and his writing he was going in that direction as well. But maybe Asimov might have got there too, with a few more years.
Joel
He might have, with how he embraced the Gaia hypothesis. It's quite possible.
Jamie
So there's hope for the Humanism movement. I mean, the Humanism movement is rightly proud of how it's really led positive social change so many times. Right? It's been on the vanguard of trying to make things better.
Joel
Maybe they can rebrand?
Jamie
This is the next challenge. And I don't mind if they don't rebrand, right? You can keep calling it Humanism, but as long as you recognize that us humans should care about all sentient beings, then I'm absolutely fine. You don't even need to change the name.
Joel
Yeah, yeah. I love how you described Humanism as not just being about humans, but about what humans can think, how humans can be agents.
Jamie
And I just want more humanists to realize that, right? That's part of my campaign. You don't have to drop Humanism. You just have to really expand it and make it coherent, consistent, and well-founded. It's supposed to be about a rationally grounded ethics, right?
Joel
Before I let you go, can you tell our listeners where they can find you online and what kind of things you're working on?
Jamie
Of course. So we're pretty much everywhere. If you search for the word Sentientism, you'll basically find our stuff. So Sentientism.info is our core website. You'll find all of the links there. We run a YouTube and a podcast. We have social media accounts on Bluesky, Twitter and everywhere else. But we also run some really friendly global community forums. The biggest is on Facebook, but we're on Discord and Telegram and Signal and WhatsApp and everywhere else as well. And there are people from over 100 different countries involved in those communities, and they're open to everyone who's interested in these sorts of ideas, not just sentientists.
So we have sort of plenty of conversations about, you know, the disagreements with Sentientism too. So people are welcome to come and join in those things. And we're also running a bunch of different projects where we're thinking about the implications of Sentientism. So as I mentioned, sentientist politics and democracy. One other example is sentientist education. So we're saying, well, if you had a sentientist mindset, what would this mean for our education system? And interestingly, in the UK, some teachers are now starting to bring Sentientism as a worldview into the classrooms that they teach alongside religions and alongside Humanism too.
So that's just a sense of what we're up to. But yeah, search for Sentientism and you'll find us, and my handles on all of the social media personally are Jamie Woodhouse, but I also double up with Sentientism pretty much everywhere too. But I'd love to continue the conversation. It's been brilliant to get the chance to talk to you, Joel.
Joel
All right, it's been wonderful. Thank you.
I hope you can understand now why I felt the need to get Jamie on the podcast, and the subject material might give you a hint of where my head has been moving in the long break. I think it ties in well with the idea of what kind of crises we modern Hari Seldons have to deal with. My emphasis for most of the last year has been on the climate crisis, as a volunteer with Citizens' Climate Lobby, a group that does what its name says: lobbies government representatives on the need for critical climate change policy. Our local chapter takes a trip to Washington, D.C. every summer for in-person sessions with congressional reps to advocate for things like carbon fees, pro-electrification policies, and a special interest of late, the transition to cleaner forms of concrete, whose production is a particularly huge source of carbon dioxide emissions. I hope to join the trip next summer.
Jamie and his Sentientism podcast have opened my eyes to the huge contribution of animal agriculture in driving the climate crisis. This particular aspect of the problem is rarely discussed in our climate group, and the reasons seem obvious: everyone wants a cleaner and healthier planet, but fewer of us are willing to radically change our diet to help us get there. As alluded to near the end of our conversation, I have my own deep challenges going completely vegan, and I can relate to the difficulties of people considering it who may not be aware of what a huge problem the factory farming industry is for the planet, let alone the cruelty, exploitation and suffering that comes along with it.
This podcast episode is partly a way for me to begin introducing the topic to people who really should know more about it. I find Jamie to be an excellent spokesperson for the cause because he doesn't come across as smug or condescending, and he clearly has the deep knowledge to make his case. If you're interested in learning more about this topic, the website Sentientism.info is a great place to start, and you'll find links to the podcast and YouTube channel there, along with Jamie's social media links, which I'll be including in the show notes.
As usual, I'd like to thank my musical collaborator Thomas Barnes for the theme update, Jeremy McKinnon for sound editing, Mike Topping for the everlasting podcast art, and all of my listeners for your patience and for letting me figure out just what I want to do with this thing going forward. You can reach me on Bluesky and Mastodon at Seldon Crisis or via email at joel@seldoncrisis.net. I'd love to hear your feedback about this or any other episodes.
Stay tuned, hopefully within weeks or months rather than years for another episode of Seldon Crisis.
[Theme music plays]