Research Agenda
The NYU Center for Mind, Ethics, and Policy examines the nature and intrinsic value of nonhuman minds, with special focus on invertebrates and AI systems. Which nonhumans are conscious, sentient, and agentic? What kind of moral, legal, and political status should they have? How should we make decisions that affect them in circumstances involving disagreement and uncertainty? Our research agenda focuses on the following general themes, all of which are important, difficult, and contested, and all of which call for considerable caution and humility.
Status
Sample work
- Evaluating Animal Consciousness
- Moral Consideration for AI Systems by 2030
- Animals, Plants, Fungi, and Representing Nature
Which nonhumans matter for their own sakes?
Ethically, what is the basis of moral standing, that is, of morally mattering for your own sake? Some experts think that sentience (roughly, the capacity to consciously experience positive and negative states like pleasure and pain) is required. Others think that consciousness (roughly, the capacity to have subjective experience) is enough. Others think that robust agency (roughly, the ability to set and pursue your own goals in a self-directed manner) is enough. There are other views as well.
Scientifically, which nonhumans have these features? Take consciousness. Some experts think that a cognitive system with the same structures, functions, and materials as mammalian brains is required. Others think that a cognitive system with broadly analogous capacities for perception, attention, learning, memory, self-awareness, and so on is enough. Others think that a cognitive system that can process information or represent objects is enough. There are other views as well.
Practically, how should we make decisions that affect nonhumans given disagreement and uncertainty about whether they matter? What are the risks associated with false positives and false negatives about moral standing? When, if ever, is the probability or severity of false positives worse, and when, if ever, is the probability or severity of false negatives worse? How can we work together to create a conception of the moral circle that mitigates both sets of risks at once?
How much do particular nonhumans matter for their own sakes?
Ethically, what determines how much intrinsic moral value an individual or population possesses? Regarding individuals, if one being has a greater capacity for happiness, suffering, and other such welfare states than another, does the former being “carry more weight” than the latter, all else being equal? Regarding populations, if one population has a greater capacity for welfare in the aggregate than another, does the former population carry more weight than the latter, all else being equal?
Scientifically, how much happiness, suffering, and other such states can particular nonhumans experience? Does the capacity for welfare depend on cognitive complexity, longevity, and other such features? For instance, do elephants have greater capacities for welfare than ants, assuming that they both have the capacity for welfare? In the future, will robotic elephants have greater capacities for welfare than robotic ants, assuming that they both have the capacity for welfare?
Practically, how should we make decisions that affect large and diverse populations in circumstances involving disagreement and uncertainty about how much everyone matters? To what extent can we improve our ability to make interpersonal, interspecies, and intersubstrate welfare comparisons, and to what extent can we develop tools for making decisions affecting members of different species and beings of different substrates without such comparisons?
Ethics
Sample work
- Beyond Compare? Welfare Comparisons and MCDA
- Kantianism for Humans, Utilitarianism for Nonhumans?
- Is There a Tension between AI Safety and AI Welfare?
What do we owe particular nonhumans?
Ethically, how should humans interact with particular nonhumans? To what extent is ethics a matter of promoting welfare, respecting rights, cultivating virtuous characters, and fostering caring relationships? Should we help others or merely avoid harming them? Should we extend equal consideration to everyone independently of their proximity to us, or should we extend greater consideration to some than to others based on relational factors?
Scientifically, how do our actions and policies affect particular nonhumans? We now live in an epoch in which human activity is a dominant influence on the planet. How do agriculture, development, and other such practices affect animals directly and indirectly, and what do we owe animals in light of those impacts? In the future, how will AI development and deployment affect AI systems directly and indirectly, and what, if anything, will we owe AI systems in light of those impacts?
Practically, what kinds of decision procedures can we develop to treat nonhumans well? For example, how does the practice of killing farmed and wild animals shape our beliefs, values, and characters, and how should that factor into assessments of this practice? How does the practice of developing, deploying, and instrumentalizing human-like AI systems shape our beliefs, values, and characters, and how should that factor into assessments of this practice?
What follows for particular practices and institutions?
In the public sector, should particular nonhumans be classified as legal subjects, with legal rights? If so, should that take the form of legal personhood or a new, related kind of status? Also, should particular nonhumans be classified as political subjects, with political rights, in particular communities? If so, should that take the form of political citizenship or a new, related kind of status? Either way, what follows for everything from the right to bodily liberty to the right to political representation?
In the private sector, what kinds of frameworks should shape our interactions with particular nonhumans? Should universities adopt ethical oversight frameworks for invertebrate research, and if so, what form should these frameworks take? Should AI companies adopt ethical oversight frameworks for AI development, and if so, what form should these frameworks take? In all cases, what role should expert input, public input, external evaluators, and government regulators play?
More generally, what kind of society should we seek to build in the future, and how can we combine radical long-term goals with moderate short-term steps? Should we seek to build an animal-free food system, and if so, how can we do so? Should we seek to build wildlife-inclusive infrastructure, and if so, how can we do so? Should we seek to build AI systems who prefer to cooperate with humans (rather than merely seeking to control them), and if so, how can we do so?
You can find other work related to these themes on the research page, and you can find integrated discussion of these themes in The Moral Circle. You can also find much of our practical work related to food systems, infrastructure, and other such topics on the websites of the Center for Environmental and Animal Protection, the Food Impact Program, and the Wild Animal Welfare Program. If you have comments or suggestions about our research, feel free to contact us here.

