About Us

We provide academic leadership for research and policy related to nonhuman consciousness, sentience, agency, moral status, legal status, and political status—with special focus on animals and AI.

Our Approach

We advance understanding of the nature and intrinsic value of nonhuman minds in three key ways:

  1. Research: We conduct and support foundational research about the nature and value of nonhuman minds.
  2. Outreach: We engage with decision-makers through direct consultation and public communication.
  3. Field-building: We engage with other researchers through events, awards, and sponsored projects.
Learn more

Featured Research

Everything and Nothing Is Conscious: Default Assumptions in Science and Ethics

Jeff Sebo

Frontiers in Psychology (2025)

Experts have often assumed animals lack consciousness until proven otherwise, but some now suggest changing this presumption. Options include assuming consciousness in all animals, all living beings, all with neurons, all with complex cognition, or even all beings. I assess these options scientifically and ethically, arguing that different defaults make sense in different contexts. For example, a broad assumption of consciousness may be better for ethical theory and scientific practice, since it supports precaution and innovation. However, a narrower assumption may be better for scientific theory and ethical practice, since it works with existing evidence and institutions. By adopting multiple context-specific defaults, we can better serve both science and ethics.

Read More

Insects, AI Systems, and the Future of Legal Personhood

Jeff Sebo

Animal Law Review (2025)

This paper makes a case for insect and AI legal personhood. Humans share the world not only with large animals like chimpanzees and elephants but also with small animals like ants and bees. In the future, we might also share the world with sentient or otherwise morally significant AI systems. These realities raise questions about what kind of legal status insects, AI systems, and other nonhumans should have in the future. At present, debates about legal personhood mostly exclude these kinds of individuals. However, this paper argues that our current framework for assessing legal personhood, coupled with our current framework for assessing risk, implies that we should treat these kinds of individuals as legal persons. It also argues that we have reason to accept this conclusion rather than alter these frameworks.

Read More

What Will Society Think about AI Consciousness? Lessons from the Animal Case

Lucius Caviola, Jeff Sebo, and Jonathan Birch

Trends in Cognitive Sciences (2025)

We examine how society might respond to the possibility of AI consciousness by drawing parallels with human attitudes toward animal consciousness. Our analysis reveals that perceptions of AI consciousness will likely be influenced by appearance and behavior, social and economic roles, and moral biases. However, AI systems may benefit from their advanced cognitive capacities while facing challenges due to their non-biological origins. We argue that attitudes toward AI consciousness remain malleable, making this a critical moment for research and policy development. We call for urgent interdisciplinary research on the science of AI consciousness, public attitudes about this issue, and ethical frameworks for navigating potential societal disagreement and ensuring thoughtful preparation.

Read More

Is There a Tension between AI Safety and AI Welfare?

Robert Long, Jeff Sebo, and Toni Sims

Philosophical Studies (2025)

The field of AI safety considers whether and how AI development can be safe and beneficial for humans and other animals, and the field of AI welfare considers whether and how it can be safe and beneficial for AI systems. There is a prima facie tension between these projects, since some measures in AI safety, if deployed against humans and other animals, would raise questions about the ethics of constraint, deception, surveillance, alteration, suffering, death, disenfranchisement, and more. Is there in fact a tension between these projects? It depends in part on what potentially conscious, robustly agentic, or otherwise morally significant AI systems might need and what we might owe them. This paper argues that, all things considered, there is indeed a moderately strong tension, and that it deserves more examination.

Read More

Overlapping Minds and the Hedonic Calculus

Luke Roelofs and Jeff Sebo

Philosophical Studies (2024)

How should we update our moral thinking if it turns out to be possible for a single token mental state (a feeling of pleasure, pain, satisfaction, frustration, or another welfare state) to belong to two or more subjects at once? Some philosophers think that such sharing of mental states might already occur, whereas others foresee it as a potential consequence of advances in neurotechnology and AI. Yet different types of case generate opposite intuitions: if two mostly-distinct people share a few mental states, it seems we should count the value of those states twice, but if two physically-distinct beings share their whole mental lives, it seems we should count the value of that life once. This paper suggests that these intuitions can be reconciled if the mental states that matter for welfare have a holistic character.

Read More

Moral Consideration for AI Systems by 2030

Jeff Sebo and Robert Long

AI and Ethics (2023)

This paper makes a case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans morally ought to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being sentient or otherwise morally significant. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being sentient or otherwise morally significant by 2030. The upshot is that humans have a moral duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing to discharge that duty now, so that we can be ready to treat AI systems with respect and compassion when the time comes.

Read More

Featured Events

A Bill of Rights for Animals

Cass Sunstein

Rosenthal Pavilion | Kimmel Center, 10th Floor | 60 Washington Square South

September 17, 2025 | 4:00 pm – 5:15 pm ET
Learn more

Are Large Language Models Sentient?

David Chalmers

October 1, 2022
Learn more

Animals and the Constitution

John Adenitire and Raffael Fasel

December 3, 2025 | 12:00 pm – 1:15 pm ET
Learn more

Evaluating AI Welfare and Moral Status: Findings from the Claude 4 Model Welfare Assessments

Robert Long, Rosie Campbell, and Kyle Fish

July 25, 2025 | 12:00 pm – 1:15 pm ET
Learn more

Could an AI System Be a Moral Patient? Conceptual Foundations for AI Welfare

Winnie Street and Geoff Keeling

August 20, 2025 | 12:00 pm – 1:15 pm ET
Learn more

The Edge of Sentience Book Launch and Panel Discussion

Jonathan Birch, L Syd M Johnson, John Olusegun Adenitire, and Claudia Passos Ferreira

November 1, 2024
Learn more

Featured Media

Elephants Have Feelings and Should Have Rights

Jeff Sebo (2023)

What Should We Do if a Chatbot Has Thoughts and Feelings?

Jeff Sebo (2022)

Debate: To Shrimp or Not to Shrimp

Jeff Sebo, Lyman Stone, and Peter Singer (2025)

What Do We Owe AI?

Jeff Sebo (2025)

Can Machines Suffer?

Article about AI consciousness that discusses The Moral Circle (2025)

Can AIs Suffer? Big Tech and Users Grapple with One of Most Unsettling Questions of Our Times

Article about AI welfare that cites three of our publications, references one of our events, and includes a quote from our director (2025)

The Secret to Studying Animal Consciousness May Be Joy

Interview with Jeff Sebo about the New York Declaration on Animal Consciousness (2025)

Plans Must Be Made for the Welfare of Sentient AI, Animal Consciousness Researchers Argue

Coverage of “Taking AI Welfare Seriously” that cites the New York Declaration on Animal Consciousness (2024)

What Should We Do If AI Becomes Conscious? These Scientists Say It’s Time for a Plan

Coverage of “Taking AI Welfare Seriously” (2024)

If Robots Have Feelings, Do They Need Rights?

Article that cites “Taking AI Welfare Seriously” (2024)

Insects and Other Animals Have Consciousness, Experts Declare

Coverage of the New York Declaration on Animal Consciousness (2024)

Scientists Push New Paradigm of Animal Consciousness, Saying Even Insects May Be Sentient

Coverage of the New York Declaration on Animal Consciousness (2024)

Featured Opportunities

Administrative Aide II – Environmental Studies, Research Centers

New York, NY | Spring 2026
$39.129 per hour | Full Time

Arts & Science is seeking a talented Administrative Aide II to join the Department of Environmental Studies to support the Center for Mind, Ethics, and Policy (CMEP), the Center for Environmental and Animal Protection (CEAP), the Wild Animal Welfare Program (WAWP), and the Food Impact Program (FIP). This individual will perform a wide range of clerical, secretarial, and general office duties, including those of a confidential nature: prioritizing office activities and delegating work to student and/or casual employees; serving as a source of information to faculty, researchers, students, contractors, and other stakeholders on policies, procedures, and office activities; interacting with the general public as NYU liaison and with University personnel, including those at the senior level, regarding general inquiries as well as specific issues and problems; customizing and/or composing letters in response to requests for information; performing general word processing duties using intermediate- to advanced-level functions; modifying and/or creating databases and complex spreadsheets; and monitoring complex department budgets and/or grants.

If you have any questions, feel free to contact Audrey Becker at audrey.lynn.becker@nyu.edu.

Researcher (Full Time)

New York, NY | Spring 2026
$60,000 to $70,000 | Full Time

The NYU Center for Mind, Ethics, and Policy (CMEP) is hiring a full-time researcher who will also serve as a project manager. The Researcher will conduct research and manage projects related to animal and AI welfare while engaging stakeholders within and beyond academia. Successful candidates will have a background in research, project management, or both.

Assistant Research Scholar (Part Time)

New York, NY | Spring 2026
$30–$50 per hour | Part Time

The NYU Center for Mind, Ethics, and Policy (CMEP) is currently seeking a part-time research assistant (RA) to assist with a number of ongoing projects. The Assistant Research Scholar will support multidisciplinary research projects through research, writing, editing, formatting, references, and more. Successful candidates will be curious, collaborative, detail-oriented, and comfortable working across research and policy contexts.

Call for Expressions of Interest: 2026 Mind, Ethics, and Policy Summit

Friday, April 10 – Saturday, April 11, 2026

Due to strong interest, this event has reached capacity and we are no longer accepting applications.

The NYU Center for Mind, Ethics, and Policy is hosting a two-day summit on April 10–11, 2026. Discussion topics will center on the consciousness, sentience, agency, moral status, legal status, and political status of nonhumans, with special focus on invertebrates and AI systems. The aim of this event is to connect researchers and other experts with an interest in these issues across a variety of topics, fields, and career stages.

The summit will include lightning talks, group discussions, breakout sessions, and plenty of open space for networking and relaxing. Both days will also include vegan breakfast and lunch, along with a reception. The summit will be supplemented by a public event on the evening of Friday, April 10, 2026.

We welcome expressions of interest from researchers and other experts. Please note that limited travel support is available for some early-career scholars, that is, scholars within five years of their terminal degree.

If you are interested in attending this summit, please send the materials below to Audrey Becker at audrey.lynn.becker@nyu.edu. We will give full consideration to all applications received by January 26, 2026, and consider subsequent submissions on a rolling basis.

Please include in your email:

Please note that if you answer these optional questions, your answers can range from general (e.g., “Frameworks for collecting public input about insect welfare”) to specific (e.g., “Organizing a citizens’ assembly to collect public input about insect welfare”).

Additional notes:

Topics that we see as within scope for this summit include but are not limited to:

If you are interested in these or related topics, we would love to hear from you! If you have any questions, feel free to contact Audrey Becker at audrey.lynn.becker@nyu.edu.

Contact Our Team

You can get in touch with our team via the form on our contact page.
