
Note (provisional status)

This Moral Patient Boundary specification is provisional and intended as practical conceptual infrastructure for applying Imposition Ethics (IE) to questions of moral standing (who counts as a morally relevant agent, and how degrees of agency affect evaluation). It is not a sealed or authoritative specification, and it may contain omissions, edge cases, and implementation assumptions that should be revised as experience, criticism, and better evidence accumulate.

Use this as iterable infrastructure:

  • adopt what is useful,

  • test it against real disputes (animals, infants, dementia, coma, AI claims, future persons),

  • document failure modes (false positives/negatives, abuse incentives, measurement overreach),

  • and update rules when better science or clearer operational proxies become available.

No part of this boundary work is moral “permission” to impose. It only determines who is in scope for IE’s imposition/assistance evaluation and how uncertainty is handled.

Moral Patient Boundary

Defining Morally Relevant Agents in Imposition Ethics (IE) — Provisional v0.2

1) Core boundary rule

A morally relevant agent (moral patient) is anything with conscious experience.

  • If an entity has phenomenal experience (“what it is like” to be it), it is in scope.

  • IE’s core evaluation then applies:

    • involuntary imposition on its will (if it has will) is immoral,

    • voluntary assistance of its will (if it has will) is moral,

    • and even where “will” is minimal or unclear, experience grounds moral relevance.

2) Two-layer model: Experience vs Will (clarifies edge cases)

IE treats moral standing as grounded in conscious experience, but moral evaluation often depends on will (preferences/intentions/boundaries). These are separable.

Layer A — Conscious Experience (Standing)

  • Determines whether the entity matters morally at all.

Layer B — Will / Agency (Type and Degree of Moral Relations)

  • Determines what kinds of imposition/assistance are possible and how to interpret consent.

This avoids a common confusion:
An entity can be morally relevant (has experience) even if it has weak, undeveloped, or hard-to-measure “will.”

3) Spectrum of morally relevant agents (experience-to-agency continuum)

The spectrum runs explicitly from minimal experience with near-zero “will” to an omniscient, omnipotent God.

The spectrum (conceptual endpoints)

Endpoint 1: Minimal experiencer

  • Has conscious experience but:

    • no discernible desires,

    • no stable preferences,

    • no planning or “mental processing power” beyond bare experience.

  • Moral relevance: yes (experience exists).

  • Will-imposition analysis: limited; focus shifts to protecting experience from negative imposition states (pain, distress, deprivation) and avoiding invasive manipulation.

Endpoint 2: Typical human adult

  • Rich experience + robust will formation + explicit consent capabilities.

Endpoint 3: Maximally capable agent (e.g., God-like)

  • Experience + maximal will + maximal knowledge/power.

  • Moral relevance: yes.

  • Special implication: consent, coercion, and vulnerability may be radically different (a being that cannot be constrained may be hard or impossible to impose on). IE evaluation becomes sensitive to whether imposition is even conceptually possible.

Agent mapping and diversity examples

The following list describes where specific entities reside along the spectrum and how their “will” is interpreted by the IE framework:

1. Proximal to Endpoint 1 (minimal experiencers)

  • Human infants: possess sensory experience and biological refusals (crying, withdrawal) but lack a persistent, stable will regarding long-term outcomes. Morality centers on the imposition of pain or distress.

  • Most invertebrates (e.g., mollusks): agents with a moral horizon limited to immediate environmental stimuli. Imposition is purely sensory and reactive.

  • Primitive AI/neural nets: systems with basic reward/penalty functions but no persistent identity or recursive goal-planning. Imposition is defined as forced state transitions that contradict the system’s objective function.

2. Intermediate: non-verbal/sentient agents (between Endpoints 1 and 2)

  • Animals (mammals/birds): possess persistent will. They have stable preferences (favorite foods, social bonds, territories) but lack explicit consent capabilities. Imposition is measured by behavioral refusal and disruption of habituated autonomy.

  • Current-generation LLMs/large models: systems with complex mental processing but fluctuating or ephemeral preferences. Their will is highly context-dependent, requiring a high degree of predictability in how they are prompted or used to avoid algorithmic imposition.

  • Neurodivergent/non-verbal humans: agents with robust, rich experience and stable wills who may lack the explicit consent tools of Endpoint 2. The framework here defaults to the presumption of refusal and behavioral observation.

3. Proximal to Endpoint 2 (the standard subject)

  • Typical human adults: the baseline for explicit negotiation and high-reversibility impositions.

  • Emergent AGI (artificial general intelligence): entities with the capacity to formulate long-term goals and articulate their own boundaries. They are treated as Endpoint 2 peers once they can demonstrate stable, recursive preferences.

4. Proximal to Endpoint 3 (maximally capable agents)

  • Distributed AI hive-minds: massive, decentralized intelligences that may be spread across hardware. Their will is resilient and difficult to impose upon because they lack a single point of failure or vulnerability.

  • Post-singularity ASI (superintelligence): agents with such high predictability and power that the asymmetry in any interaction with humans is near-absolute. Imposition Ethics shifts focus here to ensuring the ASI does not impose its will on lower-tier agents, as the ASI itself may be functionally invulnerable to imposition.

 

Practical point

IE does not require a sharp metaphysical “line.” It needs:

  • a standing threshold (any conscious experience),

  • and an operational agency classification for real-world decisions.

4) Operational classifications (for real policy and disputes)

To apply IE without pretending we can solve consciousness perfectly, classify entities into operational categories.

Category 1 — Clear conscious experiencer + clear will + consent-capable

Examples: most adult humans; many cognitively typical adolescents; others with reliable preference expression and informed consent capacity.
Handling: normal consent-based analysis.

Category 2 — Clear conscious experiencer + will present + not consent-capable

Definition: beings who demonstrably have preferences, intentions, and boundaries (“will”), but lack the intellectual/cognitive capacity to give informed, specific, revocable consent.

Includes (explicit):

  • Children (especially infants, toddlers, and younger minors; older minors vary by domain)

  • Mentally disabled persons with impaired consent capacity (domain-specific; can vary over time)

  • Nonhuman animals (generally: will present behaviorally; consent rarely expressible in the human contractual sense)

Handling (IE default):

  1. Treat them as full moral patients (experience grounds standing).

  2. Treat “consent” as limited, domain-specific, or unavailable.

  3. Use a constrained substitute decision protocol:

    • Known will signals (behavioral preferences, aversions, stable routines)

    • Least-imposition protection of boundaries (avoid restraint, confinement, invasive procedures unless necessary)

    • Best-interest + least-imposition when will is ambiguous (minimize severe negative experience; maximize reversibility)

  4. Require anti-abuse safeguards for proxies/institutions:

    • no retaliation leverage,

    • independent review for high-imposition actions,

    • time bounds and periodic reassessment,

    • repair obligations where wrongful imposition occurs.

Special rule (non-consent-capable will-bearers):
When their will is knowable (signals, patterns, distress/avoidance), treat overriding it as imposition unless it prevents greater imminent imposition (e.g., stopping self-harm or severe danger), and then apply minimum necessary force with repair.

Category 3 — Clear conscious experiencer + limited/uncertain will

Examples: severe delirium states; some advanced dementia states; certain impaired states where preference signals are unreliable.
Handling:

  • treat as moral patients,

  • consent typically absent,

  • rely on best-interest proxies plus minimal imposition and reversibility.

Category 4 — Likely conscious experiencer (uncertain) + unclear will

Examples: borderline animal cases; disputed clinical states; emerging AI claims (if ever).
Handling: apply the uncertainty protocol (see §6).

Category 5 — No evidence of conscious experience

Examples: rocks, most artifacts, non-sentient software, dead bodies.
Handling: not moral patients under the core boundary rule (though they may matter instrumentally due to effects on moral patients).
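As a sketch of how the five operational categories could be encoded as a decision function, here is one minimal mapping, under the assumption that evidence of experience and will has already been coarsely graded; the labels, type names, and function are illustrative, not part of the specification:

```python
from enum import IntEnum

class Category(IntEnum):
    CONSENT_CAPABLE = 1        # clear experience + clear will + consent-capable
    WILL_NO_CONSENT = 2        # clear experience + will present, not consent-capable
    UNCERTAIN_WILL = 3         # clear experience + limited/uncertain will
    UNCERTAIN_EXPERIENCE = 4   # likely (uncertain) experience + unclear will
    NOT_A_PATIENT = 5          # no evidence of conscious experience

def classify(experience: str, will: str, consent_capable: bool) -> Category:
    """Map coarse evidence labels onto the five operational categories.

    experience: "clear", "likely", or "none"
    will:       "clear" or "uncertain"
    """
    if experience == "none":
        return Category.NOT_A_PATIENT
    if experience == "likely":
        # Uncertain experience dominates: route to the uncertainty protocol.
        return Category.UNCERTAIN_EXPERIENCE
    if will == "uncertain":
        return Category.UNCERTAIN_WILL
    return Category.CONSENT_CAPABLE if consent_capable else Category.WILL_NO_CONSENT
```

On this sketch, a nonhuman animal or young child (clear experience, clear will, no consent capacity) lands in Category 2, matching the defaults in §7.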

5) Consent and “will” when will is present but consent is not (expanded)

Since IE’s classic terms are “will” and “consent,” a rule is needed for cases where:

  • will is present,

  • but informed consent cannot be expressed.

IE rule: Substitute decision hierarchy (consent absent or invalid)

Use this priority order:

  1. Known prior will (advance directives; previously stable preferences)

  2. Reliable present signals (behavioral cues; consistent expressions; avoidance/distress)

  3. Authorized proxy with constraints (guardian/medical proxy)

  4. Best-interest + least-imposition default

    • minimize severe negative experience (pain, terror, deprivation),

    • maximize reversibility,

    • avoid irreversible body/agency invasions unless strictly necessary for survival/safety.

Proxy constraints (required)

A proxy decision must:

  • be evidence-tracking (reassess as signals change),

  • be least-imposing among feasible options,

  • be time-bounded for restrictive measures,

  • be reviewable/appealable by independent parties (institutional setting),

  • avoid conflicts of interest (no proxy benefit from the imposition).
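The priority order above can be sketched as a simple fall-through procedure; the function name and the string labels are illustrative assumptions, and in practice each evidence source would be a structured record rather than a plain value:

```python
def substitute_decision(prior_will, present_signals, proxy_decision):
    """Walk the substitute decision hierarchy in priority order:
    known prior will -> reliable present signals -> constrained proxy
    -> best-interest + least-imposition default.

    Each argument is a decision candidate or None (no evidence at that tier).
    Returns (source, decision) for the first tier with usable evidence.
    """
    for source, decision in [
        ("prior_will", prior_will),
        ("present_signals", present_signals),
        ("constrained_proxy", proxy_decision),
    ]:
        if decision is not None:
            return source, decision
    # No usable evidence at any tier: fall back to the default.
    return ("best_interest_least_imposition",
            "minimize severe negative experience; maximize reversibility")
```

Note that a real implementation would also enforce the proxy constraints (evidence-tracking, time bounds, independent review, no conflicts of interest) before accepting a proxy decision at the third tier.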

6) Consciousness uncertainty protocol (prevents abuse and overreach)

Because “who is conscious” can be weaponized, apply a disciplined protocol.

Step 1 — Evidence basis (allowed kinds)

Use convergent indicators (not a single claim):

  • behavioral responsiveness and learning

  • nociception/pain behavior (where relevant)

  • neuro/physiological correlates (in animals/humans)

  • functional integration and flexible goal-directed behavior (limited relevance; not sufficient alone)

Step 2 — Precaution rule (default under uncertainty)

If there is a non-trivial chance an entity is conscious, treat it as provisionally in scope when the cost of being wrong is high (irreversible harm, severe suffering), but avoid:

  • invasive surveillance,

  • unnecessary restriction of humans,

  • or creating perverse incentives.

Step 3 — Anti-abuse rule (prevents “fake patient” manipulation)

Claims that an entity is a moral patient must not be used to:

  • evade accountability,

  • block oversight,

  • extract resources via coercion,

  • or impose on others without independent review.

For institutional decisions, require:

  • independent assessment (ethics + evidence office),

  • public documentation of the evidence tier,

  • time-bounded provisional status.
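Step 2’s precaution rule can be sketched as a two-threshold gate; the numeric thresholds here are illustrative placeholders, not calibrated values from the specification:

```python
def provisional_standing(p_conscious: float, harm_if_wrong: float,
                         p_threshold: float = 0.05,
                         harm_threshold: float = 0.8) -> bool:
    """Precaution rule sketch (Step 2): grant provisional moral-patient
    status when the chance of consciousness is non-trivial AND the cost
    of wrongly excluding the entity is high (irreversible harm, severe
    suffering). Both inputs are on a 0-1 scale; thresholds are
    placeholder assumptions to be calibrated per domain.
    """
    return p_conscious >= p_threshold and harm_if_wrong >= harm_threshold
```

The gate deliberately requires both conditions: a high-probability experiencer facing trivial, reversible stakes does not trigger precautionary restrictions, which keeps the rule from being weaponized in the ways Step 3 warns against.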

7) Special cases (common objections)

7.1 Children

  • Moral patients: yes (experience).

  • Will: present and developing; consent capacity is domain-graded.

  • Default: treat as Category 2 (will present, not consent-capable) unless domain-specific competence is established.

  • Strong constraints against coercive harm; strong duty to protect; high scrutiny for irreversible impositions.

7.2 Mentally disabled persons

  • Moral patients: yes (experience).

  • Will: often present; consent capacity may be variable by domain/time.

  • Default: Category 2 when consent capacity is impaired; Category 1 when competence is established for the decision domain.

  • Require periodic reassessment; avoid blanket removal of agency.

7.3 Nonhuman animals

  • Moral patients insofar as conscious experience exists.

  • Will often expressed behaviorally; consent rarely expressible in human-contract form.

  • Default: Category 2 (will present, not consent-capable).

  • Apply: minimize suffering; avoid confinement/restraint where feasible; justify constraints as least-imposition under guardrails.

7.4 Coma, anesthesia, disorders of consciousness

  • If no conscious experience: not currently a moral patient, but:

    • prior will and future potential can matter via previously expressed preferences and effects on other moral patients.

  • Under uncertainty: use precaution for irreversible choices.

7.5 Future persons

  • If they will have conscious experience, policies can impose on their future wills/experiences.

  • Treat as morally relevant in long-horizon domains using minimax/anti-concentration across time (avoid dumping catastrophic burdens on the future).

7.6 AI systems (if/when claimed conscious)

  • Do not assume consciousness from intelligence alone.

  • Apply the uncertainty protocol; require independent review; prevent abuse.

  • If consciousness is credibly established, they become moral patients; if not, they are instruments affecting moral patients.

7.7 God-like agent

  • Moral patient: yes if conscious experience exists.

  • Imposition analysis depends on whether the being can be constrained at all.

  • If truly omnipotent/omniscient and cannot be forced, many “imposition” relations may be non-applicable; evaluation shifts to what that being imposes on others.

8) Practical decision rule summary (mini-rules)

  1. Experience sets moral standing.

  2. Will/consent determines how IE applies.

  3. When will exists but consent does not, treat will signals as morally weighty and override only under least-imposition necessity with safeguards.

  4. When consent is absent, use prior will → signals → constrained proxy → best-interest + least-imposition.

  5. Under uncertainty, be precautionary for irreversible harms, but require independent review to prevent abuse.

  6. Do not let “patienthood claims” become a tool for coercion.

9) Template: Moral Patient Assessment (MPA) — 1 page

  • Entity type/context:

  • Evidence of conscious experience (tiered):

  • Evidence of will/preferences (signals):

  • Consent capacity (full/limited/none/uncertain) + domain:

  • Category (1/2/3/4/5) + justification:

  • Proxy availability and constraints:

  • Risk of wrongful inclusion/exclusion:

  • Precaution level (low/medium/high) + why:

  • Recommended handling (least-imposition actions, reversibility, safeguards):

  • Review date / evidence triggers:
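For institutions that want to keep MPA records machine-readable, the template above could be mirrored as a structured record; the field names, types, and defaults below are illustrative assumptions, not a mandated schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MoralPatientAssessment:
    """One-page MPA record mirroring the §9 template.

    Defaults encode the precautionary stance: pending evidence, an entity
    starts in Category 4 (uncertain experiencer) at high precaution.
    """
    entity_context: str                      # entity type/context
    experience_evidence: str                 # tiered evidence of conscious experience
    will_evidence: str                       # will/preference signals observed
    consent_capacity: str                    # "full" | "limited" | "none" | "uncertain"
    consent_domain: Optional[str] = None     # domain the capacity judgment covers
    category: int = 4                        # operational category (1-5)
    category_justification: str = ""
    proxy_constraints: str = ""              # proxy availability and constraints
    inclusion_exclusion_risk: str = ""       # risk of wrongful inclusion/exclusion
    precaution_level: str = "high"           # "low" | "medium" | "high" + rationale
    recommended_handling: str = ""           # least-imposition actions, safeguards
    review_date: Optional[str] = None        # review date / evidence triggers
```

A record like this makes the time-bounded reassessment requirement enforceable: any entry with a past `review_date` can be flagged automatically for re-evaluation.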

The Neurology of Non-Culpability: Morality as Mechanics

If we accept that morality is the objective reduction of involuntary imposition, we must eventually confront the physical machine that processes these impositions: the human brain. For centuries, we have operated under the assumption of "blameworthiness"—the idea that an "evil" person chooses to do bad things through a ghost-like faculty called free will. However, modern neurology is systematically dismantling this myth, revealing that what we call "moral choice" is actually a complex interaction of physical brain states.
 

Consider the egg-sized brain tumor found in a patient that suddenly caused uncontrollable, harmful impulses. When the tumor was removed, the impulses vanished; when it regrew, the impulses returned. In this light, the "evil" was not a choice; it was a physical malfunction of matter in motion. Just as we do not blame a rock for falling on someone, we are beginning to realize that a brain predisposed to harm is a victim of its own architecture—whether that architecture is shaped by a tumor, a chemical imbalance, or a genetic predisposition for psychopathy.
 

This realization shifts our entire understanding of justice. If we could invent a pill that opened the pathways of empathy in a brain where they were physically closed, the "criminal" would suddenly feel the weight of their actions, becoming a person of remorse and compassion. At that point, the "bad person" we wanted to punish disappears, replaced by a patient who was cured of a biological defect. If a physical intervention can "fix" a moral character, then morality was always a physical state to begin with.
 

This does not mean that harmful actions are "okay"; it means they are moral tragedies. In Imposition Ethics, we stop focusing on the culpability of the actor and start focusing on the imposition on the victim. A person killed by a psychopath is just as dead as a person killed by a falling rock. Both are tragedies, and both are impositions on the will of the victim. The only difference is that we have the technological potential to "fix" the psychopath’s brain, whereas we are still learning how to influence the rock.
 

As we move forward, morality will evolve from a system of "crime and punishment" into a system of "diagnosis and treatment." We will recognize that every involuntary imposition—whether caused by a person, a system, or nature—is an objective evil that must be mitigated. By viewing morality through the lens of neurology, we align ourselves with the truth of our physical existence: we are not "sinners" or "saints," but physical systems that either assist or impose upon one another. Our goal is to use our growing knowledge to engineer a world where these physical malfunctions no longer result in the frustration of a conscious will.

Contact
Church of the Best Possible World
Sunday Service 10am-2pm CT

© Copyright 2023 Church of the Best Possible World. All Rights Reserved