
Note (provisional status)

This Moral Patient Boundary specification is provisional and intended as practical conceptual infrastructure for applying Imposition Ethics (IE) to questions of moral standing (who counts as a morally relevant agent, and how degrees of agency affect evaluation). It is not a sealed or authoritative specification; it may contain omissions, unhandled edge cases, and implementation assumptions that should be revised as experience, criticism, and better evidence accumulate.

Use this as iterable infrastructure:

  • adopt what is useful,

  • test it against real disputes (animals, infants, dementia, coma, AI claims, future persons),

  • document failure modes (false positives/negatives, abuse incentives, measurement overreach),

  • and update rules when better science or clearer operational proxies become available.

No part of this boundary work is moral “permission” to impose. It only determines who is in scope for IE’s imposition/assistance evaluation and how uncertainty is handled.

Moral Patient Boundary

Defining Morally Relevant Agents in Imposition Ethics (IE) — Provisional v0.2 (Copy-Ready)

1) Core boundary rule

A morally relevant agent (moral patient) is anything with conscious experience.

  • If an entity has phenomenal experience (“what it is like” to be it), it is in scope.

  • IE’s core evaluation then applies:

    • involuntary imposition on its will (if it has will) is immoral,

    • voluntary assistance of its will (if it has will) is moral,

    • and even where “will” is minimal or unclear, experience grounds moral relevance.

2) Two-layer model: Experience vs Will (clarifies edge cases)

IE treats moral standing as grounded in conscious experience, but moral evaluation often depends on will (preferences/intentions/boundaries). These are separable.

Layer A — Conscious Experience (Standing)

  • Determines whether the entity matters morally at all.

Layer B — Will / Agency (Type and Degree of Moral Relations)

  • Determines what kinds of imposition/assistance are possible and how to interpret consent.

This avoids a common confusion:
An entity can be morally relevant (has experience) even if it has weak, undeveloped, or hard-to-measure “will.”

3) Spectrum of morally relevant agents (experience-to-agency continuum)

The spectrum runs explicitly from minimal experience with near-zero “will” to an omniscient, omnipotent God.

The spectrum (conceptual endpoints)

Endpoint 1: Minimal experiencer

  • Has conscious experience but:

    • no discernible desires,

    • no stable preferences,

    • no planning or “mental processing power” beyond bare experience.

  • Moral relevance: yes (experience exists).

  • Will-imposition analysis: limited; focus shifts to protecting experience from negative imposition states (pain, distress, deprivation) and avoiding invasive manipulation.

Endpoint 2: Typical human adult

  • Rich experience + robust will formation + explicit consent capabilities.

Endpoint 3: Maximally capable agent (e.g., God-like)

  • Experience + maximal will + maximal knowledge/power.

  • Moral relevance: yes.

  • Special implication: consent, coercion, and vulnerability may be radically different (a being that cannot be constrained may be hard or impossible to impose on). IE evaluation becomes sensitive to whether imposition is even conceptually possible.

Practical point

IE does not require a sharp metaphysical “line.” It needs:

  • a standing threshold (any conscious experience),

  • and an operational agency classification for real-world decisions.

4) Operational classifications (for real policy and disputes)

To apply IE without pretending we can solve consciousness perfectly, classify entities into operational categories.

Category 1 — Clear conscious experiencer + clear will + consent-capable

Examples: most adult humans; many cognitively typical adolescents; others with reliable preference expression and informed consent capacity.
Handling: normal consent-based analysis.

Category 2 — Clear conscious experiencer + will present + not consent-capable

Definition: beings who demonstrably have preferences, intentions, and boundaries (“will”), but lack the intellectual/cognitive capacity to give informed, specific, revocable consent.

Includes (explicit):

  • Children (especially infants, toddlers, and younger minors; older minors vary by domain)

  • Mentally disabled persons with impaired consent capacity (domain-specific; can vary over time)

  • Nonhuman animals (generally: will present behaviorally; consent rarely expressible in the human contractual sense)

Handling (IE default):

  1. Treat them as full moral patients (experience grounds standing).

  2. Treat “consent” as limited, domain-specific, or unavailable.

  3. Use a constrained substitute decision protocol:

    • Known will signals (behavioral preferences, aversions, stable routines)

    • Least-imposition protection of boundaries (avoid restraint, confinement, invasive procedures unless necessary)

    • Best-interest + least-imposition when will is ambiguous (minimize severe negative experience; maximize reversibility)

  4. Require anti-abuse safeguards for proxies/institutions:

    • no retaliation leverage,

    • independent review for high-imposition actions,

    • time bounds and periodic reassessment,

    • repair obligations where wrongful imposition occurs.

Special rule (non-consent-capable will-bearers):
When their will is knowable (signals, patterns, distress/avoidance), treat overriding it as imposition unless it prevents greater imminent imposition (e.g., stopping self-harm or severe danger), and then apply minimum necessary force with repair.

Category 3 — Clear conscious experiencer + limited/uncertain will

Examples: severe delirium states; some advanced dementia states; certain impaired states where preference signals are unreliable.
Handling:

  • treat as moral patients,

  • consent typically absent,

  • rely on best-interest proxies plus minimal imposition and reversibility.

Category 4 — Likely conscious experiencer (uncertain) + unclear will

Examples: borderline animal cases; disputed clinical states; emerging AI claims (if ever).
Handling: apply the uncertainty protocol (see §6).

Category 5 — No evidence of conscious experience

Examples: rocks, most artifacts, non-sentient software, dead bodies.
Handling: not moral patients under the core boundary rule (though they may matter instrumentally due to effects on moral patients).
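For readers who want to operationalize the five categories, here is a minimal sketch in Python. The enum names, the coarse evidence labels ("clear", "uncertain", "none"), and the classifier itself are illustrative assumptions, not part of the specification:

```python
from enum import Enum

class MPCategory(Enum):
    """Operational categories from section 4 (hypothetical encoding)."""
    CONSENT_CAPABLE = 1        # clear experience + clear will + consent-capable
    WILL_NO_CONSENT = 2        # clear experience + will present, not consent-capable
    UNCERTAIN_WILL = 3         # clear experience + limited/uncertain will
    UNCERTAIN_EXPERIENCE = 4   # likely (but uncertain) conscious experiencer
    NO_EVIDENCE = 5            # no evidence of conscious experience

def classify(experience: str, will: str, consent_capable: bool) -> MPCategory:
    """Map coarse evidence labels to an operational category.

    `experience` and `will` take the values "clear", "uncertain", or "none".
    These labels are placeholders for the tiered evidence assessment in
    section 6; real classification would rest on convergent indicators.
    """
    if experience == "none":
        return MPCategory.NO_EVIDENCE
    if experience == "uncertain":
        return MPCategory.UNCERTAIN_EXPERIENCE
    # From here on, experience is "clear": the entity has standing.
    if will == "clear" and consent_capable:
        return MPCategory.CONSENT_CAPABLE
    if will == "clear":
        return MPCategory.WILL_NO_CONSENT
    return MPCategory.UNCERTAIN_WILL
```

Note the ordering: standing (experience) is settled first, and only then does will/consent refine the category, mirroring the two-layer model in section 2.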

5) Consent and “will” when will is present but consent is not

Since IE’s classic terms are “will” and “consent,” a rule is needed for cases where:

  • will is present,

  • but informed consent cannot be expressed.

IE rule: Substitute decision hierarchy (consent absent or invalid)

Use this priority order:

  1. Known prior will (advance directives; previously stable preferences)

  2. Reliable present signals (behavioral cues; consistent expressions; avoidance/distress)

  3. Authorized proxy with constraints (guardian/medical proxy)

  4. Best-interest + least-imposition default

    • minimize severe negative experience (pain, terror, deprivation),

    • maximize reversibility,

    • avoid irreversible body/agency invasions unless strictly necessary for survival/safety.

Proxy constraints (required)

A proxy decision must:

  • be evidence-tracking (reassess as signals change),

  • be least-imposing among feasible options,

  • be time-bounded for restrictive measures,

  • be reviewable/appealable by independent parties (institutional setting),

  • avoid conflicts of interest (no proxy benefit from the imposition).
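The priority order above is a strict fallback chain: each source is consulted only if every higher-priority source is unavailable. A minimal sketch, assuming each source either yields a supported decision or is absent (all names are illustrative):

```python
from typing import Optional, Tuple

def substitute_decision(prior_will: Optional[str],
                        present_signals: Optional[str],
                        proxy_judgment: Optional[str]) -> Tuple[str, str]:
    """Walk the section 5 substitute decision hierarchy.

    Each argument is the decision that source would support, or None if
    that source is unavailable or unreliable. Returns (basis, decision).
    Proxy constraints (evidence-tracking, time bounds, review) are assumed
    to be enforced outside this function.
    """
    if prior_will is not None:                       # 1. known prior will
        return ("known prior will", prior_will)
    if present_signals is not None:                  # 2. reliable present signals
        return ("reliable present signals", present_signals)
    if proxy_judgment is not None:                   # 3. constrained proxy
        return ("authorized proxy (constrained)", proxy_judgment)
    # 4. fall back to the best-interest + least-imposition default
    return ("best-interest + least-imposition default",
            "choose the least-imposing, most reversible feasible option")
```

The point of returning the basis alongside the decision is auditability: a reviewer can check that no lower-priority source was used while a higher-priority one was available.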

6) Consciousness uncertainty protocol (prevents abuse and overreach)

Because “who is conscious” can be weaponized, apply a disciplined protocol.

Step 1 — Evidence basis (allowed kinds)

Use convergent indicators (not a single claim):

  • behavioral responsiveness and learning

  • nociception/pain behavior (where relevant)

  • neuro/physiological correlates (in animals/humans)

  • functional integration and flexible goal-directed behavior (limited relevance; not sufficient alone)

Step 2 — Precaution rule (default under uncertainty)

If there is a non-trivial chance an entity is conscious, treat it as provisionally in scope when the cost of being wrong is high (irreversible harm, severe suffering), but avoid:

  • invasive surveillance,

  • unnecessary restriction of humans,

  • or creating perverse incentives.

Step 3 — Anti-abuse rule (prevents “fake patient” manipulation)

Claims that an entity is a moral patient must not be used to:

  • evade accountability,

  • block oversight,

  • extract resources via coercion,

  • or impose on others without independent review.

For institutional decisions, require:

  • independent assessment (ethics + evidence office),

  • public documentation of the evidence tier,

  • time-bounded provisional status.
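Step 2’s precaution rule is a two-factor test: a non-trivial chance of consciousness AND a high cost of being wrong. A sketch, where the 0.05 threshold is a placeholder assumption rather than part of the protocol:

```python
def provisionally_in_scope(p_conscious: float,
                           harm_irreversible: bool,
                           threshold: float = 0.05) -> bool:
    """Section 6, Step 2 sketch: treat an entity as provisionally in scope
    when the chance it is conscious is non-trivial AND the cost of wrongly
    excluding it is high (irreversible harm, severe suffering).

    The numeric threshold is illustrative; the spec only requires
    "non-trivial chance," assessed from convergent indicators.
    """
    return p_conscious >= threshold and harm_irreversible
```

Both conjuncts matter: a high probability with no irreversible stakes does not trigger provisional inclusion, which is what keeps the rule from licensing the surveillance and restriction overreach the same step warns against.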

7) Special cases (common objections)

7.1 Children

  • Moral patients: yes (experience).

  • Will: present and developing; consent capacity is domain-graded.

  • Default: treat as Category 2 (will present, not consent-capable) unless domain-specific competence is established.

  • Strong constraints against coercive harm; strong duty to protect; high scrutiny for irreversible impositions.

7.2 Mentally disabled persons

  • Moral patients: yes (experience).

  • Will: often present; consent capacity may be variable by domain/time.

  • Default: Category 2 when consent capacity is impaired; Category 1 when competence is established for the decision domain.

  • Require periodic reassessment; avoid blanket removal of agency.

7.3 Nonhuman animals

  • Moral patients insofar as conscious experience exists.

  • Will often expressed behaviorally; consent rarely expressible in human-contract form.

  • Default: Category 2 (will present, not consent-capable).

  • Apply: minimize suffering; avoid confinement/restraint where feasible; justify constraints as least-imposition under guardrails.

7.4 Coma, anesthesia, disorders of consciousness

  • If no conscious experience: not currently a moral patient, but:

    • prior will and future potential can matter via previously expressed preferences and effects on other moral patients.

  • Under uncertainty: use precaution for irreversible choices.

7.5 Future persons

  • If they will have conscious experience, policies can impose on their future wills/experiences.

  • Treat as morally relevant in long-horizon domains using minimax/anti-concentration across time (avoid dumping catastrophic burdens on the future).

7.6 AI systems (if/when claimed conscious)

  • Do not assume consciousness from intelligence alone.

  • Apply the uncertainty protocol; require independent review; prevent abuse.

  • If consciousness is credibly established, they become moral patients; if not, they are instruments affecting moral patients.

7.7 God-like agent

  • Moral patient: yes if conscious experience exists.

  • Imposition analysis depends on whether the being can be constrained at all.

  • If truly omnipotent/omniscient and cannot be forced, many “imposition” relations may be non-applicable; evaluation shifts to what that being imposes on others.

8) Practical decision rule summary (mini-rules)

  1. Experience sets moral standing.

  2. Will/consent determines how IE applies.

  3. When will exists but consent does not, treat will signals as morally weighty and override only under least-imposition necessity with safeguards.

  4. When consent is absent, use prior will → signals → constrained proxy → best-interest + least-imposition.

  5. Under uncertainty, be precautionary for irreversible harms, but require independent review to prevent abuse.

  6. Do not let “patienthood claims” become a tool for coercion.

9) Template: Moral Patient Assessment (MPA) — 1 page

  • Entity type/context:

  • Evidence of conscious experience (tiered):

  • Evidence of will/preferences (signals):

  • Consent capacity (full/limited/none/uncertain) + domain:

  • Category (1/2/3/4/5) + justification:

  • Proxy availability and constraints:

  • Risk of wrongful inclusion/exclusion:

  • Precaution level (low/medium/high) + why:

  • Recommended handling (least-imposition actions, reversibility, safeguards):

  • Review date / evidence triggers:
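For teams that keep MPA records electronically, the template maps naturally onto a record type. The field names and types below are one possible mapping, not part of the specification:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MoralPatientAssessment:
    """One-page MPA record from section 9 (illustrative field mapping)."""
    entity_context: str               # entity type/context
    experience_evidence: str          # tiered evidence of conscious experience
    will_evidence: str                # observed will/preference signals
    consent_capacity: str             # "full" | "limited" | "none" | "uncertain"
    consent_domain: str               # domain the capacity judgment covers
    category: int                     # 1-5, per section 4
    category_justification: str
    proxy_constraints: str            # proxy availability and constraints
    inclusion_exclusion_risk: str     # risk of wrongful inclusion/exclusion
    precaution_level: str             # "low" | "medium" | "high"
    precaution_rationale: str
    recommended_handling: str         # least-imposition actions, safeguards
    review_date: str
    evidence_triggers: List[str] = field(default_factory=list)
```

Keeping `review_date` and `evidence_triggers` as required parts of the record is what makes the time-bounded, evidence-tracking constraints of sections 5 and 6 enforceable in practice.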

Contact
Church of the Best Possible World
Sunday Service 10am-2pm CT

© Copyright 2023 Church of the Best Possible World. All Rights Reserved