Representing moral judgements

A bit of formalism

X = set of options; x, y, z ∈ X are actions. 𝒦 = set of contexts (moral choice situations).

Each K ∈ 𝒦 is a possible decision problem; [K] ⊆ X is the set of options available in K.

R -- the rightness function: R: 𝒦 -> 2^X with R(K) ⊆ [K] (for each context, the set of permissible actions).

Why do we need a "representation" of R?

We could in principle store R as a lookup table, but the table would be enormous: one entry per possible context. That is not feasible in practice, and moral learning would be very hard too, since a table offers no way to generalize from observed cases to new contexts.
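To make this concrete, here is a minimal Python sketch (all names and data are invented, not from the talk) of the lookup-table representation; it needs one stored entry per context and says nothing about contexts it has not seen:

```python
from typing import Dict, FrozenSet

Option = str   # x, y, z ∈ X
Context = str  # K ∈ 𝒦, labelled by a string here

# Available options per context: K ↦ [K] ⊆ X
available: Dict[Context, FrozenSet[Option]] = {
    "K1": frozenset({"x", "y"}),
    "K2": frozenset({"x", "y", "z"}),
}

# Rightness as a raw lookup table: one entry per context. With |𝒦|
# contexts this needs |𝒦| stored entries and offers no generalization.
rightness_table: Dict[Context, FrozenSet[Option]] = {
    "K1": frozenset({"x"}),
    "K2": frozenset({"x", "z"}),
}

def R(K: Context) -> FrozenSet[Option]:
    return rightness_table[K]  # KeyError on any unseen context
```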

Consequentialization approach

We don't remember R as a table; instead we remember a ranking on X (which we can develop into a utility function U(x)) and then:

R(K) = argmax_{x ∈ [K]} U(x).

However, this only works if the ranking of options is the same regardless of context, which does not match our moral intuitions. For example, norms of politeness when picking a piece of cake: one must not pick the biggest piece unless there is no other choice. Whether picking a given piece is permissible then depends on which other pieces are available, so no single context-independent U can reproduce R.

Even when it works, this representation is still quite data-heavy.
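Here is a rough Python sketch of consequentialization in the same toy style (U is invented), with a comment spelling out how the cake norm defeats it:

```python
from typing import Dict, FrozenSet, Set

Option = str
U: Dict[Option, float] = {"big": 3.0, "medium": 2.0, "small": 1.0}

def R_consequentialized(opts: FrozenSet[Option]) -> Set[Option]:
    """R(K) = argmax_{x ∈ [K]} U(x), with opts standing in for [K]."""
    best = max(U[x] for x in opts)
    return {x for x in opts if U[x] == best}

# The politeness norm ("never take the biggest piece unless it is the
# only one") cannot be captured by ANY fixed U:
#   [K]  = {big, medium, small}: medium and small are both permissible
#          => requires U(medium) = U(small)
#   [K'] = {medium, small}: only small is permissible (medium is now
#          the biggest) => requires U(small) > U(medium)
# No context-independent U satisfies both constraints.
print(R_consequentialized(frozenset({"big", "medium", "small"})))  # {'big'}
```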

Reasons-based approach

Represent R by (1) which properties of an option x ∈ X are relevant in a context K, and (2) a criterion for comparing the resulting bundles of properties.

Option-context pairs (x, K) may have properties P ∈ 𝒫; write [P] ⊆ X × 𝒦 for the set of pairs that have property P.

Types of properties:

- Intrinsic property: (x, K) ∈ [P] <=> (x, K') ∈ [P] for all contexts K, K' -- depends only on the option.
- Context property: (x, K) ∈ [P] <=> (x', K) ∈ [P] for all options x, x' -- depends only on the context.
- Relational property: all the others.
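A small sketch of these three definitions, treating a property's extension [P] as a set of option-context pairs over a toy universe (all names invented):

```python
from typing import FrozenSet, Tuple

Option = str
Context = str
Pair = Tuple[Option, Context]

X: FrozenSet[Option] = frozenset({"x", "y"})
Ks: FrozenSet[Context] = frozenset({"K1", "K2"})

def is_intrinsic(ext: FrozenSet[Pair]) -> bool:
    # (x, K) ∈ [P] <=> (x, K') ∈ [P]: membership depends only on the option
    return all(((x, K) in ext) == ((x, K2) in ext)
               for x in X for K in Ks for K2 in Ks)

def is_context_property(ext: FrozenSet[Pair]) -> bool:
    # (x, K) ∈ [P] <=> (x', K) ∈ [P]: membership depends only on the context
    return all(((x, K) in ext) == ((x2, K) in ext)
               for K in Ks for x in X for x2 in X)

def is_relational(ext: FrozenSet[Pair]) -> bool:
    # everything that is neither intrinsic nor a context property
    return not is_intrinsic(ext) and not is_context_property(ext)

# e.g. "x is a lie" would be intrinsic, "the choice is public" a context
# property, "x is the biggest available option" relational.
print(is_intrinsic(frozenset({("x", "K1"), ("x", "K2")})))  # True
```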

Reasons structure: R = (N, ≥), where N determines which properties are relevant in each context and ≥ compares bundles of them.

- Example: a utilitarian reasons structure -- the relevant property of each option is the utility it produces, and ≥ is the ordering induced by U(x).

Then:

N(x, K) := P(x, K) ∩ N(K)

where P(x, K) is the set of all properties of (x, K).

Right(K) = { x ∈ [K] | ∀x' ∈ [K]: N(x, K) ≥ N(x', K) }
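Putting the pieces together, here is an illustrative Python sketch of a reasons structure for the cake example; the property encoding, N, and the weighing relation ≥ are all invented for illustration, not taken from the talk:

```python
from typing import Dict, FrozenSet, Set, Tuple

Option = str
Context = str
Prop = str
Bundle = FrozenSet[Prop]

# Toy cake setting: the pieces on offer shrink as the party goes on.
available: Dict[Context, FrozenSet[Option]] = {
    "K_full":  frozenset({"big", "medium", "small"}),
    "K_later": frozenset({"medium", "small"}),
}
sizes: Dict[Option, int] = {"big": 3, "medium": 2, "small": 1}

def P(x: Option, K: Context) -> Bundle:
    """P(x, K): all properties of the option-context pair."""
    props = {f"size_{sizes[x]}"}                      # intrinsic
    if sizes[x] == max(sizes[y] for y in available[K]):
        props.add("biggest")                          # relational
    return frozenset(props)

def N_ctx(K: Context) -> Bundle:
    """N(K): the properties deemed relevant in context K."""
    return frozenset({"biggest", "size_1", "size_2", "size_3"})

def N(x: Option, K: Context) -> Bundle:
    """N(x, K) := P(x, K) ∩ N(K)."""
    return P(x, K) & N_ctx(K)

def bundle_geq(a: Bundle, b: Bundle) -> bool:
    """≥ on bundles: politeness first (non-biggest beats biggest),
    then bigger size. A purely illustrative weighing relation."""
    def score(s: Bundle) -> Tuple[int, int]:
        polite = 0 if "biggest" in s else 1
        size = max((int(p[-1]) for p in s if p.startswith("size_")), default=0)
        return (polite, size)
    return score(a) >= score(b)

def right(K: Context) -> Set[Option]:
    """Right(K) = { x ∈ [K] | ∀x' ∈ [K]: N(x, K) ≥ N(x', K) }."""
    return {x for x in available[K]
            if all(bundle_geq(N(x, K), N(xp, K)) for xp in available[K])}

print(right("K_full"))   # {'medium'}: the biggest piece is ruled out
print(right("K_later"))  # {'small'}: 'medium' is now the biggest piece
```

Note how the same relational property ("biggest") excludes different options in different contexts, which is exactly what the single context-independent utility function could not do.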

Taxonomy

Some reasons structures are much easier to learn than others (the more regular ones: monist, atomistic, etc.).

Teaching

When teaching or learning a reasons structure, we value simplicity, extrapolability, and correctness.

Website: http://personal.lse.ac.uk/LIST/

Q/A:

- What about representing virtue ethics? Virtues can be seen as constraints on permissible reasons. Reasons structures that produce the same outcomes may still differ from a virtue-ethics point of view, because there the reasons themselves matter.
- What about properties that are desirable in some contexts and undesirable in others?
- How much access do we have to our own reasons structure? Unclear, but perhaps some -- and in any case we can still learn it.
- How do we think about uncertainty about the consequences of actions? Options could be lotteries, and relevant properties could then include expected utility, etc. (see the sketch after this list).
- Options can also be ...
- Agent-relative properties (include the agent in the context to represent general morality).
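On the uncertainty question, a minimal sketch (utilities invented) of options as lotteries, with expected utility available as a relevant property:

```python
from typing import Dict

Lottery = Dict[str, float]  # outcome -> probability

utility: Dict[str, float] = {"good": 1.0, "bad": 0.0}  # made up

def expected_utility(lottery: Lottery) -> float:
    return sum(p * utility[o] for o, p in lottery.items())

# A lottery-option's relevant bundle could then include a property
# like "has expected utility 0.7", compared by ≥ as before.
risky: Lottery = {"good": 0.7, "bad": 0.3}
print(expected_utility(risky))  # 0.7
```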