[What follows is abstract, perhaps to the point of uselessness. It is what Venkatesh Rao calls ‘refactoring’, something of an exercise in unpacking moral questions by translating them into a very different kind of question. If that kind of thing upsets you, you may want to stop here.]
At its simplest, a cellular automaton is a grid of cells (think pixels), each in one of two states (on or off), that changes over time following simple rules applied to each cell, e.g. if at time t1 a cell is on and x of its neighbors are on, turn the cell off at t2. This simple setup can produce surprising and complex results, and as a result it has gotten a lot of interest from people studying complexity and emergence.
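The setup above fits in a few lines of code. This is a minimal sketch: a one-dimensional grid with wrap-around neighbors, and an illustrative rule of the "if a cell is on and x neighbors are on, turn it off" form (the specific thresholds here are my own arbitrary choice, not anything fixed by the idea).

```python
def step(cells):
    """One time step: apply the same local rule to every cell.
    Illustrative rule: a live cell stays on only with exactly one
    live neighbor; a dead cell turns on with exactly two."""
    n = len(cells)
    nxt = []
    for i in range(n):
        live_neighbors = cells[(i - 1) % n] + cells[(i + 1) % n]
        if cells[i] == 1:
            nxt.append(1 if live_neighbors == 1 else 0)
        else:
            nxt.append(1 if live_neighbors == 2 else 0)
    return nxt

# Run the automaton forward a few steps from an arbitrary seed.
grid = [0, 0, 1, 1, 0, 1, 0, 0]
for _ in range(4):
    grid = step(grid)
```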
We can also make the cellular automaton more complicated: adding more states, abstracting from a regular grid to a highly connected network, and using more elaborate rules. The idea is the same: cells change over time following rules, and complex behavior emerges.
Now consider society as such a system, with each person as a node in a highly connected network, each with many, many possible states. Further, take morality to be the set of rules that governs each node. Different nodes have different rules, but that’s OK, so long as each node’s state transitions are governed by its rules. This is the moral automaton.
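To make the generalization concrete, here is a hedged sketch of that structure: an arbitrary network rather than a grid, with each node carrying its own transition rule. Every name here (`tick`, `imitate_majority`, the toy three-node network) is an illustrative invention, not anything the moral automaton idea commits to.

```python
def tick(states, neighbors, rules):
    """One synchronous update: each node applies its *own* rule to its
    current state and the states of its neighbors."""
    return {
        node: rules[node](states[node], [states[m] for m in neighbors[node]])
        for node in states
    }

def imitate_majority(own, nbr_states):
    """Example per-node rule: adopt the majority state among neighbors;
    a tie keeps the current state."""
    ones = sum(nbr_states)
    zeros = len(nbr_states) - ones
    if ones > zeros:
        return 1
    if zeros > ones:
        return 0
    return own

# A toy fully connected three-person network where everyone happens to
# follow the same rule (nothing requires this; rules can differ per node).
states = {"a": 1, "b": 0, "c": 1}
neighbors = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
rules = {node: imitate_majority for node in states}
new_states = tick(states, neighbors, rules)
```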
A few questions come to mind:
- Do we care about the moral automaton, as distinct from the nodes (i.e. is the automaton itself a moral patient)?
- What is the best set of rules, if we assume everyone will use the same set? (I leave ‘best’ undefined here, so import whatever that is from your rule set… er, moral system)
- What is the best set of rules for a node, given uncertainty about what rules other nodes will follow?
Thinking of morality and social interaction in this way seems to pose a problem for consequentialists of any type. Even the simple cellular automata we started with are known to behave unpredictably. For example, see the behavior of elementary cellular automata: the patterns that emerge are hard to predict from the rules, and in some cases the patterns are impossible to predict even in principle, because some of the rules are capable of universal computation, and predicting the long-run output of a universal computer runs into the halting problem. If moral rules act this way in society, the result for the moral automaton is likely impossible to predict, and depending on the rules the outcome may be unpredictable in principle. What then can the consequentialist say in defense of any rule set? This is not the only way to raise this objection, but this framing gives us mathematical certainty that consequences cannot be predicted even in principle.
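The elementary cellular automata mentioned above are about as simple as the setup gets: one dimension, two states, and a next state determined only by a cell and its two neighbors, so an entire rule is just eight bits. The sketch below steps Rule 30, the classic example of a trivially simple rule producing chaotic, hard-to-predict output (Rule 110 is the one proven capable of universal computation):

```python
def elementary_step(cells, rule_number):
    """One step of an elementary cellular automaton. The rule number's
    eight bits give the next state for each of the eight possible
    (left, self, right) neighborhoods, with wrap-around at the edges."""
    rule = [(rule_number >> i) & 1 for i in range(8)]
    n = len(cells)
    return [
        rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# Rule 30 from a single live cell: the standard demonstration of
# chaotic growth from a minimal seed.
row = [0] * 31
row[15] = 1
for _ in range(5):
    row = elementary_step(row, 30)
```

The point for the argument above is that even with the full rule table in hand (eight bits!), the only general way to know what the grid looks like at step t is to run all t steps.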
But perhaps the most interesting question to me is how the node relates to the moral automaton. Should rules be chosen by appeal to the global behavior of the automaton? Is morality about the well-functioning of the automaton (which is roughly my claim here, if put into different terms)? We can’t predict outcomes, but we can observe the global behavior of the automaton and see if it’s producing the right kind of chaotic output: processing information, generating novelty, etc. Since boring outcomes like “all cells off” are easy to predict, is chaotic output at the global level the best we can aim for?
Morality is ultimately grounded in intuition, so it might not be possible to think about it using an unintuitive construct like the moral automaton. But it’s also hard to say that the moral automaton doesn’t accurately capture an aspect of morality that is underexplored by the major schools.