John Robinson's pages on
A Doubtful Utopia
Designing the ideal state is easy if you know what 'good' means and if you're omnipotent. You simply figure out how to get the maximum amount of good (that is, how to get the ideal), then enforce your solution. Possible ideal worlds include:
Dutopia The only good is a good will. Everyone follows the categorical imperative.
Futopia Progress is all. Everyone is engaged in going further, faster and better.
Trutopia There is objective reality and the ultimate value is the pursuit of truth.
Cutopia The state exists to bring forth beauty.
Glutopia The ideal is social cohesion. Everyone lives for the communal good.
Mootopia A minimalist city of cattle (cf. Plato's city of pigs).
Brewtopia A state of constant Dionysian revelry.
Leibniz thought that God had the necessary qualities of knowing what good means and omnipotence, so his design, the world, must be ideal. As a logical claim, this is hard to fault; empirically, as Voltaire pointed out, it's ridiculous. Creators of theoretical and fictional Utopias are godlike, because in their own worlds they rule on what good is, design their state, then make sure their citizens do the right thing. By their own standards they depict a place better than our own, so they out-God God. Dystopian writers, on the other hand, create states based on conceptions of goodness that their fiction then challenges. Dystopians are sceptical about ideals, whereas Utopians embrace them. At least, earnest Utopians embrace ideals. Utopia and irony, being at seemingly opposite literary poles, attract each other. A superficially Utopian vision may be deliberately self-undermining. It's an open question how far writers like Plato and More intended their creations to provoke not agreement but reaction, spurring their readers to doubt the very possibility of an ideal state. This ambiguity is one reason why their Utopian writings are more stimulating, and, perhaps, more effectual, than earnest, irony-proof efforts like Smallwood's.
But let's suppose that you are an earnest Utopian who doesn't know what good means (or at least, you're not sure), and you're not omnipotent. That is, you want to design the ideal state, but you see your intuitions about goodness as provisional, and you're aware that getting people to acquiesce to your form of state might be tricky. What do you do? I claim that even from such a tentative position, it's possible to design a Utopia. This paper explains how. Its title means, in the first instance, 'A Utopia born of doubt' - a Utopia that acknowledges the uncertainty of goodness and the vagaries of real human beings. Beginning from this sceptical position means that most of the paper is groundwork - establishing constraints and criteria for possible solutions to the Doubtful Utopia design problem. I do this through twelve assertions, some of which limit the possibilities for the ideal state, while others open up unexpected alternatives. These are discussed through examples both speculative and historical. Reckoning that it is more exciting to offer a proto-Utopia than just lay the groundwork, in the final pages of the paper I select some particular assertions to combine and point towards a solution. By the end, a partial Utopia emerges, still several institutions short of a state, but defined in its essential character.
The Utopia Design Problem
The problem of Utopia, expressed as three assumptions and one question, is:
(a) If there should be a state
- and it is possible to argue, as the anarchist does, that there should not be -
(b) and supposing the state can and does regulate the actions of its members
- with that regulation being codified and executed within a legal system (here called the 'rule of law'), expressed in social behaviour, ritual and tradition (here called 'custom'), and justified by a value system (here called 'morality'),
(c) and if the form of the state can be designed rationally
- as opposed to being an emergent phenomenon not subject to design (like, for example, the community of bees in a hive),
(d) what is the ideal form of state?
If the assumptions (a), (b), (c) are accepted, then (d) is not just a question for Utopian dreamers but a practical concern of political theory. But the assumptions have to carry a lot of weight. Here, I am going to accept (a) and (c) without discussion. Assumption (b) is, perhaps, just an explication of (a), but it draws attention to an essential part of the problem. Moral value, custom and the rule of law pervade the practical, weighty questions of Utopia, like: Is the possession of private property just? How is it acquired and transferred? What are the duties of the individual to the state? To neighbours? To offspring? Are criminals to be punished? Are victims to be compensated? Assumption (b) summarizes the idea that the form of a state rests on suppositions about value - that answers to large questions about morality, justice, community and persons are embedded in it. Perhaps, then, we have to tackle the full problem of the Good to tackle Utopia.
But why should the idea of the Good and its moral cognates be pre-suppositions of the design? Why not assume that morality and the idea of the good are themselves designed rather than discovered? This is the explicit view of relativists, and of others, like John Mackie, who believe that there is no absolute, objective morality waiting to be discovered. But it is also implicit in almost all secular moralities (the exception is Kant's), which are biologically contingent. Thus utilitarianism deals with pain and pleasure, which are emergent phenomena of sentient life on earth, but perhaps not necessary qualities of life or rationality. It is possible to imagine rational beings without pain sensitivity. Similarly, non-Kantian contractarianism sees morality as a social response to the contingency of different organisms having different consciousnesses and impulses that are self-directed. We might therefore say that these moralities are invented as responses to the particular circumstances of life on earth.
If good and right are products of invention, the Utopian designer has the freedom to adjust the moral framework as well as the social structure. Indeed, the design of the morality and the design of the state can be coupled so that neither has precedence. To go further: we can allow ourselves to accept, establish and promote any moral values whatsoever. We may, for example, allocate to persons unequal holdings, suffering or freedom without providing reasons. We begin with a blank moral slate.
But there is a worrying circularity if 'ideal' in 'ideal state' is judged solely on the basis of something - 'goodness' - that is itself a product of the design. 'Goodness' could be defined as no more than profusion of bananas, or length of fingernails, or self-destruction, and 'ideal' would simply be the maximization of these arbitrary qualities. We have to have some seed concept of goodness that interrupts the circularity and prevents degeneration towards arbitrary 'solutions'. Such a concept should be limited enough to exclude moral presuppositions. The very fact that we are designing an ideal state suggests what this seed concept should be.
To get to Utopia via design means specifying a problem, then following a process that leads to a solution. The problem includes the fact that the designer isn't omnipotent, and the process relies on design values. These values are rational and amoral concepts like coherence, parsimony and stability (of which more later), but not moral prescriptions like those of the various utopias mentioned above. According to the seed concept of goodness then, the 'ideal' state must solve the problem according to these design values - values that prevent the design process from thwarting itself.
Therefore let us set up the problem of Utopia in a new way:
(A) Given a problem: the creation of an ideal society, where 'ideal' means the optimal quantity of some quality 'good', initially undefined,
(B) Using a reasoning process called 'design' which attempts by rational means to proceed from a problem to a solution,
(C) Constrained by relevant empirical knowledge about human beings and their social behaviour,
(D) Find two dependent structures: (i) Morality, according to which Good will be defined (and thus also 'Ideal') and (ii) The Ideal State.
In this problem statement, 'good' starts undefined in (A) and is grown during (B) from the seed concept of design values. The morality that emerges in (D) cannot contradict the values (coherence, parsimony, etc.) that are used to seed the design. But it will go beyond those values in applying to the state, custom and law. And the designed state will be ideal according to the whole morality so developed.
Constraint (C) requires us to be clear about the subject matter of the problem: it concerns human beings and the way they interact. It would be foolhardy to attempt a design on the basis of just one characteristic of humanity such as its rationality. We need to know as much as possible about humans as animals and persons, and about the dialectic or tension between the form of the state and the individual consciousness. Anthropology and social psychology give some insight into this relationship, most encouragingly, that social context has a huge effect on individual behaviour, and most portentously, that the consequences of a particular social intervention are often unpredictable. The upshot is that the ideal state will manipulate the beliefs, morality and behaviour of its citizens whether or not this is planned, but that intentional manipulation at any level of detail will be problematic.
We now need to explore the details of (B) and (C), first by specifying what the design values are, then by considering the properties of human beings, who will be citizens of the ideal state.
I present design values - the 'seed concept of goodness' - as five assertions.
Assertion 1 Coherence
An inherent value of any design process is coherence.
Since design is understood to be rational, the process of design must be coherent - that is, non-contradictory. Similarly, the product of design (in this case, the ideal society) must be coherent. Were it not, we would have arrived at an incoherent solution through coherent steps, which is impossible. Coherence is a sufficient condition for communicability, meaning that the design process and the product of design can be explicated precisely. It is not, however, required that the design be communicated to the inhabitants of the ideal state. It may be possible to establish and perpetuate the solution without explaining it to the people who participate in it.
This assertion immediately rejects the following as possible supreme moral values in Utopia:
· A belief in the impossible.
· A duty to default on obligations.
· A desire to purge all desire.
· A will to powerlessness.
Moral rules such as
· Everyone should do whatever they like
are rejected by coherence if it is (contingently) true that sometimes people's desires conflict. I shall discuss this further in later assertions regarding human nature.
Assertion 2 Parsimony
An inherent value of any design process is parsimony.
This is an epistemological doctrine which does for design what Occam's razor does for analysis. It says that if two design steps or two design solutions are equivalent in every way (including full applicability to the problem), except that one has additional complications, then the simpler case should be adopted. Without this assertion, the number of possible designs would multiply without limit.
This assertion rejects many of the dystopias found in fiction. For example, imagine two states, both alike in beauty, peace and happiness. The only difference is that one has a yearly ritual of choosing one of its citizens by lot for torture or execution. Of these two states our design process would choose the one without the human sacrifice because it is simpler. If there were a third state where the yearly ritual chose one of its citizens to receive special privilege, this too would be rejected as non-parsimonious. Of course, as happens in fictional contexts, the fate of the chosen one may be essential to the well-being of the whole, but until we have some reason why this should be, we must apply the parsimony rule.
Parsimony favours solutions whose implementations are simple. Consider the following two rules:
· Everyone should try to make everyone as happy as possible.
· Everyone should try to make everyone smile as much as possible.
If all else were equal (which, in this case, we can be pretty sure it is not), the parsimony rule would favour the second of these over the first because the required outcome is so much simpler to measure.
Assertion 3 Stability
An inherent value of a design process aimed at a non-transient solution is stability.
We have not said previously that we desire the result of the process to be stable or sustainable. Yet it seems to be implicit in the requirement for an ideal society, since a society has a temporal existence. Therefore we desire a stable design process and stable solution. This emphatically does not mean that the solution must be static: a dynamic solution may be stable, if it adapts in such a way as to preserve the form of the solution. Change is allowed by stability, but the capacity for self-destruction is not. To give an illustrative example without prejudging solutions, in a pure democracy it is logically possible for the people to vote to install a tyrant and thus cede their democratic role to another form of government. Therefore, pure democracy cannot be a necessary part of any stable solution. On the other hand, we can imagine a solution in which democracy is a starting point but the solution allows for change, revolution and so on, much as Marx's model of social and economic history moves from structure to structure within a single theory. Equally, we can imagine a limited democracy in which a right to change the form of government is denied to the people. To some extent systems of constitutional law are established to limit democratic rights and guard against such instabilities.
Asserting stability as a design value therefore has an important implication for the Utopia design problem. Solutions must be robust to individual and group deviance. Long-term stability demands either that damaging deviance be contained, or that deviance be allowed (within the design) to set up new norms for the society as a whole when it becomes powerful enough.
Stability rejects a large sphere of possible moralities. In particular it rejects
· Everybody should strive to do whatever they like
As mentioned earlier, the reduced version of this rule 'Everybody should do whatever they like' is ruled out on coherence grounds because as soon as a single person's desire conflicts with another person's desire the rule cannot be fulfilled. But this expanded version is coherent, since it speaks only of striving, not succeeding. However, there is no reason why this rule should be passed on to others and thus perpetuated. Instead, there is good reason not to communicate it: our desires will sometimes conflict; since I am striving to realize my desires, I'd prefer it if you weren't striving to realize yours. Thus the moral rule will not be preserved. A state 'initialized' with this rule will degenerate into Hobbes' state of nature: 'Everyone does strive to do whatever they like', a statement which has no moral content.
Assertion 4 Closed and open definitions
The definitions of the terms used in the problem statement should be regarded as closed. Therefore any designed state should have the generally understood characteristics of a state and any designed morality should treat the sphere of action that morality generally treats. Terms not used in the problem statement are open to redefinition, that is, reinterpretation in the light of the design.
The first half of this assertion prevents solutions that sidestep the inherent properties of states and morality. For example, it rejects the design:
· The ideal state is one with no people in it
because it is generally understood that states contain people. Further this assertion rejects the anarchist solution:
· The ideal state is one that puts no constraints on its members
because such a state is not really a state at all. Similarly, the assertion ensures that the designed morality will have something to say about how people should behave, something that is not necessarily tied to pursuit of their own interests. This does not necessarily mean that morality must provide the same rules to everyone, merely that it provides an answer to each person regarding what that person's moral obligations are.
The second half of this assertion is a spur to freedom in design. We do not have to accept the conventional definitions of all terms, if our design makes them obsolete. For example, suppose we decide that in our Utopia, everything will be owned by the state. We may allow that individuals have entitlements to certain objects, but this does not correspond to ownership in the conventional way. Nonetheless, we may still use the word 'own' to refer to clusters of entitlements. Thus, if I say 'I own this object', I mean I have certain prescribed entitlements with respect to this object which is owned by the state. Then 'I own this land' means 'The state owns this land, but I am entitled to use it, subject to some restraints'. 'I own this piece of coal' means 'My entitlement includes the right to destroy this piece of coal to provide heat', whereas 'I own these $100 bills' may not include an entitlement to destruction, but only to discretionary spending. Similarly, 'I own my body' may or may not include entitlements to do various things with it. With this redefinition of 'own' it may become natural to say 'I own my children', since the entitlement to which 'own' refers may change with time. 'I own my children' means I currently have certain entitlements to them, for example, the right to insist they live with me until a certain age, so long as I do not mistreat them.
Assertion 5 Universality of process
The design process for an ideal state may legitimately follow processes developed for general-purpose problem solving.
Clearly we need more design methodology than the first four assertions provide. They specify limitations on design, but do not explain how to do it. Therefore I assert that we can apply problem solving schemes that have been developed in other spheres.
Problem solving as practised in the several branches of engineering, medicine, industrial design and law normally proceeds through four phases: analysis of the problem domain (including needs analysis, specification, identification of constraints and criteria), derivation of a solution (including generation and analysis of candidate solutions, solution selection, and optimization), implementation, and proving (including test, verification and validation). This process has analogues in many spheres that require systematic application of creativity, ranging from artistic performance and business practice, to animal husbandry and criminal investigation.
The important design phase for this paper is the derivation of a solution. Standard design practice is to open up the space of possible solutions as much as possible, so that many alternatives can be considered. This increases the probability that a good candidate will be selected. Some alternatives have already been suggested and rejected directly because of conflict with design values. We will see that others will be rejected by the constraints of designing for humans. There will, however, be a great many possible solutions that are not directly rejected. Each is a candidate Utopia, which, in a full design, would be compared with the others in order to select a shortlist. These would then be optimized and one finally chosen as the ideal. In this paper there is space only to consider one line of thought from the design values and constraints, which I will do below in the section 'Generating a Solution'.
The preceding assertions have to do with the design process and the form of the Utopia problem. They specify procedural and structural limits for the design, but do not say anything about the problem's practical content, which is how to bring people together into a state. I now address this need for contextual constraints by making empirical assertions regarding human beings.
Assertion 6 Interests
Humans believe they have interests, which they desire to realize.
Interests include: the prolongation of life, the avoidance of pain, the experience of happiness, the exercise of autonomy, the pursuit of other-directed objectives in accordance with personal belief.
It is possible (though I don't rely on this) that all human value can be summed up as success in realizing desired interests. Both the egoist, who looks selfishly inward, and the altruist, who looks selflessly outward, desire fulfillment of interests, and their realization is the consummation of that desire.
This assertion does not establish any moral framework, for it says nothing about whether the desire of humans for realized interests should be of moral concern or an interest of the state. It does however exclude certain kinds of solution that are not anthropologically aware. For example, rules like
· Everyone should suppress their beliefs about the subjective
· Everyone should suppress their desires
are rejected by this assertion which claims that belief in interests and desires towards those interests are inevitable in humans.
Assertion 7 Actions
At least some human actions are intended to increase the probability that that person's desired interests will be realized.
This assertion could be revised to express human agency more emphatically: Humans act to realize their desired interests. But I am not sure that this more forceful version is true. Even the adopted form of the statement assumes will, intention and agency, and so begs at least some questions about human personhood. But I cannot see making much progress towards the 'sphere of action that morality generally treats' (Assertion 4) without it. I need assertion 7 to establish a connection between interests and action. It is a very powerful assertion because it rejects moral rules like the following:
· Believe whatever you like, but act only in accordance with (some specified) morality.
Many of us carelessly believe in a rule something like this. But if a person can believe anything, they can certainly believe in interests that conflict with any specified morality. This assertion says that sometimes they will act in accordance with these interests, and thus against the morality. For example, someone may believe not only that the world is controlled by a camp of happy blue aliens from Mars, but also that conformity to the will of the aliens is the ultimate good. This belief is an interest (assuming 'good' here is used in a fairly conventional way as something to be desired). Because this person will (at least sometimes) act in accordance with the interest, there is absolutely no guarantee that those actions will be in accordance with the specified morality.
Assertion 8 Origin of Interests
Human interests are defined and desired according to:
· the probability of their realization,
· human nature, viz. the instincts, passions, emotions,
· metaphysical beliefs,
· other interests.
This assertion draws attention to the multiple sources of desired interests and implies that disentangling these may be problematic. Some interests, like the universal interest in having enough to eat, are explained simply and biologically. But more complicated interests seem to involve a hierarchy of relations between these sources. It is thus tempting to arrange interests into levels so that sophisticated interests like sporting prowess, romantic love, and political success are placed higher than eating, sex and resting. At this stage in design, we want to avoid constructing a hierarchy if it implies a particular relative valuation of the different interests. However, a hierarchy can be useful if it helps to show how a particular human allows these sources to control their interests. An example is when religious conversion reorients a person's worldview. Their interests become reshaped by their new metaphysical beliefs, so that, for example, they sacrifice a physical pleasure for more 'spiritual' contentment. In such a person, the metaphysical beliefs exert influence 'downwards'. To the starving person, on the other hand, metaphysical beliefs may be a luxury that they are fully prepared to adjust in order to meet their more basic (but, in terms of influence, now higher level) interests.
The examples I gave under Assertion 6 (prolongation of life, avoidance of pain, etc.) are all provisional and therefore potentially subject to revision. If, for example, prolongation of life becomes impossible, the human can revise their set of desired interests to exclude it. (It may still be a desire, just as invisibility and teleportation may be desired, but that it can be relinquished as an interest is a remarkable human ability.) Similarly, as I've suggested above, the precedence of a person's interests can change. These factors lead directly to:
Assertion 9 Indeterminacy
The interests of any person, and the interactions of those interests, are indeterminate and unstable.
Here I make what looks like a profound statement about necessity, free will, and the philosophy of mind. But like all my assertions about humans, it's really just an empirical observation. You can never be sure what another person's interests are. What's more, you can't be absolutely sure about your own set of interests. Although you desire them, and you therefore have a much clearer picture of them than anyone else does, the fact that they interact and that their sources are constantly presenting new information means their precedence is shifting and uncertain, even to you.
Now it may happen someday that we can look inside a mind and read off that person's interests. If I were spiritual I might fear that event as the day that true humanity or true freedom is lost. On the other hand I might look forward to it as the day when, plugged into a total immersion virtual reality experience machine, a human can realize all their interests, prevail over all challenges, and perhaps even develop interests of transcendent sophistication. Neither sentiment applies here though, for, by the time minds can be read with this degree of precision, Utopian designers will be too close to omnipotence for any of this design process to matter.
To say that a person's interests are indeterminate and unstable is not to say they are chaotic. Indeed, many interactions between people are meant to influence or even determine the other's interests. Obviously some of these succeed; otherwise there would be no advertising, advocacy, bribery or extortion. The point being made in Assertion 9 is that the complexity of interests makes all such interventions only probabilistically successful, not certain. Even when they succeed, they only modify the framework of interests, not rebuild it, and they certainly don't fortify it against new influences from other sources. Soma, indoctrination, peer pressure, authoritarianism and MTV may suppress the indeterminacy effect, but they don't extinguish it. If there were a foolproof mechanism for interest control, we could embrace it as a way of enforcing our Utopia. But there's always the risk, until we know how interests really work, that someone could develop rogue interests outside our control. We will therefore have to continue on the assumption that interests are, at least to some extent, indeterminate and unpredictable.
Assertion 10 Morality/Interest Tension
The morality of the state is unavoidably in tension with human interests.
Different people have different interests, and the interests of any one person are unstable. So there is always tension between sets of personal interests. The morality that regulates this cannot align with all the different parties. Instead it provides a depersonalized structure for evaluating the conflicting claims. Therefore morality itself is unavoidably in tension with interests.
The interdependence of law, custom, morality and the state (assumption (b) above) means that human interests permeate all four. Law regulates the actions that result from conflicting interests; custom engenders social attitudes towards interests; morality justifies law and custom by a valuation of depersonalized interests. The state rests on law, custom and morality, so is deeply connected to the domain of (conflicting) interests.
From the state's point of view, morality/interest tension is only a problem when it leads to instability. For example, when the state's capacity to contain actions contrary to its morality is exceeded, the rule of law breaks down, leading, ultimately, to the collapse of the state. This is one mechanism by which a democratic (or partially democratic) state can be replaced by tyranny, for example by military rule. It may also be the way that fundamentalist theocracies are undermined by liberalism. A large subsection of dystopian fiction is concerned with the clash between the individual and the state. 1984, Brazil, Dark World, and, to some extent, Brave New World are examples. When the individual in these fictions is successful (e.g. in Logan's Run), the destruction of the state results. Clearly the Utopia designer is concerned to prevent threats from either individuals or groups that could be so destructive.
Thus the state must address the questions: What constitutes deviance from morality/custom/law? More fundamentally, how big will be the sphere that the state regulates (i.e. how numerous the opportunities for deviance)? What forms of restraint on deviance are appropriate? How will custom be used to discourage deviance?
If challenges to the state's values and structures are codified, they can become a 'cause' that transcends particular individuals and actions. If the cause is allowed to prevail, then the original state may be destroyed. A liberal democratic state may fail to give its ideals sufficient legal force to prevent their distortion and eventual overthrow. A poignant example of this is the fate of the Weimar Republic of Germany, founded on a constitution that was reckoned 'the most liberal and democratic document of its kind the twentieth century had seen, mechanically well-nigh perfect, full of ingenious and admirable devices which seemed to guarantee the working of an almost flawless democracy'. Yet it was not good enough to avert the eventual disaster of Nazism. William Shirer comments, 'Article 48 of the constitution conferred upon the President dictatorial powers during an emergency. The use made of this clause by Chancellors Bruening, von Papen and von Schleicher under President Hindenburg enabled them to govern without approval of the Reichstag and thus, even before the advent of Hitler, brought an end to democratic parliamentary government in Germany'.
Consideration of this threat highlights questions about the degree of freedom and tolerance allowed by the state. Does the state promote all kinds of free speech? What sorts of political restructuring are allowed? What are the moral bases of a constitution?
Assertion 11 Morality/Interest Alignment
The more that morality and personal interests are aligned, the more stable both will be.
Assertion 10 identified the unavoidable tension between morality and human interests. Now I claim that minimizing this tension increases stability. One approach to this is to aim for a 'small' morality. Libertarians argue that in a minimal state, morality and the realization of personal desires are as closely aligned as possible because morality demands relatively little and personal interests are defined in terms of extensive freedom. But even in such a state, morality can only regulate conflict, not overcome it.
An alternative approach for any state, minimal or not, is to create (or identify) human interests that dispose people towards morality. Interests are malleable (assertion 8), though not fully controllable (assertion 9). A morality that provides reasons for moral action in terms of people's own interests (i.e. self-interests) may be a powerful influence for reducing morality/interest tension. It promotes alignment by enlisting interests in its purposes.
The success of religion shows that a morally-determinative belief system can be established at a high (influential) level in people's hierarchy of interests. There are varieties of belief, but only two religious reasons to act morally. Both have motivating force, and both have secular analogues. The traditional Deist argument, used by More, is that God will reward goodness and punish evil in an afterlife. Thus there is ultimate justice, and the omniscience of God guarantees we will not escape it for good or ill. Fear and expectation of reward provide a strong self-interest motivation to act morally. The second religious reason is more relational and appears in revealed religions - Judaism, Christianity and Islam - that emphasize a personal God. The idea is that God has placed us according to his purpose; he is good and thus intends good for us; acting morally is the means given us to live according to his purpose. Thus by acting morally, whatever happens, we are walking in a way intended for us by God who is ultimately and finally benevolent. This is not the same as the first reason because it is the relationship with God, which permeates the whole of life - including worship and prayer as well as action - that is of final value to the person.
Religious reasons for moral action rest on demanding metaphysical assumptions. In the first case ('Do X or you will burn in hell; do X and you will receive everlasting joy'), belief in a moral God and an afterlife is required. The second case ('Do X and God will give your life true meaning') adds the belief that God responds personally to individuals, though it arguably takes away the absolute necessity of an afterlife. Both have often been successful in aligning interest with morality, and they retain considerable power.
Because we are doubtful designers, we would prefer to rest our Utopian morality on as few metaphysical assumptions as possible. The existence of God is a particularly limiting hypothesis: if accepted, it is almost guaranteed to generate implications for the state and morality that will muddy their design. We would prefer a secular morality, together with a worldly reason to act morally. Luckily, we can learn from the enormous persuasive success of religious moralities without taking on board their metaphysical assumptions.
The two religious models suggest two broad secular categories of self-interested reasons to act morally. These are (a) contractarian reasons and (b) self-augmenting reasons. Examples of contractarian reasons are:
· Do X and people will honour you.
· Do X and you will receive reciprocated benefits.
· Do X or be punished (by the law's penalty or by custom's stigma).
Examples of self-augmenting reasons are:
· Do X and you will demonstrate your true nature as an autonomous, rational agent (self-rule)
· Do X and you will demonstrate your true potential/creativity/will-to-power (self-creation)
· Do X and you will be part of something bigger than yourself (self-transcendence)
· Do X and you will accrue value to yourself, associated with some ultimate 'meaning'.
The problem with the contractarian reasons for being good is that they rest on other people's knowledge. People who do not know of your virtue/vice will not honour/dishonour you. Such was the argument of Socrates' interlocutors in their thought experiments in The Republic. Reciprocated benefits rely on there being aware and capable reciprocators, and the law's threat relies on detection and proof. Without the omniscient God of religious morality, these reasons are contingent on being found out. The problem with the self-augmenting reasons, on the other hand, is their perilous closeness to metaphysics - that uncertain hazard we're trying to avoid in our skeptical design. Whatever reasons we use to motivate interests towards morality, we need to be aware of these problems.
An assertion concerning the environment
Assertion 12 Environment
The ideal state will be subject to nature, acted on by its geography, climate, ecology, etc. If it does not include everyone in the world, then it is also subject to influences (toxic and benign) from other states and persons.
By this assertion the design is made subject to natural and political contingency. Just as earlier assertions constrained it to apply to states as generally understood, and people as they generally are, this assertion sets the state and its members in a real-world context.
Nature can destroy the state. What kinds of guards against this are appropriate? Consider attacks that nature can inflict on individuals through disease. Biological science equips states to guard themselves against disease. At the same time, it opens the possibility of developing deadlier organisms, and of a rogue member of society seizing ultimate destructive power. This is the general problem of knowledge discovery, at least in science. What should the state do? Foster science? Channel it? Restrict it?
And what of threats from other states? Is it possible to have a Utopia of pacifists? This depends on the nature of the world as a whole. Until recently, the threat of physical attack meant that a state had to provide for its own defence. More's strategy for Utopia was to avoid conflict and to prefer propaganda weapons to physical ones. If all else failed, the Utopians would deploy mercenaries and then, finally, their own citizens as soldiers. But perhaps in a world where civil war has become the norm, inter-state war unprofitable and world war unthinkable, protection against outsiders is less of an issue. One could imagine that an ideal state would not need defence because it would be ungovernable by an invader.
The ideal state will have a value system. But so will its non-ideal neighbours. If these conflict, and the neighbours' values attract some of the ideal state's citizens (by appeal to their interests), then there is opportunity for discontent, unrest and eventual instability. For example, if the ideal state has no concept of private property yet also promotes free mobility to other states, what happens when the enticements of ownership are dangled before the citizens? Could a neighbour offer incentives to Utopia's most talented people to lure them away from the egalitarianism of the ideal state? What then should the state do? Abolish its egalitarianism or establish border controls?
We see that environmental and political contingency generate many questions for the designer, just as the contingency of human interests did. Each question leads to a cluster of possible answers, and thus opens up the design space of solutions.
Generating a Solution
I have said that the design of an ideal state should be coherent, parsimonious, stable, consistent with the generally understood meaning of 'state' and 'morality', and accomplished using generally practiced design methods. I have noted that humans believe they have action-driving interests, formed from various sources, which interact with each other and with morality, and that the environment acts on the state.
From the twelve assertions about design, humans and the environment, we can generate possible solutions for the ideal state problem. As suggested in assertion 5, it is valuable to generate many candidates, for then the probability of choosing a good one is increased. Yet we don't want to generate an abundance of solutions that will quickly get rejected by another design value or the constraints of the problem. A good way to proceed in this situation is to combine the most powerful constraints into pointers to promising solutions. This is the approach I will now take, mating particular assertions to form a stronger constraint. But I will follow only one possible route through the list of assertions to one particular set of candidate solutions. There is only sufficient room for this single example here, but it both illustrates the general process and has the virtue of yielding a Utopia that seems, to me, plausible.
The stability requirement is that the solution (state and morality) should be sustainable, that it perpetuates itself. As I remarked when discussing Assertion 3, stability is not the same as stasis. Considering assertions 6 to 12, we can see that stability actually excludes stasis. The contingent nature of human interests and the placement of the state in the world means that the state must be able to adapt to changes either in its members or in its environment. We could use this to evaluate designs as follows: Consider a possible candidate state; consider the possible threats to that state (both from its members and from the environment); consider the consequence on the state of those threats being realized. If the result is the destruction of the state, or its transformation into something different from what was designed, then the proposed solution is unstable. The solution will only be stable if it adapts to the new situation, preserving its essential features. But even if we enumerate possible threats to the state (revolution, natural disaster, external attack, etc.), and try to imagine their consequences for our design, we can't be sure that we've identified the really potent threat that will eventually undermine our state. We can't be sure because human interests and the environment are indeterminate. This forces us towards the conclusion that we must value adaptability for its own sake.
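The evaluation procedure just described is, in essence, a generate-and-test loop: propose candidate states, confront each with an enumerated set of threats, and reject any candidate that fails to survive them all. Purely as an illustration of that loop's shape (every name, state and threat below is invented for the sketch and carries no philosophical weight), it can be caricatured in a few lines of Python:

```python
# Toy sketch of the generate-and-test evaluation described above.
# All states, features and threats here are invented for illustration.

def outcome(state, threat):
    """Invented rule: a state survives a threat only if it possesses
    a feature that lets it adapt to that threat."""
    answers = {"plague": "science",
               "invasion": "diplomacy",
               "revolution": "free_speech"}
    return "survives" if answers[threat] in state["features"] else "destroyed"

def is_stable(state, threats):
    """A candidate is stable only if it survives every enumerated threat."""
    return all(outcome(state, t) == "survives" for t in threats)

candidates = [
    {"name": "Mootopia", "features": {"pasture"}},
    {"name": "Trutopia", "features": {"science", "free_speech", "diplomacy"}},
]
threats = ["plague", "invasion", "revolution"]

stable = [s["name"] for s in candidates if is_stable(s, threats)]
print(stable)  # ['Trutopia']
```

The sketch also makes the text's closing worry concrete: the filter is only as good as the threat list, and since human interests and the environment are indeterminate, no finite list can be guaranteed complete - hence the move to valuing adaptability itself.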
Thus I take the first step towards extending 'good' beyond the design values, and assert:
Assertion 13 Adaptability
The ideal state will be adaptive - the more adaptive the better.
The italics for better in this assertion are scare italics! They let you know that I have here taken a big step towards defining morality, law and custom. This step immediately leads to chains of implications. For example:
· The state should continually be learning how to adapt itself to natural and psychological contingencies. It must therefore foster the study of natural science, psychology, politics, philosophy and so on.
Ø The state's learning must happen through people in the state learning. Therefore the state should stimulate, foster, and exploit the learning of its people. This implies the provision of universal education, though arguably resources could be directed differentially towards learners according to their strengths and weaknesses.
Ø The state should foster the study of how people learn.
· To be adaptive to changing circumstances, actions of the state should be reversible, where possible.
Ø The form of government cannot cede its power to a different form of government (for then the action could not be reversed).
Ø The form of government is a non-adaptive part of the state.
Ø The actual membership of government should be adaptive.
Ø There should be mechanisms for the appointment and removal of leaders.
Ø Power within the state should be distributed.
Ø Actions of the state should be visible so that mistakes can be identified and problems corrected.
Ø The state should establish laws that apply the principle of reversibility of action.
Ø Acts that involve destruction without replacement should be forbidden.
Ø Murder should be illegal.
· The question of the Good as a metaphysical issue will remain open. The state's valuing of adaptivity will influence the understanding of the Good, but by no means determine it. Changes in understanding of the Good may shape the state.
· The non-adaptive parts of the state and morality need to be things about which there is as much certainty as possible, for they will remain vulnerable to threat. They should therefore be as small as possible.
Ø The adaptive parts of the state and morality should be as big as possible.
Ø An appreciation for the difference between the adaptive and non-adaptive parts of the state and morality must be promulgated through morality, custom and law because:
· people should have a high level of commitment to the non-adaptive parts
· people should be encouraged to be skeptical about the adaptive parts, not only because they are subject to change, but also because this skepticism will increase the system's adaptability.
Ø People will be skeptical about the state's current concept of the Good. The state will in this sense be a Doubtful Utopia, because doubt about the state's ideality will itself propel adaptation.
· The state should avoid terminal measures against deviance, where this deviance threatens only the adaptive parts of the state.
Ø Free speech should be allowed.
Ø Neither retributive nor restorative justice is ruled out by adaptability, so the punishment of deviance is a possibility, but knowledge of the degree of adaptability on a particular question (i.e. how morality/custom/law have changed and are likely to change relative to a particular crime) should play a role in determining punishments.
Ø No punishment should be more severe than the one for a crime that threatens the non-adaptive parts of the state.
It may be interesting to list some of the things that are not implied by adaptability:
· Equality of people
· Existence of private property
· Anything about character, virtue, generosity, compassion, happiness, envy, vanity, resentment, love, sex, or rock and roll. (Drugs, however, may sometimes interfere with the state's adaptability, and are therefore likely to be regulated.)
In these areas, as in many others, the state so far designed leaves questions open. It may be that careful thought about the implications of adaptability provides answers, or that other design values and constraints, when brought together, provide answers, or that the questions should simply remain open.
Consideration of the adaptability assertion has led us to the idea that the non-adaptive core of the state and its morality should be a minimal one. But what should this core look like? Could it, perhaps, be dispensed with completely? The answer to this is no, because the whole edifice is built from design values, and if these are removed we are back to the chaos of allowing any arbitrary (adaptive) morality. I have also argued, in the second chain of implications above, that the need to make governmental decisions reversible means that the form of government should ultimately be fixed and therefore a non-adaptive part of the state. But could we keep a small state constitution in the core, and nothing else? I think this too is inadequate. The discussion under Assertion 11 showed how powerful metaphysical beliefs can be in orienting people's interests. For stability's sake, the state should promote reasons to act morally that can be influential at a similarly high level in its citizens' interest hierarchies.
The secular contractarian reasons to be moral discussed under Assertion 11 may all be deployed to promote law-keeping, and thereby give custom and morality a contractarian tinge. Of the self-augmenting reasons, the one that seems to me most consistent with the design thus far is that by acting morally you become part of something bigger than yourself. It needs only the belief that other people are the same kind of creatures as you to infer that being part of the fulfillment of collective interests is bigger than being part of the fulfillment of just your own. The state's morality defines collective interests, so the state may promote loyalty to itself as a self-transcending individual interest. This can be summed up in a belief that the survival of the adaptive state is of supreme value. Now this is not nationalism, nor even patriotism, because it does not focus on this state, but this form of state. It provides us with an interesting conclusion to our preliminary exploration, for we now found our ideal set of values (and therefore morality) on the survival of a particular form of association. As a central belief it has both consequentialist and deontological associations. It defines a consequence - the survival of the adaptive state - as value, i.e. as utility. But the nature of this utility is that it doesn't give direct guidance about behaviour in day-to-day life; instead it establishes a duty towards the form of the state. Derived from this central belief will be a valuing of the things the adaptive state implies, for example flexibility, tolerance, inquiry and learning. Is there enough here for a system we could call morality? I think so. Would it be a morality we'd like to have? At present, the answer must be 'Doubtful', because there is so much still to consider, to learn, to decide and to design.
 See Candide, chapter 28.
 Though it's interesting to note that the word 'Dystopia' was coined by J.S. Mill, who was certainly not a moral skeptic!
 This is probably because irony is so handy for signalling doubt. Even for writers with earnest utopian sentiments, irony presents a tempting escape route when their creation looks like getting out of control.
 J R Smallwood, 'What Newfoundland Might be Fifty Years Hence!', The Evening Advocate, St. John's, Newfoundland, Issues from Jan 22 to Feb 3, 1923-24.
 Political theory adds a final part to the problem: '(e) how do we move from where we are now towards the ideal design?'
 The anarchist argument supposes that tearing down the state (and therefore the whole edifice specified in (b)) empowers the individual. It ignores the extent to which the actions of an individual outside the state must be primarily reactive, because the state's protections of agency and self-determination against the will of others and the environment are absent. The argument that rational design cannot create a state is fatalist and indirectly self-defeating. It requires identification of the mechanisms that have shaped society, proof that they are necessary mechanisms, and proof that they cannot be applied through rational design. This is as unlikely as saying that natural selection in evolution precludes rational selection in breeding.
 Both Plato's Republic and More's Utopia design into their states certain moral beliefs (consider, for example, the 'noble lie'). Both appreciate the complexity of morality, but are confident of its objective reality.
 See J. L. Mackie, Ethics: Inventing Right and Wrong, Penguin, 1977.
 But what if morality is hidden in the design values? I may not be aware of where a seemingly pure rational statement is tinged with moral assumptions. In particular, if I present amoral design steps and then say one ought to follow them, have I not made a moral statement? Even if I say that these are the amoral steps that I did in fact follow and here is the result, I still rely to some extent on your trust that I make my statements in 'good faith'. Such problems beset any system that claims to derive an 'ought' from an 'is'. But I think my approach is less vulnerable than many others, since I can demonstrably derive diametrically opposed moralities from most of my design values.
 See, for example, Lee Ross, Richard Nisbett, The Person and the Situation, Temple University Press, Philadelphia, 1991.
 For example, see Ursula K LeGuin, The Ones Who Walk Away From Omelas, in Alberto Manguel, Blackwater: The Anthology of Fantastic Literature, Picador, 1983, and Shirley Jackson, The Lottery, in R. V. Cassil, The Norton Anthology of Short Fiction, Norton & Co., 1986. William James used the idea of such a flawed Utopia in The Moral Philosopher and the Moral Life, where he looked back to Dostoyevsky's The Brothers Karamazov and the Biblical scapegoat for precedents.
 Such a reason could be 'Humans need some outlet for their bloodlust'. The validity of such a reason rests on empirical facts about humans, which I will come to shortly.
 Here I follow Derek Parfit (Reasons and Persons, Oxford University Press, 1984) in holding that the realization of present interests is the only rational driver of prudential action. Alternatives, such as 'My rational ultimate aim is that my life go as well for me as possible' are analysed in detail by Parfit, and to my mind, he shows conclusively that they are self-defeating.
 J S Mill used such a hierarchical framework of value in his brand of utilitarianism.
 Clearly there are some interests that are abiding, such as the interest in having enough to eat. Perhaps there are interests that are both abiding and very high level, such as 'Autonomy'. The argument for such an interest as supreme and non-negotiable would come by deduction from premises about rationality (as in Kant) rather than by observation of people's actual behaviour, which displays inconstancy. Because I am trying to base my assertions about humans on what is observed, I take the indeterminacy of interests as more fundamental than any statement of what interests are implied by virtue of the possession of rationality.
 I'm happy to have got this far without putting a value on personal agency. But I can't let this paragraph pass without making some comment about traditional moral theory. In principle, the Utilitarian cannot object to 'interest control' if, by manipulating individuals' interests, the state can ensure those interests are fulfilled, and that each person thereby maximises their perceived utility. If we feel any hesitation about the strategy of controlling a person's interests (even if it makes them much happier), then we are revealing a Kantian sensibility. Personally, I claim to have a Humean view of ethics, but I find external interest control repugnant, so I guess I have Kant in the closet.
 W. L. Shirer, The Rise and Fall of the Third Reich, Simon and Schuster, 1959, p 56.
 Such as that of Robert Nozick in Anarchy, State and Utopia, Basic Books, 1974.
 Arguably there is a third religious reason to act morally: gratitude. This is emphasized particularly in Christianity, where the Biblical writer Paul, and subsequently Augustine, Luther and the Protestants, see it as consequent on true belief. It also appears in some forms of Buddhism. In this model the moral action of a human is never good enough for God. However little sin there is in a person's life, it is enough to separate them from God. Thus, trying to earn eternal life according to the first religious model is futile. But in Christ, God reconciled us to himself. Jesus gave himself (an act of love) as an atoning sacrifice (to pay the price of the sins of the world demanded by justice). By the free gift of his son, God restored us to fellowship with him. In this model, forgiveness for wrong comes associated with cost to the Saviour, and appropriation of that forgiveness is by faith (belief/trust) in him, not by moral acts. Moral acts ensue, but now as an expression of gratitude to God, not out of fear or doubt about eternal destiny. Having said this, following conversion, the quality of relationship with God is contingent on moral behaviour just as in the second religious model, so I believe this third alternative is subsumed in the second so far as reasons to act morally are concerned.
 I use the cliché 'unthinkable' as shorthand for: The ideal state must play a role in limiting the chances for global catastrophe. Assuming that it is not itself the possessor of weapons of mass destruction, the best it can do is to offer its resources to the protection of the whole world against these weapons. Part of this effort will be responding to the posture of states for whom their use is not unthinkable. If there ever were a case for Utopian diplomacy, this is it.
 It could be argued that this will make all possible designs unstable -- after all, we can imagine any state being invaded by a virulent disease which kills everyone -- but clearly a state that takes steps to protect itself against disaster will be more stable than one that does nothing.