All posts by anomalogue

Speculative fiction

Michelle Alexander’s research on racial bribes gave me an idea for a story.

In an effort to protect their superior status and economic position, the planters shifted their strategy for maintaining dominance. They abandoned their heavy reliance on indentured servants in favor of the importation of more black slaves. Instead of importing English-speaking slaves from the West Indies, who were more likely to be familiar with European language and culture, many more slaves were shipped directly from Africa. These slaves would be far easier to control and far less likely to form alliances with poor whites.

Fearful that such measures might not be sufficient to protect their interests, the planter class took an additional precautionary step, a step that would later come to be known as a “racial bribe.” Deliberately and strategically, the planter class extended special privileges to poor whites in an effort to drive a wedge between them and black slaves. White settlers were allowed greater access to Native American lands, white servants were allowed to police slaves through slave patrols and militias, and barriers were created so that free labor would not be placed in competition with slave labor. These measures effectively eliminated the risk of future alliances between black slaves and poor whites. Poor whites suddenly had a direct, personal stake in the existence of a race-based system of slavery. Their own plight had not improved by much, but at least they were not slaves. Once the planter elite split the labor force, poor whites responded to the logic of their situation and sought ways to expand their racially privileged position.

It struck me that a strategy like this one could be used in any number of political situations, including the one we are in today.

This inspired me to outline a speculative fiction novel set in contemporary times, where this strategy of social control is deployed by a fictional technocratic overclass.

The protagonist belongs to this overclass, which controls much of the world’s wealth and nearly all public and private institutions. Though few outside of the overclass fully understand it, the interests of the overclass are at odds — (very sharply at odds, as we will see) — with the interests of a much larger underclass.

The underclass is highly stratified and factionalized, and preoccupied with its own internal conflicts, animosities, suspicions, resentments, prejudices, etc. Its factions clash incessantly with one another, locked in mutual hatred.

The overclass, on the other hand, is both rare and carefully sequestered from the rest of society. The overclass, therefore, exists mostly as an abstraction, while the factions of the underclass are very much real to one another, to the exclusion of most other concerns. And the overclass does what it can to keep the conflicts going, in order to keep the underclass occupied and divided.

Being a relatively small class whose collective interests diverge from those of the majority is always perilous. The overclass is highly educated, and is therefore acutely aware that, even in the most favorable circumstances, if overclass hegemony were effectively questioned and challenged, it could be overthrown.

But in a democratic society, conditions are unfavorable. If the underclass were to achieve any self-consciousness and solidarity, it would vote the overclass out of dominance, especially if it became aware of the overclass’s long-term projections and class preferences.

If the underclass were to resist, the overclass would have no legitimate way to exercise the kind of coercive measures traditionally used to contain such rebellions. The overclass is still — (at least in the near future) — dependent on the voluntary or at least docile involuntary cooperation of the underclasses.

But — (as we will learn) — this dependence is nearing its end. Things are about to change drastically, and this is the precise point of the class conflict: technological advances will, within a couple of decades, turn the tables on the underclass. Robotics will render the manual labor of the lower-underclass obsolete. And shortly after, artificial intelligence will render the cranial labor of the upper-underclass obsolete as well.

This is fortunate, at least to the overclass, because human life on Earth is growing unsustainable at current population levels. There are too few resources, being consumed too rapidly by people having too great an impact on the environment. If fewer people are needed to sustain the quality of life for the overclass, the population can be safely decreased to a sustainable or even optimal level.

So, dependence on large masses of underclass people who can outvote or overthrow the overclass is a temporary condition. But this is only the case if trends continue uninterrupted, and that is far from guaranteed. Most troublingly, as the inflection point nears, the writing on the wall — the hidden-in-plain-sight endgame of the overclass — becomes increasingly legible to those who will certainly revolt if they read it.

This creates an urgent need to drive even wider divisions, invent more dramatic and entertaining distractions, and find new justifications and means for controlling underclass behaviors — all without disturbing progress toward those conditions where the underclass can be largely obsoleted and irreversibly dominated.

So, I’m imagining the overclass might use racial bribes, or identity bribes, as one of several tactics for dividing the underclasses at multiple subclass fault-lines. It might also set the upper and lower underclasses against each other over cultural issues, giving special prestige and privileges to the upper-underclass and allowing them to humiliate and inflame the lower-underclass. Politics will become a never-ending circus and obsession, while technology continues to progress toward the power inflection.

I’m not sure how to end it. I have a couple of possible endings.

The most tragic, perhaps sadistic, Orwellian ending is one where an attempt to stop the overclass fails, thwarted by the “professional class” of the upper-underclass itself. Then they discover that they are now the underclass, utterly and hopelessly dominated by a thick layer of AI and robotic management.

Or we could go the Huxley route, and have the overclass succeed, leaving a drugged, docile, but content underclass skeleton crew to do whatever AI and robotics cannot, which tends to be precisely the interesting part of the work. Generations of couples opting into permanent birth control (and enjoying generous UBI bonuses) have decreased the population to an environmentally sustainable level, and the remaining population has been selected for STEM abilities. They are given intensive training to prepare them for careers in the kinds of technological innovation still beyond the ability of AI: for instance, engineering AI that can innovate beyond all human ability.

Or maybe I’ll have the plan backfire, and have the controlled fire set in the underclasses explode into an uncontrollable inferno that takes down all of civilization, maybe starting gruesomely with the overclass.

Or maybe I’ll have a foreign power, under the control of several members of the overclass — perhaps rulers of an already undemocratic nation — exploit the manufactured chaos and distraction of a still-democratic nation in the midst of an upheaval to stage an old-school military invasion. One overclass faction ends up dominated or destroyed by another. That might be interesting.

Or maybe I’ll end it with an underclass awakening, resulting in a perfectly peaceful neutralization of the overclass by democratic means.

Maybe this would be the best ending: I will have the hero wake up from dreams within dreams, each a different one of the scenarios described above. I could publish five editions with different waking orders, that leave off with the hero about to wake up again into who knows what history. I could have the versions shuffled so there is no official ending.

Please do not decenter yourself

Decentering one’s self or one’s identity as a response to one’s former egocentrism or ethnocentrism is just this year’s model of altruism.

Altruism is benevolence modeled on a stunted vision of individualism, which it tries to overcome by simply inverting it: Selfish people care about themselves at the expense of others, so unselfish people care about others at the expense of self.


We’ve experienced the consequences of the altruistic ideal in design.

In the late 1990s and 2000s, user experience design (UX) set out to help organizations stop being org-centric, and instead to be user-centric or customer-centric. Organizations that listened to us invested in research to find out what their customers wanted them to be, and tried to become that. And everyone got told approximately the same thing, so wherever UX did its thing, organizations started looking expertly, unobjectionably, and blandly alike. There was nothing wrong with the solutions — UX had seen to it that all flaws were removed — but there was nothing spectacularly right, either.

The solutions were well-informed, but poorly-inspired.


I want to argue that an impoverished understanding of personhood is at the root of the uninspired, uninspiring, unobjectionable but bland products of UX.

But I want to take it further and claim something more general and consequential:

The impoverished understanding of personhood that belongs to altruistic ethics is responsible for uninspired, uninspiring, unobjectionable relationships incapable of sustaining personhood.

The high divorce rate, the empty depravity of hookups, the shallowness and fragility of friendship, the feeling of victimization and oppression that motivates so many young activists to hunt down and punish whoever is responsible for one’s own bad experience of life (and, it appears, one’s own self-contempt) — all these are caused by a theory and practice of personhood that can only produce empty relationships and selfless, decentered alienation.

This nothingness at the center where somethingness ought to be — nihility — is not neutral or numb or Buddhistically empty or void. On the contrary, nihility torments, aches, rings, glares, and stinks in its absence.

Our last two generations were aggressively indoctrinated in this decentering, altruistic ethic of goodness. They, in turn, are replicating it everywhere they can, motivated by intense resentment toward a world that has put them in this state. Or, as they prefer to put it, out of a selfless love for the oppressed, with whom they identify.


I propose replacing this altruistic vision of personhood — self as an inert subjective object, a discrete, body-sized soul which moves around in space expending its limited resources caring for its own self or caring for other selves — with something radically different. I propose that we see selves as radiant centers, comprising smaller radiant centers, and contributing to larger radiant centers. A person is one unit of such centers, possessing a sort of center of gravity, based on the dynamic arrangement of centers at any given moment. These centers, being radiant, extend in their being outward. In other words, they exist. Ex- “out” + -ist “be”.

But these radiant centers, with their own personal “I” center of gravity, also constitute larger units of being, and these larger units are relationships. These “we” relationships change the dynamics within the person, and bring that person to exist within the larger being, as a member of the relationship as well as a person.


The meaning of love changes with this reconception of personhood. Love is not so much a love of the other as other, but of the other as a fellow participant in a relationship that is the next scale upward of oneself — a larger self in whom one is a participant, a participant in something real and transcendent to self, within whom one subsists as oneself. In the altruist conception of love, the other is the object of one’s love: “I,” the subject, love “you,” the direct object of the love. In this new understanding of love, love is an enclosing subject within which love happens between persons.


The meaning of empathy — another favorite word among altruists — changes with this reconception of personhood.

Empathy becomes a participant’s direct perception of the state of the larger beings of which it is a part. It is felt via the shifting dynamics of radiant centers within oneself in response to the changes of surrounding transcendent centers. Empathy is felt, but we lack language for its experiences — we have no “red”, “yellow”, “green” or “blue” — and so the speaking part of our mind (the only part of our mind the speaking mind acknowledges) refuses to acknowledge the reality known to empathy. Empathy is a sixth sense — a mode of perception, experienced with immediacy — and it is undeniably real to those who accept its reality.

Empathy is the furthest thing from intensely imagined feelings of others, or the reaction one has to stories of other people’s suffering. What is called empathy today is more often just emotions generated by vivid imaginations, the virtue of avid readers of sentimental fiction.


A lot of pretty-sounding Jewish and Christian platitudes make very new sense when heard with ears that hear this way. In marriage we become one in flesh. We are to love our neighbors as ourselves. The Christian idea that the Church is the bride of Christ. I even recognize this vision in Alain de Lille’s formula: “God is an intelligible sphere, whose center is everywhere and whose circumference is nowhere.” Seen this way, each center is one of infinite sparks of God, who approaches God by participating in ever-expanding nested scales of radiant being.

So, forget altruism. We do not give ourselves up in order to have the other. We give up the limits of our discrete, body-sized soul in order to participate in larger and larger personhood, and it is this alone that makes life lovable, because this participation as persons, in new, larger persons is what love is.


By this logic:

  • We should stop telling our children “You are not the center of the universe.” Instead tell them “You are not the only center of the universe.”
  • We should stop doing user-centered or customer-centered, or any-centered design. A better ideal is polycentric design.
  • We should not decenter ourselves. We need our centers! We should polycenter ourselves.
  • We should absolutely not tell other people to decenter themselves. We may invite them to polycenter — as I am doing here, now.
  • We should not express love as “Not for me, but for you, alone.” We should instead express it as “Not for me, but for us, together.”
  • We should not treat gifts as transfers of ownership from giver to receiver. We should instead see gifts as investments: what did belong to me, alone, now belongs to you as a member of we.
  • An identity should not be viewed as a classification that makes one person the same as another. Identity is identifying oneself as a member of a group, within whom one subsists. Identity is something we do, and it is not something one person can do to another — at least not respectfully.
  • We are never more self-centered in our small body-sized self than when we refuse to hear another perspective offered to us in good faith, but instead cling to our own omniscient already-knowing. Clinging to altruism does not make us less selfish, it destroys our selves, the very being who can love and be loved.

Following a reconception

Conceptions are holistic. They exist in potential as wholes. But conceptions manifest serially.

Conceptions are acquired by following a series of mental movements, and the very moving is the knowing. Conceptions are demonstrated by the one conveying them, and imitated by the one receiving them.


Follow what I’m saying and see if you pick up what I mean.

First, step by step, recited, slowly.

Then, step by step, remembered, faster.

Then step by step verbally guided, but fluidly.

Then performed fluidly, silently, intuitively.

Then performed as habit, done effortlessly.

Then done automatically, second-naturally.

Finally, forgotten into reality itself, a part of oneself and one’s world: enworldment.


Movements of conception can be embodied in dances, languages, arguments, stories, verses, visual sequences, melodies, rituals, customs, practices, collages.

Anything that can be followed, then reproduced, has within it a conception that makes it knowable.


Philosophy has been compared to dance.

Philosophies are designed, but not like machines made of parts.

Philosophies are designed like choreographies for different genres of improvised music.


Every reconception is a conception.


Repetition of conceptions

Quoted in Gabriel Tarde’s Laws of Repetition: “Scientific knowledge need not necessarily take its starting-point from the most minute hypothetical and unknown things. It begins wherever matter forms units of a like order which can be compared with and measured by one another, and wherever such units combine as units of a higher order and thus serve in themselves as a standard of comparison for the latter” (von Naegeli, address at the congress of German naturalists in 1877).

This is incredibly helpful for my own thinking. When we take some understanding that helped us make sense of X, and use it to make sense of Y, what exactly is repeated that makes it the same understanding or idea or conception?

To get very specific and concrete: when I first began to understand Nietzsche, the conceptions I learned, which I found nearly impossible to articulate explicitly, helped me re-understand a great number of questions, confusions and mysteries that (at least prior to the understanding) seemed unrelated to the material Nietzsche was presenting. Reconceptions burst forth from nowhere and rippled through my memories, changing them. I knew it even prior to recollection. I could feel the change in my soul with an intuitive immediacy that defied language, but which could be used almost effortlessly. The changes even altered my perceptions of music, poetry and the world.

To me, it seemed like I’d just read something exciting that inspired new ideas. I would be forced to put the book down and think, talk or write. But sometimes these inspirations were related to problems Nietzsche had sketched out earlier in the book but left suspended, painfully unresolved. Often, the next day, when my inspiration subsided enough to permit further reading, I would read one of my own thoughts, printed out on the page. Nietzsche had implanted a thought in my head.

One other peculiar effect of these reconceptions was learning how many of my explosive new thoughts were rediscoveries of commonplace insights that I thought I understood well enough, but had rejected as truisms, cliches, platitudes or nonsense. Think again. Things I’d heard recited myriad times suddenly had intense meaning, and I was the first to discover what was hidden in plain sight. It took dozens of humiliations to realize I was not the first to unlock the deeper significance of these words. In fact, I was the last. These insights were new only to me. But they could be known only through this strange kind of explosive, renewing reconception.

So, again: What was conveyed in this learning that resulted in new understanding? What was rippling through my psyche? What was I using to make new sense of memories and new experiences?

For now, let’s call these mysterious, indirectly known entities conceptions.

Why were these conceptions so easy to use, but so hard to talk about, much less encapsulate with explicit language?

How do we discover or invent — or instaurate — new conceptions? (Or, more often, rediscover, reinvent, reinstaurate conceptions that are new to us?)

What if all our understandings are just the workings of conceptions? And what if our overall understanding of everything in total is just interrelated conceptions working in concert, perhaps related and coordinated by yet other conceptions?

If we change our conceptions, what impact can this have on our most basic understandings of personhood and of the very nature of truth and reality?

When can we change our conceptions? When should we change them? When should we preserve or protect them?

And, finally, how do we decide together as a society which conceptions we ought to adopt and use in our lives together? Consider the complicating factor that it is up to our existing conceptions to make decisions about which conceptions should be changed or preserved…

Neologisms vs unusual uses of familiar words

“In fact, when a philosopher needs a word to express a new generalisation, he must choose between two things; he must choose a neologism, if he is put to it, or he must decide, and this is unquestionably better, to stretch the meaning of some old term.” — Gabriel Tarde

If I didn’t already own Laws of Imitation, I’d have to buy it just to have this quote in physical form. I face this dilemma all the time when trying to say something new. Both objections, to neologisms and to unusual uses of familiar terms, are standard features of philosophical defense mechanisms, whose purpose is to disqualify speech capable of undermining a philosophy’s conceptions.

Reconceiving conceptions, part 1

A note on word choice: I am experimenting with using the word “conception” in place of “concept”. A conception is a conceiving move that produces a concept. A concept can be one of any number of artifacts, all of which can be viewed as alike in that they are produced and reproduced (comprehended) by the same conception.


If you think about it — and few of us do — thinking is an extremely mysterious activity.

Thinking is never more mysterious than at the edges of intelligibility, where, in order to think with any coherence, clarity or conviction, a thinker must first find new ways to make clear unified sense of material that is fragmentary, murky and perplexing. These new ways of making coherent sense are conceptions.

When one lacks the conceptions needed for thinking, the missing conceptions stand starkly absent. It is similar to how we suddenly become hyper-aware of our reliance on a humble body part, like a little toe, once it is injured or stops functioning, or how much we use a utility when service is interrupted, and we keep mindlessly flipping on light-switches even though the electricity is out.

It is when conceptions and thinking break down that we think about the activity of thinking and experience how mysterious it is.

For normal people, the experience of grappling with inconceivability is relatively rare. Most things make sense most of the time — or at least most relevant things make sense. Of course, many things remain incomprehensible, inexplicable, irrational, confusing, frustrating, chaotic, crazy or mysterious — but these things tend to be pushed out to the margins. They are labeled “irrelevant” and ignored. Or they are labeled as “evil” or “delusional” and condemned or despised. Or they may be labeled “mysteries” and placed beyond human comprehension, for wonder, contemplation or worship. Generally, nothing short of catastrophe or crisis is sufficient to motivate a person to reconceive and understand something that defies comprehension.

Normally, normal people rely almost exclusively on ready-made conceptions to produce whatever thoughts they think, and to form whatever beliefs they hold. Infinitesimally few beliefs are produced by thinking. Nearly all beliefs are conceived automatically, in perception. Most conception occurs prior to thought, habitually and invisibly, in the continuous act of perception, where conceptions intercept and conceptually format sensations prior to any conscious thinking. When perceptions cohere autonomously in a form that lends itself to effortless intelligibility — self-evident truth — truth and reality are indistinguishable. This state of mind is called “naive realism.”

Is naive realism bad? Many will insist “yes” but this judgment is itself the product of conception — perhaps, ironically, a habitual and unconsidered conception of precisely the kind it disparages.

Naive realism can also be conceived as an ideal. This is what I intend to argue, and I intend to argue it from a highly abnormal angle: that of a design strategist.


I mentioned that normal people normally do not think about thinking nor the conceptions they have at their disposal for perceiving and conceiving truth, and I referred to design strategists as abnormal in this respect.

Design strategists are forced to think about thinking, conceptions and perceptions all the time. A total breakdown of thought, and attempts to resolve the breakdown and resume thought, are just part of the work.

This is because design strategists are crisis agents. We are primarily hired to resolve crises, or to create crises in order to help organizations innovate, differentiate or disrupt their industries and throw their competitors into crisis, all for the sake of gaining competitive advantage.

Design strategists are professional crisis mongers. The most important component of such crisis mongering is design research, and the ideal outcome of design research is what I call “precision inspiration”.

Explaining strategic design research and precision inspiration provides context for understanding why strategic design demands thinking about thinking.


The best way to explain design research is pragmatically, presenting it in terms of what it does. And since design research was formed in the crucible of business, let’s discuss what it does in terms of benefits, using the preferred genre of the business world, the sales pitch.

What are the benefits of design research?

First, and most obviously, design research informs decisions. It helps organizations identify opportunities for improvement. It helps them understand precisely what can and should be improved, why that improvement will matter to people, and how the improvement ought to be made so that efforts to improve things have their intended effect. And these improvements are not only for customers, but for all people involved in the organization — customers, employees, partners, leaders, investors and any other kind of stakeholder. Design research helps organizations “design the right thing, and design the thing right”. Research improves the product of an organization.

Second, design research looks at opportunities through the lens of an organization’s capabilities, and especially those capabilities unique to the organization and therefore potentially differentiating. The improvements found are improvements only this organization is able to provide. Research differentiates the product of an organization. The product is not just better — it is uniquely better, and this organization is the only one able to provide it.

These first two benefits supply the “precision” part of precision inspiration. They focus effort on a sharply-defined problematic region, where potential value is most concentrated.

Third, design research provides persuasive evidence that helps leaders align organizations around particular projects. If everyone in an organization is persuaded that a project is worthwhile, energy otherwise wasted arguing for following divergent paths — or even taking those paths and working at cross-purposes — is applied forcefully in a single direction. Morale-sapping doubts are answered, freeing participants to invest energy into the project, optimistic that their efforts will bear fruit. Design research helps organizations align and improves efficiency and effectiveness of production.

Fourth, design research also drastically improves team dynamics and helps teams collaborate more effectively and enjoyably. By introducing the scientific method into design processes, it brings enlightenment values to the notoriously authoritarian milieu of the workplace. Instead of uninformed speculations and untested intuitions (the products of private imaginations, prejudices, preconceptions and biases) competing to prove that they possess esoteric insights into the souls of The User or The Customer, and therefore have the answer on what solution to build, everyone is free (or freer) to propose questions to ask and hypotheses to test with real people, in order to assess the degree of validity in everyone’s ideas and hunches. The stakes are lower and cheaper, so democratic participation is more affordable. And the output of the research typically partially validates multiple views in ways requiring new combinations. So ingenuity is contributed from more sources and woven together ingeniously by yet others, and ultimately the idea can only be said to originate in the entire team working together on a shared problem. Research improves the experience of production, which lays the political groundwork for the climax of this pitch: the inspiration part.

The inspiration of design research comes from how it can help us reconceive what we are doing, how we are doing it and why it matters. This is important, because our repertoire of conceptions enables and constrains what we think, believe, imagine and invent. Conceptions also shape our perceptions and help us ask clear questions. The limits of our conceptions are the limits of our minds, and the limits of our capacity to take intelligent action. In the most productive research, new conceptions are learned directly from participants in the research, in the process of understanding their worldviews. Yet more conceptions must be found/made (or instaurated) to make sense of the full range of conceptions learned and to link them to the conceptual tools of the various disciplines collaborating on a solution. This can rarely be done with the available stock of existing conceptions, so in effect each team is forced to create a new conception-system — a small, local philosophy tailored to the project — that makes the problem intelligible and soluble.

This is an arduous, perplexing and anxious process. Not all people have the intellectual flexibility, faith and fortitude to do it. But when it is done successfully, new conceptions cause novel possibilities to pop into existence, ex nihilo — possibilities that were literally inconceivable before. This sudden influx of possibilities and outpouring of novel ideas — even new goals, purposes, values — resulting from the acquisition of new conceptions is, in fact, precisely what inspiration is.

The novel ideas produced by research are far less obvious and far more relevant (because they were acquired through precise understanding of specific people and specific organizations) than ideas produced by the general truisms of industry conventional wisdom. Because industry conventional wisdom processes the same old facts in the same old way, it produces nothing but the same old same-old: safe, stale, predictable, undifferentiated ideas.

This new, previously inconceivable way of conceiving precisely what this organization can do for precisely these people it exists to serve, conceived in a way that makes the problem thinkable in a shared way for all people involved in the effort and aligns them in solving it, is precision inspiration.

Deep, rigorous, courageous research is the most effective and reliable way to induce such precision inspiration.

Doing research in this way, day in, day out, year in, year out, changes one’s conceptions of conceptions and forces one to rethink how thinking works. A life of producing myriad small, specialized philosophies for specific problems eventually produces a comprehensive general philosophy that expands far beyond the limits of business, or of any compartmented life activity, and changes one’s view of everything.

In other words, it becomes a fundamental philosophy: a philosophy of design of philosophy.


To be continued… Design should be invisible, and so should our conceptions!

John Dewey on class supremacy

From Human Nature and Conduct:

We are forced therefore to consider the nature and origin of that control of human nature with which morals has been occupied. And the fact which is forced upon us when we raise this question is the existence of classes. Control has been vested in an oligarchy. Indifference to regulation has grown in the gap which separates the ruled from the rulers. Parents, priests, chiefs, social censors have supplied aims, aims which were foreign to those upon whom they were imposed, to the young, laymen, ordinary folk; a few have given and administered rule, and the mass have in a passable fashion and with reluctance obeyed. Everybody knows that good children are those who make as little trouble as possible for their elders, and since most of them cause a good deal of annoyance they must be naughty by nature. Generally speaking, good people have been those who did what they were told to do, and lack of eager compliance is a sign of something wrong in their nature.

But no matter how much men in authority have turned moral rules into an agency of class supremacy, any theory which attributes the origin of rule to deliberate design is false. To take advantage of conditions after they have come into existence is one thing; to create them for the sake of an advantage to accrue is quite another thing. We must go back to the bare fact of social division into superior and inferior.

Four characteristics of a “religious” philosophical conversion

Not all philosophical shifts have the character of a religious conversion. However, I do think religious conversions are essentially philosophical shifts with several distinctive features:

  1. The new philosophy replaces one’s most fundamental unifying concepts and consequently effects a near-total transfiguration of experience.
  2. The shift creates a life experience that is not only useful (that is, helps people function more effectively) and usable (that is, helps people think and communicate more clearly and coherently), but intensely desirable. A feeling of value floods into the world.
  3. The new philosophy is oriented toward a mind-transcendent reality. In other words it points beyond what is experienced and known to what can potentially be experienced and known.

A fourth characteristic is highly desirable, but not necessary: Ideally, it fosters solidarity with other people and supports a community of faith.

And yes: I am arguing that all the other features commonly associated with religion are nonessential — just accidents of the history of concepts.


Ipseic/alteric moralities

I have been re-re-re-re-re-reading Daybreak. Having taken such a long break from reading Nietzsche, but meanwhile having carried his concepts — thoroughly and permanently internalized, but largely inarticulate — out into my reading of other thinkers and into my professional design practice and personal life — coming back and reading him again is revelatory.

One thing that is standing out sharply is a theme of ipseity and alterity: what is of myself, and what is of other?

(Metaphysical digression: I believe this is where I originally picked up my habitual (re)mapping of immanence and transcendence. If I am not mistaken, most religious people, when speaking of immanence and transcendence, map the terms objectively: the world of mundane objects in space is immanent, and this mundane reality is a manifestation of a divine realm which is transcendent. My view of transcendence is subjective — and more precisely phenomenological. Immanence is relative to self: it is the world as I know it, both tacitly and explicitly. An elegant way to express this idea is that, for any self, immanence is one’s own pragmatic meaning of the word “everything” — all that follows from one’s belief in the existence of “everything”, as one means it. Transcendence is what is real beyond one’s own conception of everything, and it generally makes itself known through novel immanence, aka surprise, leaving one apprehensively aware that reality is far more than what one experiences and knows of it. It is this subjective conception of transcendence that opens religion to me. If objective transcendence were the only option, I’d be an atheist, and until I found this alternative metaphysical mapping, I was an atheist. Fortunately, even traditional religions are entirely receptive to this understanding, and so I can participate in religious life and worship as a member of a community.)

The rest of this post sort of fell apart, but I want to post it anyway…

Continue reading Ipseic/alteric moralities

Three hard truths about forgiveness

It is a hard truth that we can never really heal from the trauma of conflict without forgiveness.

It is an even harder truth that real forgiveness, the kind that actually heals, is something actual that happens between real people, who must willingly exchange forgiveness in order to actualize it.

Of course, some people will dispute this, and insist that you can, in fact, forgive someone else inwardly, in your own mind or soul. But this is not forgiveness. A better term for it would be “reconciliation”. Such reconciliation is not with the other real person, but rather with an image of them that belongs to you, alone, and occurs entirely within your own mind. You reconcile yourself to that person being dead to you, and you overwrite your memory of them with a eulogy. That is the furthest thing from forgiveness. It is almost vengeful.

The third hard truth of forgiveness is the hardest truth: Those with whom we most need to exchange forgiveness are often the very ones who never recognized our reality in the first place. We were always semi-fictional characters in their autobiographies, playing the role they assigned us. When our character creates conflict in the story (perhaps for insisting too much on being a real person), the author kills it off and then unilaterally reconciles with it. The trauma is written into their story, and belongs to them alone. This is life as they live and write it.

These are the ones who need forgiveness for an unwillingness or incapacity to exchange forgiveness.

Love and self-respect

At the cusp of adulthood, in the summer of 1990, I became aware that I had two modes of esteem and identification, which I labeled “what I love” and “what I approve of”.

I decided at that point in my life to embrace and identify with what I approved of and to distance myself from what I loved.

This choice might seem strange by today’s standards, but I will argue that this was a necessary and wise decision.


In the autumn of 1990, my friend Rob handed me a slip of paper upon which he’d typed a Rilke quote: “A merging of two people is an impossibility; and where it seems to exist, it is a hemming-in, a mutual consent that robs one party or both parties of their fullest freedom and development. But once the realization is accepted that even between the closest human beings infinite distances continue to exist, a wonderful living side by side can grow up, if they succeed in loving the distance between them which makes it possible for each to see each other whole against the sky.” I feel sure that this passage completed and sealed my choice.

I believe that taking this path allowed me 1) to cultivate a self-respectful (approved of) selfhood, and 2) to gain the distance needed to love someone else precisely for her otherness. “What is love but understanding and rejoicing at the fact that another lives, feels and acts in a way different from and opposite to ours?”


A capacity to love that which one finds compelling, admirable, but profoundly alien is a key virtue supporting living toward transcendence.

A capacity to form self-respectful collaborations with likeminded souls is also a key virtue in transcendent becoming — growing beyond one’s limits.

And the wisdom of discerning selfhood and otherhood, and of forming appropriate relations with each, is necessary to avoid hating what you love and loving what you hate.


Today I am speculating on what might have happened, had I made the opposite choice.

What if I had chosen to identify with “what I love”, and distanced myself from “what I approve of”?

Earlier, I mentioned that my decision probably looks pretty odd from the standpoint of 2021. Isn’t approval a cold, rationalistic standard? Shouldn’t we love ourselves, rather than just approve of ourselves?

But consider the consequences: If one identifies with what one loves (and what one loves most is one’s transcendent complement, what one is not) one tries to become precisely what is least possible to be. Failure is inevitable, and when it happens, there is a real risk that one will envy and resent those who succeeded  —  again, precisely those who are most transcendently complementary, those whom one could best love across distance as other.

And when one invests all of one’s time and energy pursuing an impossible ideal, this diverts time and energy away from the development of one’s own real potential. One’s real possibilities are neglected, and the self is left in an undeveloped state incapable of inspiring self-respect. In its place, one authors a persona or adopts an identity and uses it as a substitute for selfhood. But this is a thin deception. The assertion of one’s persona or identity is a head-splitting whistling in the dark that barely masks the even louder shame and self-loathing looping beneath. Everything that threatens the illusion is viscerally painful and excites hostility.

Unfortunately, this speculation is not purely speculative — but, in fact, informed by observations of people I know and have known, and many others I’ve listened to from a distance.

And I am worried, because I suspect that this peculiarly selfless, but also otherless, state of mind might in fact be psychologically common, or even predominant in the last two generations. The strange, hyper-intense, symbolic politics of our age might be the projection of this inner hell onto the outer world.

Reconceiving concept

Concept. Con- + -cept. Together-take.

A concept takes together a multiplicity as a unit.

Concepts do not have form; concepts give form.

It is not possible to give an example of a concept. Concepts can only be demonstrated.

Most of what we say about concepts, and the way we use the term “concept”, is pure category mistake, ontological confusion. We misunderstand the kind of thing a concept is, and the practical consequences proceeding from this misunderstanding generate profuse unintelligibility.

How do we acquire a concept? We follow what it does. We follow an argument, an analogy, a story, a pattern, a system, until we pick it up, and reproduce it in ourselves. We follow along, and then we get it. We are initiated into the concept and start using it.

Really well-conceived concepts become habits, and are no longer guided by language or by intention. They guide language and participate in our intentions. They become imperceptible extensions of our personal being, reflected in our experience of reality.

Concepts are intellectual concavities, and this is one reason why we so often resort to spatial metaphors when speaking of concepts. We enter concepts, inhabit them, and look out from them, perceive from them, understand from them, experience from them, respond from them. Concepts are not convex objects that we can grasp. Concepts are that by which we grasp.

Concepts comprehend. Concepts are not comprehended, though truths are comprehended when a concept is received or conceived.

Do we conceive an idea? I would prefer a more finely-articulated account, that includes invisible, silent, but crucially important moral deeds: We face an incomprehensible situation. We try to comprehend it, despite the fact that we have no plan, principles or precedents to help us comprehend it. We enter the void of inconceivability; we undergo perplexity. “We do not know how to move around” in perplexity. We cannot even state the problem we are trying to solve or the question we need to ask, much less answer it. So we grope. We follow faint hunches. We try, fail, try, fail. We follow our noses and our guts. We cannot say what or who guides us, but we are guided, very subtly. If we keep our heads — if we refuse to turn around and flee back to old, familiar, inadequate concepts — if we stay alert to inaudibly quiet voices speaking in native languages of our most private personhood, we somehow conceive a way to think the inconceivable, and a concept is born. The concept then comprehends the situation and generates an idea. But our coarse, public words leap to “I had an idea.”

Concepts are conceived, not comprehended. But often when we acquire a concept, we re-conceive it and become able to comprehend that by which the concept was demonstrated. We bolt right on past the demonstration and enjoy an effusion of ideas of our own that suddenly, miraculously erupt — having been made possible through this new concept.

When we are taught a concept, often we credit the teacher only with teaching us the content of the demonstration. We credit ourselves for the outpouring of new ideas, inspired by this little nugget of truth. We are inspired, become creative, and revel in our new powers of insight and invention.

The modest nugget of truth that conveys a concept through demonstration, initiates a learner into new possibilities of thought inconceivable prior to the insight, and inspires myriad acts of creativity — could this be the philosopher’s stone?

Until we acquire a concept, all ideas comprehended by the concept are incomprehensible, or even more often they are misunderstood — that is, they are grasped using concepts that comprehend their content in a different and conflicting way. Even meaningful artifacts, whose meaning is known, felt or otherwise accessed by way of an alien concept, are opaque until the concept is acquired.

Well-conceived concepts form systems of cooperating concepts. They function together, harmonize together, corroborate and reinforce one another, combine to make coherent sense of things. Such concept systems make “things in the broadest possible sense of the term hang together in the broadest possible sense of the term.” Concept systems, which use concepts to select and connect other concepts, are philosophies.

As with simpler concepts, philosophies cannot be given directly. They are always demonstrated. When a philosophy is demonstrated, it is necessarily demonstrated using content, but what animates the demonstration — the movements of concept — is the real substance of the philosophy. When the concepts are received the content of the philosophy is comprehended, and, more often than not, confused for the philosophy.

I learned to conceive concepts this way from Nietzsche. I would read his arguments and aphorisms, puzzle over them, turn them this way and that, entertain them, fight them, connect them in various ways, and generally struggle to make coherent sense of what he was saying. He would reduce me to despair, which would cling to my entire lived experience for days and weeks. The unresolved perplexities would pile up and intensify. Then he would resolve one of the perplexities with a tiny crystalline insight. This little seed of a clue would instantly resolve the problem perfectly, then explode beyond the problem, resolving myriad known and unknown perplexities, so rapidly and comprehensively it was nearly impossible to keep track of the knowledge that suddenly was just existent, appearing ex nihilo. Even well-understood knowledge would be blasted apart, evaporated and reconstituted in new significance. And the change went beyond knowledge, too, into capacities for understanding. Truths that had been incomprehensible just seconds before were now perfectly obvious. I found myself inventing completely new ideas, brilliant ideas, inspired by earlier aphorisms or images. …But then I would read on, and there it would be, typed out, verbatim: one of my original thoughts. Nietzsche was somehow inducing these original thoughts, then proving that it was intentional, in some inconceivable way.

Two problems arose from this experience. The first was the hardest. I found my reconstituted philosophy disturbingly resistant to language. I was unable to convey what I knew, and even the things I knew in this new way were misunderstood entirely by the people around me. And worse, when I would try to convey what I understood, it inflicted terrible anxiety. People wanted to not know what I so badly needed to say, and it was excruciating. I was intellectually imprisoned. I called it “solitary confinement in plain sight”. The loneliness was crushing. But the second problem became the kernel of a more mature philosophy that wanted to understand and articulate how Nietzsche was able to write this way, and what it meant about the human condition and reality itself.

Eventually, after many reconceptions, a few very deep transformative ones, and many smaller localized ones, I began to think of concepts and philosophies as inexhaustible levers for changing our fundamental experience of life, and for opening new possibilities for materially changing the world in ways that might be wiser than if we immediately leap to fixing what seems obviously broken in obvious ways. And then I realized: this is what we always do when we design.

There is a crucially important step in human-centered design, after user research and before detailed design, where we attempt to make sense of what we learn and put it into a form conducive to shaping and motivating design work. Traditionally, it has been called concept, but the word “concept” normally denotes an artifact, an object, a prototype, a model. The process of getting to that concept is often hellish, often in proportion to the depth of the research. Teams are gripped by anxiety. I realized design concepts have exactly the characteristics I listed above. The “concept” demonstrates a concept so team members can pick it up and use it to guide their design work.

To be continued…

Existential nullity

Blindness is not darkness — it burns our eyes with a dazzling glare of churning nothingness.

Losing our sense of smell doesn’t make smell go away. The world is pervaded with a maddening stench of burning rubber.

If we lose a limb, a phantom limb remains, and it aches and aches.

Same with souls: those who neglect their unique personhood and instead adopt an identity will have a glaring, stinking, aching, resentful nullity where a soul should be.

“Precision inspiration”

When people ask me what design research is, my favorite answer is “precision inspiration”.

I know this might seem slightly business romantic, but my meaning is exact, clear, concrete — even a bit technical.


I’ll start by explaining what research is pragmatically, in terms of what it does. And because I’m a business guy, I’ll explain what it does in terms of its benefits. In other words, I’ll start with a sales pitch.

First, design research helps inform decisions. It helps teams identify opportunities for improvements. It helps us understand what should be improved, why that improvement will matter to people and how the improvement ought to be made so that the work has its intended effect. Design research helps organizations “design the right thing, and to design the thing right.” Research improves the product.

Second, design research also provides persuasive evidence that helps leaders align organizations around particular projects. If everyone in an organization is persuaded that a project is worthwhile, energy otherwise wasted arguing for following divergent paths — or even taking those paths and working at cross-purposes — is applied forcefully in a single direction. And morale-sapping doubts about the project can be quelled, so participants can invest real energy into the project, in the expectation that their efforts will produce a positive outcome. Design research done well is organizational alignment magic. Research improves the efficiency of production.

Design research also drastically improves team dynamics and helps teams collaborate more effectively and enjoyably. By introducing the scientific method into design processes, it brings enlightenment values to the notoriously authoritarian milieu of the workplace. Instead of uninformed speculations and untested intuitions (the products of private imaginations, prejudices, preconceptions and biases) competing to prove that they possess esoteric insights into the souls of The User or The Customer and therefore have the answer on what solution to build, everyone is free (or freer) to propose questions to ask and hypotheses to test with real people, in order to assess the degree of validity in everyone’s ideas and hunches. The stakes are lower and cheaper, so democratic participation is more affordable. And the output of the research typically partially validates multiple views in ways requiring new combinations. So ingenuity is contributed from more sources and woven together ingeniously by yet others, and ultimately the idea can only be said to originate in the entire team working together on a shared problem. Research improves the experience of production, which gets us closer to the climax of my pitch, the inspiration part.

The inspiration of design research comes from how it helps us reconceive what we are doing, how we are doing it and why it matters. This is important, because our repertoire of concepts enables and constrains what we think, believe, imagine, invent. Concepts also shape our perceptions and help us ask clear questions. The limits of our conceptions are the limits of our minds, and of our ability to take intelligent action. In the most productive research, new concepts are learned directly from participants in the research, in the process of understanding their worldviews. Yet more concepts must be found/made (or instaurated) to make sense of the full range of concepts learned and link them to the conceptual tools of the various disciplines collaborating on a solution. This can rarely be done with the available stock of existing concepts, so in effect each team is forced to create a new concept system — a small, local philosophy tailored to the project — that makes the problem intelligible and soluble.

This is an arduous, perplexing and anxious process. Not all people have the intellectual flexibility, faith and fortitude to do it. But when it is done successfully, new possibilities pop into existence, ex nihilo, that were literally inconceivable before. This sudden influx of possibilities and outpouring of novel ideas resulting from the acquisition of new concepts is in fact what inspiration is.

The novel ideas produced by research are far less obvious and far more relevant (because they were acquired through understanding users or customers) than ideas produced by industry conventional wisdom, which, because it processes the same old facts the same old way, produces nothing but the same old same-old: safe, stale, predictable, undifferentiated ideas.

Deep, rigorous, courageous research is the most effective and reliable way to induce such precision inspiration.


Meditation on “Microsoft Re-Designs the iPod Packaging”

Apparently I am waxing careericidal once again, because I just posted this on my company’s Slack:


A meditation on the classic “Microsoft Re-designs the iPod Packaging” video.

This video pretty uncannily represents how most design used to be, back in the days before design research, when everybody in the room wanted to be the one who “knows The User” and “knows what The User wants”.

And that smartest, most insightful person, by total coincidence, always turned out to be the most powerful person in the room. Huh.

Then design research came along, and it changed team politics and collaborative dynamics completely. Design practice liberalizes business.

This same thing, by the way, is exactly what happened at the inception of the Enlightenment with the Scientific Revolution. This is how liberalism always happens. (And no, of course it did not happen perfectly out of the gate — but nothing happens perfectly right away, except in the minds of naive fantasists, or the rhetoric of cynics.)

Here’s how I explain my job to myself so my work feels important enough to take seriously: We designers are in effect bringing a social scientific revolution to the notoriously illiberal, authoritarian world of business. This is why design research is the greatest thing ever.

Sometimes it is useful to have the opportunity to remember what it was like back in the bad old days of intuition wars and might-makes-right design.

Entertaining ontology designing

Follow up email to Nick on ontological designing:

Ok, I’m starting to like this paper, and I’m reconsidering my initial resistance to situating myself within this school of thought. Her third sphere of ontological designing, “ontological designing of systems of thought, of habits of mind,” is exactly what I am proposing, and I do accept all her emphasis on coevolution (“While we as humans design buildings, they also design us.”) as true and relevant.

I think the difference between my view and Willis’s is that I believe it is our personal responsibility to assert our own enworlding intuitions and thoughts against simply being passively enthinged by what surrounds us. Just as existentialism grew out of Heideggerian ontology, I am “existentializing” ontological designing by looking at personal self-responsibility within a context that accepts all the same truths Willis presents here.

The core measure of self-responsibility is the quality of one’s own “enworldment experience”. Is the world clear, maneuverable and valuable to you, or is it murky, paralyzing, and worthless/doomed? In other words, did you design your enworldment for usability, usefulness, and desirability, or did you passively or prematurely accept an enworldment that falls short (or worse, a social enthingment)?

My passionate belief is that we absolutely must start with what is experience-near (our own lives, our own active philosophies), physically-proximate (our own tools and places) and socially-connected (our actual relationships, especially our most dialogical ones) and gradually spiral outward to enclose widening peripheries. To believe we must fix what’s way out there, everywhere — the environment, society, politics, other people’s beliefs — is ontological designing’s version of existential bad faith, an attempt to evade self-determination with attempts at other-determination.

Please notice my language improvements. Heidegger’s hideous language has got to go. Everyone seems to want to preserve his terms, but this is the awkward language of discovery. It’s been nearly a century and it’s time to refine. There will be no “worlding” or “thinging” on my watch. “Enworldment” and “enthingment” are vastly better, aesthetically and descriptively.

Room 101

If someone were designing my ideal Hell (or, if you prefer atheistic imagery, Room 101), they would put me on a team that designs by committee for a committee. You don’t even have to sentence me for eternity. A month is plenty to get my teeth gnashing, and more than a month will reduce me to the blackest despair.

The thing that makes a design approach make clear sense to me is that it makes sense of some region of the world in personal terms. We investigate how a specific person does specific things with specific things and experiences specific things, and our job is to make these interactions, artifacts and experiences good by the standard of that person. When learning from users, design researchers redirect all deflection of personal response into speculation on how other people might respond (and users always try it) with “we are interested only in what you think and feel, and what you would do.” By looking at responses one at a time, and only at the end finding any generalities, we rid ourselves of the noisy refractions of what people think other people think other people will think other people will think, which gives us more information on their social-psychological folk-theories and insights into how they would try to design the thing we are designing than on their own personal responses to novel possibilities.

Speculating on how heterogeneous groups of people might react to a design, and designing for an audience instead of persons, is a different art, and an important one. It changes the activity from an interpersonal one to a social one, to use Buber’s distinction. The skillset becomes that of constructing systems that conform to the social rules of that social setting. These rules help people participate as members of a group, performing standard roles, which entails selectively suppressing personal idiosyncrasies for the sake of smooth social functioning. This means the construction, too, must use standard language, in standard ways, denoting familiar concepts, used in familiar ways. Change in this sphere of design is exponentially difficult and often requires power and some degree of coercion.

But if you are trying to do this kind of design in a group which is itself so large that it can no longer function by an interpersonal dynamic, but must adopt social rules to function, now you have something requiring a degree of talent for functioning within social rules to design things that function within set social rules. The smartest option in situations like this is to design activities with new, temporary social rules that “program” the group to interact differently to accomplish different outcomes. (Which is another way of saying: design and facilitate workshops, because a workshop is a temporary social setting with new roles and rules that afford new kinds of work and new work products.) Workshops can produce group outputs that differ from the usual, but they are still stiff, lumbering things that never result in the kinds of surprising and brilliant novelty interpersonal dialogue can produce. And that is probably fine. The stars for which very large organizations reach in their grandest moments are suspended like gravel in the upper reaches of clouds, somewhere above incompetent mediocrity but well below that of the average novelist. Workshop outputs are plenty good enough, 95% of the time.


It just occurred to me: people who always operate by social rules (even their own invented rules), who play the role of their own self-identity (even their own original identity), and confine themselves to the categories of their personal ontology (even an ontology of their own invention) — and consequently find it impossible to improvise in response to another in a dialogical setting — interact with others like participants in little workshops of their own design.

Maybe this is what I despise about political types who see roles and rules governing all things. When the personal is political, dialogue, deep invention, all the inexhaustibly surprising, creative potential of persons encountering the unique personal kernel in the heart of each person’s soul — the mutual conflagration of divine sparks — is lost. Instead corporate stability is imposed and preserved.

Totalitarianism is eternal design by committee.


Room 101.

Anne-Marie Willis’s “Ontological Designing”

Yesterday, Nick freaked me out about the existence of Anne-Marie Willis’s paper “Ontological Designing”. I was so distressed about possibly being scooped, and also about the state of my current project — a distress possibly biologically amplified by an infected eyelid — that I barely slept last night. I was dreaming about this stuff.

Today I got up, read most of the paper and sent Nick the reply below, which seems worth keeping.

Ok, this is not what I am doing, though it is the kind of ontological designing Willis describes here that informs my project.

This paper appears to be written from the perspective of a user contemplating designs-ready-made, not a design practitioner reflecting on design-in-the-making (to adapt Latour’s distinction).

The experiences that feed my thought (experiences I am undergoing, unfortunately, though quite conveniently, on this very project) are the reworkings of understanding induced by the breaking of individual interpretations and understandings upon an (as yet) inconceivable design problem.

In these situations, designers are forced to instaurate new local micro-philosophies that permit collaborators with incommensurable understandings to “align” their efforts to design equipment that can be readily recognized in a present-at-hand mode, adopted, and then used in a ready-to-hand mode. I think this microphilosophizing is an underrecognized gap both in design practice (which tends to focus its thinking on its tasks at hand, and rarely to macrophilosophize) and in philosophy (which rarely participates directly in the kinds of hellish rarefied design projects that inform my concerns).

My work is describing what happens if we apply the lessons of constant local microphilosophizing back to macrophilosophizing.

I think it is important because I’m seeing the same dynamics I see in my mini-hells unfolding in the larger world in our incapacity to align on what to do about — well — everything. The disgruntled tolerance for the postmodern condition and its refusal to macrophilosophize (due to the po-mo allergy to grand narratives) has contributed to a deep fracturing and factionalizing of our citizenry.

And you can see that this idea of designerly coevolution completely misses the central problem: How do we agree on what to do in the first place, in order to world our world into a state where maybe it can coevolve us back into a more livable, peaceful condition? Everyone is full of end-solutions, but at a loss to explain or even frame the problem of why we can’t get there, except to invent theories of viciousness about those who refuse to cooperate. We do not know how to think these kinds of conflicts, which are essentially just political crises — but I think I do have some clarifying insights, thanks to my occasional hell-immersions, and my funny habit of trying to feel better by understanding their hellishness and applying the resulting insights back to my own grand narrative, which I happen to think is better than the ones that developed in the vacuum left by public intellectuals being too smart and stylish to perform their duties.

Differentiating enworldment design

Over the weekend Susan pressed me for details on how an enworldment can be intentionally changed. How does enworldment design differ from Stoicism’s mental toughening-up exercises, or new age self-helpers who advise us to tell ourselves a new story? It was helpful to be forced to get concrete, and to make some contrasts with transformational methods with which enworldment design might be compared or confused.

Difference 1. Enworldment design is morally unopinionated. It does not pursue any single ideal. It could be applied to help a person become more serene, openhearted, generous, evangelical, etc., or their opposites, or none of the above. The goal is a matter of the unique person and that person’s context.

Difference 2. Enworldment design is epistemologically open, but rigorous. There is no single truth to learn or discover, but a plurality of truth possibilities. These possible truths are multiple, overlapping and exacting, based on what concepts are adopted for developing truth. But this is not an arbitrary relativism, because, while there is no single truth, the possibility of untruth is pervasive and incessant — errors, mistakes, lies, etc. harm truths and make them fail in practice in various ways.

Difference 3. Enworldment design is not willfully imposed on the world, but is instaurated within the world, with the active participation of whatever worldly entities are enworlded in the project. The world is taken as a collaborative partner, with its own complex and largely mysterious tendencies and constraints, which are discovered in the course of design and which might even change the very goals of the design. When worldly entities cooperate with the enworldment, truth happens. When worldly entities balk, disappear or sabotage the enworldment, untruth happens. (This, by the way, is my ANTsy flavor of pragmatism.)

Difference 4. Enworldment depends on the destructive and reconstructive power of inquiry. Truth is not some objectlike, noumenal thing preexisting out there which we try to unearth by digging through the phenomenal bracket, until we can pull it out, clean it off, inspect it and have it as what it is and always was. That crude description is closer to (though still very far off the mark) reality, which can never be contained by truth. Truth is only the relationship a person has with reality, and those possibilities are myriad. And those possibilities are fragile. All it takes is looking harder, and truth will always break apart, clearing ground for something new. But if that clearing is investigated, harder and harder, something new is always there. Sometimes the new thing is worse than the old thing, but that, too, can be cleared and replaced. So, evaluation, rejection, restarting, discovering, experimentally developing, testing — this is how the work proceeds.

A corollary to difference 4: Because no truth can withstand scrutiny, the fact that a truth has not withstood it does not obligate us to abandon it. Instead, we should ask questions about tradeoffs. Does the critique render this truth useless, now? Does it expose a flaw that would make it malfunction under certain circumstances? Was the truth durable enough for our purposes, and we just broke it for no good reason, like a kid taking apart a toy? Is there a tougher or more interoperable concept readily available that we can swap in? A concept is an instrument that does some things well and other things less well, not a mystical status of a belief. And just because you can break an instrument doesn’t mean you should break it, so critique judiciously.

Difference 5. Truth possibilities are myriad, but so are truth impossibilities, which is why honesty and good craft are indispensable. As with all design, truth to materials is paramount. Self-delusion, wishful thinking, subjective fudging (overstraining and abusing the mind’s famous flexibility) are all vices that will compromise an enworldment’s integrity, and make it produce untruth instead of truths.

Oh no. Out of time. I’m just going to list the other points in raw form so I don’t forget them.

Difference 6. Enworldment design uses design methods. One of these methods is to take the experience of the design, rather than its artifact, as the ultimate goal of the work. And a good thing, too, because an enworldment’s artifact is an arrangement of tacit processes to which direct access is impossible. The processes can be learned (and are learned in successful reading of philosophy or religious scripture), but what was learned appears only in how one behaves or speaks, never given explicitly.

Difference 7. Enworldments should be useful, usable and desirable.

Difference 8. Enworldments are separated by massive, intensely unpleasant vacuums of incapacity — perplexity, faltering and indifference. Crossing these gulfs of nothingness is what separates the men from the boys.

There’s so much more. I really have to stop now, though.