All posts by anomalogue

Reality-creation community

Another passage from Hannah Arendt’s Between Past and Future:

In my studies of totalitarianism I tried to show that the totalitarian phenomenon, with its striking anti-utilitarian traits and its strange disregard for factuality, is based in the last analysis on the conviction that everything is possible — and not just permitted, morally or otherwise, as was the case with early nihilism. The totalitarian systems tend to demonstrate that action can be based on any hypothesis and that, in the course of consistently guided action, the particular hypothesis will become true, will become actual, factual reality. The assumption which underlies consistent action can be as mad as it pleases; it will always end in producing facts which are then “objectively” true. What was originally nothing but a hypothesis, to be proved or disproved by actual facts, will in the course of consistent action always turn into a fact, never to be disproved. In other words, the axiom from which the deduction is started does not need to be, as traditional metaphysics and logic supposed, a self-evident truth; it does not have to tally at all with the facts as given in the objective world at the moment the action starts; the process of action, if it is consistent, will proceed to create a world in which the assumption becomes axiomatic and self-evident.

Arendt is clearly someone who would have been a member of the reality-based community.

Meaning, means and ends

From Hannah Arendt’s Between Past and Future:

Marx’s notion of “making history” had an influence far beyond the circle of convinced Marxists or determined revolutionaries. … For Vico, as later for Hegel, the importance of the concept of history was primarily theoretical. It never occurred to either of them to apply this concept directly by using it as a principle of action. Truth they conceived of as being revealed to the contemplative, backward-directed glance of the historian, who, by being able to see the process as a whole, is in a position to overlook the “narrow aims” of acting men, concentrating instead on the “higher aims” that realize themselves behind their backs (Vico). Marx, on the other hand, combined this notion of history with the teleological political philosophies of the earlier stages of the modern age, so that in his thought the “higher aims” — which according to the philosophers of history revealed themselves only to the backward glance of the historian and philosopher — could become intended aims of political action. …the age-old identification of action with making and fabricating was supplemented and perfected, as it were, through identifying the contemplative gaze of the historian with the contemplation of the model (the eidos or “shape” from which Plato had derived his “ideas”) that guides the craftsmen and precedes all making. And the danger of these combinations did not lie in making immanent what was formerly transcendent, as is often alleged, as though Marx attempted to establish on earth a paradise formerly located in the hereafter. The danger of transforming the unknown and unknowable “higher aims” into planned and willed intentions was that meaning and meaningfulness were transformed into ends — which is what happened when Marx took the Hegelian meaning of all history — the progressive unfolding and actualization of the idea of Freedom — to be an end of human action, and when he furthermore, in accordance with tradition, viewed this ultimate “end” as the end-product of a manufacturing process. But neither freedom nor any other meaning can ever be the product of a human activity in the sense in which the table is clearly the end-product of the carpenter’s activity.

The growing meaninglessness of the modern world is perhaps nowhere more clearly foreshadowed than in this identification of meaning and end. Meaning, which can never be the aim of action and yet, inevitably, will rise out of human deeds after the action itself has come to an end, was now pursued with the same machinery of intentions and of organized means as were the particular direct aims of concrete action — with the result that it was as though meaning itself had departed from the world of men and men were left with nothing but an unending chain of purposes in whose progress the meaningfulness of all past achievements was constantly canceled out by future goals and intentions. It is as though men were stricken suddenly blind to fundamental distinctions such as the distinction between meaning and end, between the general and the particular, or, grammatically speaking, the distinction between “for the sake of…” and “in order to…” (as though the carpenter, for instance, forgot that only his particular acts in making a table are performed in the mode of “in order to,” but that his whole life as a carpenter is ruled by something quite different, namely an encompassing notion “for the sake of” which he became a carpenter in the first place). And the moment such distinctions are forgotten and meanings are degraded into ends, it follows that ends themselves are no longer safe because the distinction between means and ends is no longer understood, so that finally all ends turn and are degraded into means.

In this version of deriving politics from history, or rather, political conscience from historical consciousness — by no means restricted to Marx in particular, or even to pragmatism in general — we can easily detect the age-old attempt to escape from the frustrations and fragility of human action by construing it in the image of making.

It seems obvious to me that most people — or at least most people one is likely to encounter in a corporate environment — think exclusively in terms of fabrication.

Tweaking our way to greatness

Mere competence cannot surpass mediocrity, no matter how perfectly it achieves its goals.

This is because mediocrity conceives of excellence in negative terms: as an absence of flaws.

Excellence, however, is a positive matter, and it consists in the presence of something valuable.

*

The frank display of flaws can be a way to flaunt excellence.

The excellent, even when deeply problematic or grossly distorted, is always preferable to those things about which nothing either good or bad can be said.

*

Many romantic relationships persist unhappily for the sole reason that nobody produces a flaw sufficiently terrible to justify ending it.

Thwarted fault-finding produces even deeper contempt than successful fault-finding.

*

Mere competence results from seeing only the commonplace, commonsense questions.

The questions are barely even noticed. Usually they are simply taken to be self-evident — implied by reality itself.

All effort is put into re-answering the questions a little better than last time. With each recitation, the answer is tweaked, refined, polished, paraphrased, flavored or garnished a little differently — but the answer is substantially the same, which is why it finds easy recognition.

*

Innovation doesn’t come from inventing better answers; it comes from discovering better questions.

Few people seem to know how to discover new questions, and this has much to do with the aversion most people have to the conditions necessary for finding them. People go about things in ways that actively prevent new questions from arising. Everything presupposes the validity of the old questions, and reinforces re-asking and expert re-telling.

We don’t actually love the old questions and we’re not really that enamored with the answers we produce. We only like the predictability of it all.

But is it that we hate new questions? Actually, no. As a matter of fact, once a new question is posed clearly, people love it. The essence of inspiration is feeling the existence of a new question.

What people really hate is the space between the old and the new question — the space called “perplexity”, that condition where we are deeply bothered and disoriented by something we can’t really point to or explain. We cannot even orient ourselves enough to ask a question.

This is the space Wittgenstein claimed for philosophy: “A philosophical problem has the form: ‘I don’t know my way about.’”

*

How do we enter perplexity? By conversing with others and allowing them to teach us how their understanding differs from our own. What they teach us is how to ask different questions than we’d ordinarily think to ask. But before we can hear the questions they are asking — usually tacitly asking — we must quiet our own questions. (Interrogations are only good for getting answers out of people.)

How do we avoid perplexity? By not allowing the other to speak. Instead we observe their behaviors, look for patterns, impose different conditions and look for changes. We may feel puzzled by the behaviors we see, but we can answer this puzzlement by trying out one answer after another until one turns out good enough, like a child trying to hit upon the correct multiple choice answer to a math problem without really understanding the material.

*

It appears that generative research has gone out of style. There’s a widespread belief that assembling a frankenstein of best-practice parts and subsequently using analytics to detect and correct all the flaws will somehow produce the same results, but more cheaply and reliably — and less harrowingly.

But, here’s the question: Can anyone produce even one example where tweaking transformed something boring into something compelling?

And then consider how many times you’ve watched something compelling tweaked to mediocrity.

Behavior tweaking

In general, people’s interest in one another is practical and behavioral. The minimum knowledge required to elicit desired behaviors and to prevent undesired behaviors from occurring is about all people want.

If we feel we have to understand a person’s experiences to accomplish this, we will make the effort, but otherwise, we will avoid these kinds of questions, because understanding experiences requires a kind of involvement in the other’s perspective resembling immersion in literature, where one’s own worldview is temporarily suspended and replaced with another. And sometimes we don’t come back, fully. Something of the literary world stays within our own, and we see things differently. An understander stands a good chance of being permanently and sometimes profoundly changed by such modes of understanding.

What most people prefer is the kind of relationship scientists have toward matter. The behaviors of objects are observed in various conditions from a distance, and the knowledge is factual: when this happens, this follows. The matter doesn’t explain itself to the observer: the observer does all the explaining. Whatever intentional “thickness” is added to the behaviors is taken from the observer’s own stock of motives. This kind of objective knowledge doesn’t change us or how we see the world; it changes only our opinions about the things we observe.

For a brief moment, the business world felt it needed to understand other people as speaking subjects as opposed to behaving objects. And for a brief moment it appeared that business itself could be changed through the experience of this very new kind of understanding. But now analytics has developed to such a degree that businesses can return to their comfort zone of objectivity, and tweak human behaviors by tweaking designs, until they elicit the desired behaviors.

Parental authority

Parental authority stands on two conditions: 1) the parent’s actual possession of superior knowledge of the child’s needs, and 2) the parent’s intention to apply that knowledge to benefit the child.

Parents sometimes use coercion outside of parental authority, often for the sake of the smooth operation of the household. This in itself is not illegitimate. The problems start when coercion is confused with authority. The primary perpetrators of this are those who actually do not know the difference, and therefore lack authority.

Why qualitative research?

Quantitative research methods (as valuable as they are) can never replace interviews and ethnographic research. Despite what many UXers think, the essential difference between ethnographic research and other forms of qualitative research is not merely that it observes behavior in context, but rather, as Spradley notes in The Ethnographic Interview, that in ethnographic research the person being researched plays a role quite different from that in other methods: the role of informant (as opposed to subject, respondent, actor, etc.). An informant doesn’t merely provide answers to set questions or exhibit observable behavior. An informant teaches the researcher, and helps establish the questions the researcher ought to attempt to understand — questions the researcher might never have otherwise thought to ask. An informant is far more empowered to surprise, to reframe the research, and to change the way the researcher thinks. In ethnographic research the researcher is far less distanced and intellectually insulated from the “object” of study, and is exposed to a very real risk of transformative insight.

This attitude toward human understanding goes beyond method, and even beyond theory. It implies an ethical stance, because it touches on the question of what a human being is, what constitutes understanding of a human being, and finally — how ought human beings regard one another and relate to one another.

*

The passage that triggered this outburst, from Hannah Arendt’s The Human Condition:

Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: “Who are you?” This disclosure of who somebody is, is implicit in both his words and his deeds; yet obviously the affinity between speech and revelation is much closer than that between action and revelation, {This is the reason why Plato says that lexis (“speech”) adheres more closely to truth than praxis.} just as the affinity between action and beginning is closer than that between speech and beginning, although many, and even most acts, are performed in the manner of speech. Without the accompaniment of speech, at any rate, action would not only lose its revelatory character, but, and by the same token, it would lose its subject, as it were; not acting men but performing robots would achieve what, humanly speaking, would remain incomprehensible. Speechless action would no longer be action because there would no longer be an actor, and the actor, the doer of deeds, is possible only if he is at the same time the speaker of words. The action he begins is humanly disclosed by the word, and though his deed can be perceived in its brute physical appearance without verbal accompaniment, it becomes relevant only through the spoken word in which he identifies himself as the actor, announcing what he does, has done, and intends to do.

*

The dream of quantitative research rendering qualitative research obsolete might be one more instance of an age-old fantasy: a world of people who are seen and not heard, who obey our predictions and commands, to whom we can dictate terms. Such beings cannot remind us of the difference between reality itself and one’s own conceptions of it — and they leave the mind in peace to be “its own place, and in itself can make a Heaven of Hell.” Hell is not other people, per se. It is speaking people showing us what we’d rather not know, which can strip us of what we knew but can no longer believe.

*

(Maybe we lack faith in our capacity to recover from loss of faith?)

Useless (or worse)

When chaos is experienced, a failure of reason has already occurred. In chaos we encounter realities our reason is not equipped to order and make sense of. This is the experience of perplexity, where we relive the horror of birth.

The only people in the world perverse enough to find meaning in such meaninglessness are philosophers. Wittgenstein said it best: “A philosophical problem has the form: I don’t know my way about.”

*

We prefer to believe the world is discovered bit by accumulated bit in a vacuum of space and knowledge. We want to believe in a world that is created ex nihilo. What we have is established, and what isn’t is nothing.

We hate to believe in a world that is articulated from chaos, because we hate the consequence: the order we have lent to the world, which has made it familiar and predictable, could suddenly recede and shock us with raw alienness.

*

It is this possibility — that the world can be revealed as strange — that makes people hate their neighbor. It is the neighbor, with his strange views, peculiar habits, and outlandish tastes, who holds jointly with us the potential to defamiliarize the world. That potential, though, is only actualized voluntarily, by ourselves. Each person holds the power either to open the door to the neighbor or to bar it. If the neighbor is invited in, if his views are seriously entertained, the two gathered in such a spirit of hospitality and truth are in a position to recognize that reality and our idea of reality are not identical. In some deeply disturbing and inexpressible way, reality transcends idea. Without the disruption of the neighbor, idea eclipses what is beyond idea, and becomes idol.

But the door can be barred. We are free to abide in the mind. “The mind is its own place, and in itself can make a Heaven of Hell.” By withholding the status of “neighbor” from all but the like-minded — those who ditto our opinions, who agree with us that the details of reality that appear to contradict our views (or, more subtly, the exclusive validity of our views) are irrelevant (if not outright deceptions), who share our antipathy toward our non-neighbors and agree with us that entertaining their ideas is fruitless at best (and possibly corrupting) — we find willing partners in reducing the world to pure idea. The impurity rejected is that of a reality that transcends mere idea.

*

We stabilize our sense of reality through a variety of intertwined methods. One of these methods is by successfully observing and describing the world to ourselves. Another is to reliably anticipate or predict events, or even better to influence or control them. But perhaps the most important method for creating a solid sense of reality is to find agreement with others. This last method can compensate for the absence of the others.

*

[Solipsism] “is rare in individuals — but in groups, parties, nations, and ages it is the rule.” (Nietzsche)

*

When a group agrees with itself that whatever appears to be an anomaly is mere noise, or error, or deception, or irrelevance, it is able to avoid (or at least postpone) confrontation with anomalies, which are the sparks of chaos, the pinholes in our knowledge. Anomalies remind us how much more there is to things than we possess as individuals, or as members of a particular group.

It is easier to love the reality we have made for ourselves — our own sense of truth — than it is to love reality. Reality challenges us, makes claims on us, changes us. If we think of ourselves as discrete, unchanging, self-consistent beings, reality threatens us with mortality. If we think of ourselves as connected, evolving, expanding creatures, reality offers us perpetual natality.

*

We hate the possibility of the situation that requires the aid of philosophy, so we deny that possibility and we deny the use of philosophy. Philosophy is a waste of time at best, and most likely corrupting.

But perhaps there’s some validity to the suspicion. Just as generals thrive on outbreaks of war and doctors thrive on outbreaks of disease, philosophers thrive on outbreaks of disillusionment.

The slipperiest slope

The slippery slope argument is the slipperiest slope. In fact, it is the slipperiness itself, a universal lubricant that creates a friction-free abstract world where the slightest tilt automatically dumps whatever sits on it into an abyss of catastrophic consequences. The “friction” it removes is that of human judgment and responsibility — our ability to decide to change course.

Supra-individual mind

Every thought thinkable by an individual mind has already been thought. Future thoughts will come from people who know how to think collaboratively beyond their own individual capacity as responsible participants in a supra-individual mind.

This idea should not be mistaken for common “collectivism”. It is the very opposite of the mob mentality, where each individual is reduced to what all human beings have in common, becoming roughly identical, and behaving according to animal tribal instinct. Supra-individual thinking makes use of intellectual differences as well as commonalities. It is also different from hierarchical team thinking, where one mind understands the problem completely and then enlists the help of others to manage and execute. Supra-individual thinking means more than one person is required to participate if an idea is to be fully understood, so no one person has the “vision” in its entirety. Supra-individual thinking is also different from the kind of thinking that comes from (relatively) homogeneous groups, where once an idea is conceived by one member of the group, all are instantly and effortlessly able to grasp it, because arriving at the idea was simply a matter of quickness or luck. Supra-individual thinking arrives at agreements, though not agreements in which each person holds an identical conception and opinion; rather, each person holds conceptions and opinions compatible with those of the others in guiding collaborative action. And finally, supra-individual thinking is not a division of labor among experts in different disciplines. The coherence is not mere systematization of separate black-box parts, but organic, conceptual coherence. Supra-individual thinking is unified intuitively and tacit-practically as well as rationally.

In collaborative thought, the group somehow comes to know something coherently, which is only later completely understood by some or all of the group, but in the meantime is effectively applied to real-world problems.

*

Supra-individual mind is similar to common sense, in the meaning of “the sense of reality arising from the five senses perceiving together”. It’s the blind men and the elephant story, except with temperamental/psychological differences substituted for circumstantial ones.

*

Supra-individual mind is the concrete actualization of pluralism. It begins with tolerance and skepticism, but then moves far beyond them.

Geertz on irony

Geertz (from his essay “Thinking as a Moral Act”):

“Irony rests, of course, on a perception of the way in which reality derides merely human views of it, reduces grand attitudes and large hopes to self-mockery. The common forms of it are familiar enough. In dramatic irony, deflation results from the contrast between what the character perceives the situation to be and what the audience knows it to be; in historical irony, from the inconsistency between the intentions of sovereign personages and the natural outcomes of actions proceeding from those intentions. Literary irony rests on a momentary conspiracy of author and reader against the stupidities and self-deceptions of the everyday world; Socratic, or pedagogical, irony rests on intellectual dissembling in order to parody intellectual pretension.”

It seems to me that systems thinking — at least thinking about systems in which the thinker is a participant — might require a certain degree of irony. Our experience of being caught up in a system is one thing, but what is required to adjust or change the system is another — and the connection is rarely obvious. That experience is an intrinsic part of the workings of many systems, particularly management systems.

Limits of the explicit

Explicit forms of understanding and communication (explicit truth) can represent only some aspects of reality. In conflicts between rationalism and irrationalism, enlightenment and romantic ideals, suits and creatives, what is at stake is the leftover reality — its nature, its unity and/or multiplicity, how/whether truth can be established/shared, and how it relates to those realms of reality that can be known and spoken of explicitly.

My own hunch is that the non-explicit aspects of reality are precisely those that matter to us, and that the near-universal requirement that things be known and spoken of in an explicit mode serves as a filter that systematically excludes the non-explicit from consideration in most collective endeavors.

I also think the non-explicit aspects of reality are precisely those that most need to be agreed upon and shared, but this agreement and sharing is different from agreement on fact or sharing a belief in the validity of an argument.

Conserving, simplifying, forgetting

When a person calls himself a “conservative” what precisely is it that is conserved? Is it ideas? Do conservatives wish to keep valued ideas intact and pure?

Or is it a wish to conserve our limited store of moral energy? Despite what we would like to believe, we cannot just will this energy into existence, because will itself is constituted of this energy.

And even if energy were unlimited, time is indisputably limited. If we expend most of our energy and time sifting through a near-infinite number of details, then wrestling to organize the mess into something clear and cohesive, wouldn’t the result of this effort be so complicated and unwieldy that our efforts would be hopelessly encumbered (not to mention pleasureless)?

It seems our choice lies somewhere on a continuum ranging between “analysis paralysis” in the face of innumerable disorganized facts on one hand, and decisive, energetic action based on simplification verging on willful ignorance on the other. To put it in Yeats’ words, “The best lack all conviction, while the worst / Are full of passionate intensity.” I think this tendency grows more and more exaggerated as the old fundamental thought-structures of a culture begin to give out under the pressures of new social conditions, and new, underdeveloped and overcomplicated ones vie (lamely) to replace them.

*

Does change resulting from consideration of new and multiple perspectives necessarily mean appending to and complicating our idea-world, making it increasingly unlivable? Probably at first. But thinking deeply can also have a simplifying effect. This simplification, however, itself takes time and energy, and demands modes of thinking many people find even more uncomfortable than dealing with baroquely rehacked, elaborately epicycled and recycled concepts.

Perhaps it is not over-simplification that makes ideologies so damaging to the world — since, after all, all thinking and all abstraction involve selective forgetting and remembering (what we call discerning relevance and discovering generalities) — but rather that the simplifications take into account only what one group or another considers relevant.

Shibbolethargy

Shibbolethargy: A form of intellectual laziness which uses the tools of thought (ideas, concepts, arguments and symbols) to create an appearance of rigorous thought, when in fact the true aim is to signal one’s membership in some particular tribe (and consequently unconditional opposition to other tribes).

At the root of shibbolethargy is the desire to evaluate ideas and actions ad hominem rather than on their own merits, while appearing to rely on principle and reason.

The attitude a shibbolethargic critic strikes is this: when confronted by an uncomfortable, semi-/un-comprehended idea, the most efficient means to evaluate it is to trace it back to the root, to see from what ground the idea has grown (rather than take the opposite course — which requires more trust, time and work — to judge the tree by its fruits). The root of the idea is the believer. If the believer is found to be a victim/perpetrator of some pernicious, delusional ideology, then by extension the idea is contaminated, and all efforts to understand the idea will at best be unfruitful and at worst can result in ideological contamination.

In the end, though many words may be used, many elaborate arguments memorized and recited, and many stories told, both anecdotal and historical, no thinking has been done and no new understanding has been found. The old understanding is defended and preserved, not so much through understanding and responding to other ideas as through proving (solely to the satisfaction of the defender) that understanding and responding to other ideas is unnecessary — and probably dangerous to boot. In other words, one simply refuses to see why one ought to think something one has not already thought.

Decision-making scenarios

Scenario 1 (thesis)

A: “Maybe this will work…”

B: “Before we commit the effort, can you explain how it will work, assuming it might, keeping in mind we have limited time and money?”

A: “I think so. Give me a day.”

B: “We don’t have a day to spare on something this speculative. Let’s come up with something a little more baked.”

… and [eventually, inevitably]

B: “So, what are the best practices?”

Scenario 2 (antithesis)

A: “I have a hunch this will work. Let’s go with it.”

B: “Can you explain how it will work?”

A: “Trust my professional judgment. My talent, training, experience, [role, title, awards, track record, accomplishments, etc.] distinguish my hunches.”

Scenario 3 (synthesis)

A: “I have a hunch this might work. Hang on.” … “Whoa. It did work. Look at that.”

B: “How in the world did that work?”

A: “I don’t know. Let’s try to figure out why.”

Shhhhhhh

Here’s what I learned from the Pragmatists (mostly via Richard J. Bernstein, who has probably had a deeper and more practical impact on how I think, work and live than any other author I’ve read): An awful lot of what we do is done under the guidance of tacit know-how.

After we complete an action we are sometimes able to go back and account for what we did, describing the why, how and what of it — and sometimes our descriptions are even accurate. But to assume — as we nearly always do — that this sort of self-account is in some way identical to what brought these actions about or even what guided them after they began is an intellectual habit that only occasionally leads us to understanding. Many such self-accounts are only better-informed explanations of observed behaviors of oneself, not reports on the actual intellectual process that produced the behaviors.

To explain this essential thoughtlessness in terms of “unconscious thoughts” that guide our behavior as conscious ones supposedly do in lucid action is to use a superstitious shim-concept to maintain this mental-cause-and-physical-effect framework in the face of contrary evidence. I do believe in unconscious ideas that guide our thoughts and actions (in fact I’m attempting to expose one right here), but I do not think they take the form of undetected opinions or theories. Rather, they take the form of intellectual habits. They’re moves we just make with our minds… tacitly. Often, we can find an “assumption” consequent to this habitual move and treat this assumption as causing it, but this is an example of the habit itself. It is not the assumption that there is a cause that makes us look for the cause; it is the habitual way of approaching such problems that makes us look for an undetected opinion at the root of our behaviors. We don’t know what else to do. It’s all we know how to do.

*

I’m not saying all or even most behavior is tacit, but I do believe much of it is, and particularly when we are having positive experiences. We generally enjoy behaving instinctually, intuitively and habitually.

*

Problems arise mainly when one instinct or intuition or habit interferes with the movements of another. It is at these times we must look into what we are doing and see what is unchangeable, what is variable and what our options are in reconciling the whole tacit mess. The intellectual habit of mental-cause-physical-effect thinking is an example of such a situation. Behind a zillion little hassles that theoretically aren’t so big — no bigger than a mosquito buzzing about your ears — is the assumption that we can just insert verbal interruptions into our stream of mental instructions that govern our daily doings without harming these doings. As I’ve said before, I do think some temperaments operate this way (for instance, temperaments common among administrators and project managers), but for other temperaments such assumptions are at best wrong, and at worst lead to practices that interfere with their effectiveness.

Software design and business processes guided by this habit of thought tend to be sufficient for verbal thinkers accustomed to issuing themselves instructions and executing them, but clunky, graceless and obtrusive to those who need to immerse themselves in activity.

*

It is possible that the popular “think-aloud” technique in design research is nothing more than a methodology founded on a leading question: “What were you thinking?” A better question would be: “Were you thinking?”

*

The upshot of all this: We need to learn to understand how the various forms of tacit know-how work, how to research them, how to represent them in a way that does not instantly falsify them, and how to respond to them. And to add one more potentially controversial item to this list: how to distinguish consequential and valuable findings documentation from mere thud-fodder, which does nothing in the way of improving experiences but only reinforces the psychological delusions of our times. If research can shed this inheritance of its academic legacy — that the proper output of research is necessarily a publication, rather than a direct adjustment of action — research can take a leaner, less obtrusively linear role in the design process.

Pluralism, education, competition, and brand

Some forms of competition support pluralism, and some forms of competition undermine it. This fact has become conspicuous to me looking at the issue of school competition.

If K-12 schools were to compete like universities, creating areas of distinction, basing their claims of excellence on the accomplishments and reputations of faculty and alumni, that would be a form of school competition that would generate diverse approaches to education, suitable to a wide variety of adult destinies. But if school competition were to become a matter of who produces the highest standardized test scores, I think it would have the opposite effect. The differences would center around pedagogical techniques for approaching as closely as possible a predetermined ideal.

*

I wish I could find the source, but years ago I read an article that claimed that what was different about the American business culture — the very secret of its flourishing — was its nearly-reckless environment of forgiveness, which encouraged risk, experimentation, optimism and consequently innovation. In Japan, if you took a risk and blew it, that was it for your career. In America, you were admired for your daring.

My question is this: Is our educational system encouraging or undermining this kind of inventiveness? Historically, how much has America’s success rested on technical proficiency — math and science — and how much on sheer confidence? Maybe those ludicrously high self-esteem scores of our students, so frequently ridiculed (most recently in Waiting for Superman), are actually a success indicator.

My fear, to put it in brand terms, is that the USA has turned its back on its brand, and has committed itself to becoming an international commodity. Our educational system is part of our unconscious national brand activation.

*

And to circle this whole mess back around to the start, I think what attracts me to brand is that competition between brands, to the degree that the brands really are positioned against one another, is a pluralistic mode of competition. Multiple standards of excellence compete against one another for business.

Research: intuition transference

I’m trying to develop a thought, and I suspect it’s already been worked out and articulated somewhere, but it sure isn’t present in the business world. It’s related to a point a friend made to me recently, that much of anthropology (and of qualitative research in general) is over-focused on language and ignores much of the pre-/non-linguistic concrete reality that constitutes our private and cultural lives.

As designers, we work largely with language, but as most people will admit, the best designs are great because they relieve us of the necessity to think in language. We just use our tacit know-how and accomplish what we wanted, without ever verbalizing the means or the ends. Designs that require users to stop and verbalize everything as they go are inadequate to varying degrees, based, I think, on the temperament of the user. I am convinced some people live their lives in verbal self-dialogue on most matters, oscillating between verbal thought and execution of what is thought, while others lose themselves in tacit activity, and every requirement to think verbally is an unwelcome interruption. This has serious UI design implications, because the former want things spelled out explicitly, while the latter are feeling for intuitive cues largely invisible to many users.

I’m the second kind of temperament, and it really is why I don’t like to look at clocks, lists or timesheets: they destroy the continuity of my activity. Even when I’m working in words, the words are not explicit questions and answers, but more like blocks I’m mutely playing with. I think this is a Wittgensteinian thought: I’m developing a tacit know-how in the use of language to do some particular thing that I can’t yet verbalize, not entirely unlike building a house using a command language.

I think language is a very flexible instrument, and depending on how well developed it is, it can justly articulate much of what goes on in the tacit practical world; once it can do this, it becomes instrumental, capable of being used in planning and executing. My real question is this: how valuable an investment is the development of language in design projects? What are the possible tradeoffs?

  • We can inadequately describe the worldviews of our designands (sorry, experimenting with a coinage), and save time and money at the expense of articulate understanding and design quality.
  • We can adequately describe the worldviews of our designands, and gain articulate understanding and design quality at the expense of time and money.
  • We can dispense with description of worldviews of our designands, and gain design quality for less time and less money, at the expense of articulate understanding.

Here’s a thought: when we write an ethnography, what we are really doing is designing language and models to help some particular audience cultivate some particular relationship with people of some culture. This sounds functionalist, but I think it sort of protects us from mere functionalism in the way that phenomenology protects metaphysics precisely by setting it outside the domain of its inquiry. This approach protects the dignity of informants by throwing out every pretense of comprehending them as people, and instead comprehending what is relevant to relating to them.

The role of design researcher

In most places I’ve worked, design research is conducted primarily or exclusively by people playing a researcher role. The researcher’s job is to learn all about the users of a system a team is preparing to design, to document what they have learned, and then to teach the design team what they need to know to design for these users. Often the information/experience architect(s) on the project will conduct the research and then shift from the researcher role to a designer role. Often content and visual designers will (optionally) observe some of the sessions as well. But it is understood that in the end, it will be the research findings and the testimony of the researcher that will inform the design in its various dimensions (IA, visual, content, etc.).

It is time to question this view of research. When a design feels dead-on perfect and there’s something about it that is deeply satisfying or even moving, isn’t it normally the case that we find that rightness defiant of description? Don’t we end up saying, “You just have to see it for yourself”? And when we want to introduce two friends, we might try to convey to each who the other is by telling stories, giving background facts or making analogies, but in the end we want our friends to meet and interact and know for themselves. Something about design and people — and I would argue, the best part — is lost in descriptions.

My view is that allowing researchers and research documentation to intercede between users and designers serves as a filter. Only that which lends itself to language (and, to the degree we try to be “objective”, to the kind of unexpressive and explicit language least suited to conveying the je ne sais quoi qualities that feed design rightness) can make it through this filter. In other words, design documentation, besides accounting for half the cost of research, not only provides little value, it subtracts value from the research.

What is needed is direct contact between designers and users, and this requires a shift in the role of the researcher and in research documentation. The role of researcher becomes much more of a facilitator role. The researcher’s job now is 1) to determine who the users are, 2) to ensure that research participants are representative users, which means increased screening responsibilities, 3) to create situations where designers can learn about users directly from the users themselves, not only explicitly but also tacitly, not only observationally but interactively, and 4) to help the designers interpret what they have learned and apply it appropriately to their designs.

In this approach, design documentation does not go away, but it does become less of the primary output of research, and more of a progress report about the research. The primary tangible output of the research should be design prototypes to test with users, to validate both the explicit and tacit understandings developed by the design team. But the real result of research is the understanding itself, which will enable the team to produce artifacts that will be indescribably right, seeing that this rightness has been conveyed directly to the team, not forced through the inadequate medium of description.