Category Archives: Design

Want creativity for real?

If you want creativity, here’s what you really need:

  1. The right approach.
    Business approaches problems in a way that’s good for many things, but generally not good for creativity. For one thing, everything’s decided in meetings, through explicit communication by words, numbers and images. Explicit communication will not produce creativity. For another, business loves step-by-step processes. When you say creative process, your average business person will think you’ve got some assembly-line string of techniques by which a creation is built up bit by bit. Trying to do things this way guarantees sterility. Creativity requires a lot of pre-verbal (or even permanently non-verbal) intuitive leaps which, though testable, are not provable, and these leaps cannot be constructed, extracted, extruded or in any way fabricated, but only prepared for, stimulated, coaxed, encouraged — all highly un-macho approaches, which will drive the average exec nuts waiting, and will tempt him to reach for the nearest convenient analytical tool to cut through the bullshit and dig out the golden egg.
  2. The right expectations.
    Let’s get this straight up front: Creativity is harrowing. It is non-linear, unpredictable, risky, and in practice often feels like shit. If your organization cannot handle this reality, you’ll have to compete with something other than meaningful differentiation — probably organizational effectiveness. That’s okay. A lot of companies find success that way. And like everyone, you’ll probably talk all about your revolutionary innovations and nobody’ll believe you, and you’ll do just fine. You’ll never be anything like Apple, Nike, Starbucks, Virgin, etc., etc., though.
  3. The right team.
    It is taboo to say this, but it is totally true, and you know it. Most people are not creative. Not only are they uncreative, but they’re creativity poison, because they cannot stand the feeling of being exposed to creative processes and do everything in their power to make that feeling go away (because of all the unpleasant characteristics listed in the point above). Putting the wrong people on a creative team will make creativity impossible. I don’t know why executives who pride themselves on their cold-eyed realism and their ability to make hard calls and all that go all mushy and sentimental on this point, but it would profit them to get realer, meaner and tougher here and staff the kitchen with people who can take the heat. But no. Everyone’s packed right in, and people are running around sweating and bitching about getting singed on the burner, and about how the raw eggs and the baking soda don’t taste like cake. It’s damn hard to get anything cooked.
  4. The right inputs.
    Many designers secretly or openly detest research. And they should. Because all most research does is tie a designer’s hands by telling them all the cool stuff they want to do won’t fly. It closes down possibilities. But if you were to give designers something that opens up possibilities by inspiring them to conceive totally new approaches, they’d eat it right up, because that is what designers live for. The type of research finding that opens up possibilities is an insight. Few marketing/insights departments know how to provide insight, even though they believe that providing customer “insights” is their core competency. When they say “insights”, what they mean is facts — information about customers — their stats, behaviors, needs, wants, attitudes, and what have you. Insights are not essentially factual, and they are often not even expressible in language at all. The best source of insights is actually exposure to concrete people, environments and situations, and the best expressions of those insights are often not words, graphs, or even cool diagrams, or anything else you might expect to find in a report, but rather ideas on what might work for those people in those environments and situations. But when this happens there’s always some process prig lying in wait, ready to tell them they’re “getting ahead of themselves” and that their ideas are premature. They’re wrong. These premature ideas are the expression of having an insight. Don’t get attached to the ideas, but do keep them, because they are raw insight ore that can be melted down, refined and articulated — or simply “gotten”.
  5. The right conditions.
    Creativity is not only ugly and temperamental, it is also needy and fragile. It needs protection, but protection of a kind that seems counter-intuitive. To protect creativity, you have to restrain yourself from protecting the participants from the painful effects. If a creative team is not struggling in the dark, suffering from intense anxiety, infighting, bickering, hating it, with no end in sight until the end is suddenly in sight, they’re not doing anything that will blow anyone’s mind. Let them suffer. But don’t add more pain. Don’t interrupt them with the chickenshit that you think is urgently important. Think of the creative team’s hell as a pressurized tank. Your interruption will puncture it and let out all the pressure and deflate what’s trying to happen. As if this weren’t already too much, there’s one more indulgence you should lavish on your ugly-ass creative process: provide decent space with room to draw, sit, stand, fight, walk comfortably, all with minimal outside stimuli. A rule of thumb: keep creative suffering pure of mundane contaminants.
  6. The right tests.
    The usual tests of validity in business cannot do justice to creative ideas: 1) demanding analytical justification for why something will work, and 2) submitting the idea to the semi-informed opinions of people sitting around in a conference room. This procedure is 95% certain to kill off ideas that would work and to support crappy ideas that should never have seen the light of day. The only legitimate way to test a creative idea is to prototype it and put it in front of real live human beings. After a prototype test is done and the idea survives it or its suckiness is exposed… then arguments for and against that idea and how it tested can be made.
  7. The right support.
    Creative ideas need support beforehand, in the form of all the items in this list. But you also need people to make the ideas happen. To execute. Such people are called “executives”. If you throw the responsibility for execution onto creatives, ideas will always be proved impractical, because creative vision and genius for execution are two entirely different talents that do not always coincide in the same personality. One of the great things about business is that we get to combine our talents in ways that cancel out our weaknesses and allow us to accomplish things that would otherwise be impossible. A smarter division of labor, based on more realistic psychology, that permits creatives to conceive visionary ideas and executives to execute and actualize them, would produce far more brilliant results.

Heidegger on the user

From Being and Time:

The work produced refers not only to the “towards-which” of its usability and the “whereof” of which it consists: under simple craft conditions it also has an assignment to the person who is to use it or wear it. The work is cut to his figure; he ‘is’ there along with it as the work emerges. Even when goods are produced by the dozen, this constitutive assignment is by no means lacking; it is merely indefinite, and points to the random, the average. Thus along with the work, we encounter not only entities ready-to-hand but also entities with Dasein’s kind of Being — entities for which, in their concern, the product becomes ready-to-hand; and together with these we encounter the world in which wearers and users live, which is at the same time ours. Any work with which one concerns oneself is ready-to-hand not only in the domestic world of the workshop but also in the public world. Along with the public world, the environing Nature [die Umweltnatur] is discovered and is accessible to everyone.

 

Diego Rodriguez’s 21 Innovation Principles

Outspiral process

I need to rethink my outspiral process and incorporate my recent insight that chaos has two different meanings, depending on whether it is applied to objective vs subjective truth.

  • Objective chaos is negative — vacuum: absence of order.
  • Subjective chaos is excessive positivity — infinitude: an unmanageable plurality of interfering orders that overwhelms all attempts at singular determination.

These two forms of chaos can occur together as total chaos, but they often do not. Partial chaos is more common, because it is more stable. Objective order will tolerate/promote/create subjective chaos to preserve itself. Subjective order will tolerate/promote/create objective chaos to preserve itself. Each form of partial chaos has its advantages, but those advantages are bought at a very high price.

My outspiral process is designed specifically to overcome stable partial chaos by drawing it into total chaos and then leading it through partial orders into a subjective-objective order. (I am avoiding the expression “total order” for obvious reasons. Fair warning…)

*

I recognize this line of thought in Roland Barthes, The Pleasure of the Text.

Imagine someone… who abolishes within himself all barriers, all classes, all exclusions, not by syncretism but by simple discard of that old specter: logical contradiction; who mixes every language, even those said to be incompatible; who silently accepts every charge of illogicality, of incongruity; who remains passive in the face of Socratic irony (leading the interlocutor to the supreme disgrace: self-contradiction) and legal terrorism (how much penal evidence is based on a psychology of consistency!). Such a man would be the mockery of our society: court, school, asylum, polite conversation would cast him out: who endures contradiction without shame? Now this anti-hero exists: he is the reader of text at the moment he takes his pleasure. Thus the Biblical myth is reversed, the confusion of tongues is no longer a punishment, the subject gains access to bliss by the cohabitation of languages working side by side: the text of pleasure is a sanctioned Babel.

This, of course, corresponds to Nietzsche’s concept of the Dionysian.

I’m reading The Pleasure of the Text on the basis of another conceptual recognition, the concept of readerly and writerly texts, a problem that has been central to my own thinking since 2003.

*

A painter uses pigments to create forms that draw the active viewer into his world.

A musician uses sounds for the same purpose. Nobody but a muzo listens to notes.

A philosopher uses truth assertions to draw the active thinker into his world. Philosophers are a species of artist, but because few people can see how truth and reality are not identical, their artistry is as invisible as the air we breathe.

*

“How’s the water, boys?”

 

Design problems vs engineering problems

Many problems are left unresolved because they are design problems misidentified and approached as engineering problems.

*

To conceive a situation as a design problem means to approach the situation with the intention of improving it, by acting into the situation with some kind of system that does something for someone.

Breaking the problem down into component parts:

  1. A design improves a situation, which means it requires some clarity on what constitutes an improvement. In other words, design is guided by some kind of ideal.
  2. A design acts into a situation, which means design fits into a larger context and becomes a constituent part of it.
  3. A design is a system, which means it is a set of interacting or interdependent components forming an integrated whole.
  4. A design does something, which means it performs specific functions.
  5. A design is intended for someone, which means the design is successful to the degree that it is valuable to the people for whom it is intended.

*

Engineering problems are embedded within design problems, as points three and four: a system that does something. To isolate this part of the problem from the context of the situation and to suspend consideration of the people for whom the system is intended is to define an engineering problem.

This makes the problem, as defined, easier to solve. But the ease comes at a cost: solutions that meet all the defined requirements yet fail to fit into the situation or to improve it significantly.
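A loose sketch of the embedding, with invented names purely for illustration (nothing here is canonical): the engineering problem is the inner pair of components, and the design problem wraps that pair in an ideal, a situation, and people.

```typescript
// Illustrative only: hypothetical names, not a real framework.

// Points 3 and 4: a system that does something.
interface EngineeringProblem {
  systemComponents: string[]; // interacting, interdependent parts forming a whole
  functions: string[];        // what the system is supposed to do
}

// Points 1, 2 and 5 tie the engineering core back into the situation.
interface DesignProblem extends EngineeringProblem {
  ideal: string;         // what would count as an improvement
  situation: string;     // the larger context the design acts into
  intendedFor: string[]; // the people for whom it must be valuable
}

// "Defining an engineering problem" = projecting away ideal, situation and people.
function isolateEngineeringProblem(p: DesignProblem): EngineeringProblem {
  const { systemComponents, functions } = p;
  return { systemComponents, functions };
}
```

The only point of the sketch is that the easier problem is a projection of the harder one, and whatever gets dropped in the projection has to be recovered later, if at all.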

Pain of innovation

The primary obstacle to innovation of every kind is the pain of philosophy, which begins as angst before blooming into perplexity.

*

We don’t hate new ideas because they’re new.

We don’t even hate new ideas because they displace beloved old ideas.

We hate new ideas because they require the creation of conceptual vacuum before we can understand them.

A conceptual vacuum is not like empty space. It is empty of articulated order, which means it overflows with everything-at-once. It is chaos.

*

The depth of an idea means: “how much forgetting does it require in order to be understood?”

More depth = more forgetting = more pain.

*

Real innovation is the product of deep thought. That is, it involves forgetting the conventional wisdom of some realm of activity, re-conceiving it, and thinking out the consequences. This alone generates new ideas capable of inspiring people.

But most people have no taste for thinking, much less thinking in depth. They see thinking and doing — and especially creative doing — as opposed. To this sensibility, disciplined thought and research — anything that seems to question or negate — can only encumber the creative process, which is understood to be purely positive. So the method is to brainstorm the maximum number of ideas possible — very deliberately excluding thought.

What comes from this process is usually large heaps of uninspiring cleverness, which gets translated into forgettable products, services and marketing.

Doing something really different requires a hell of a lot more than ingenuity. It requires the courage to take the preliminary step of “thinking different”, and then the faith to relentlessly execute upon the new thinking. We reject what comes before and after, and pay attention only to the easy middle part.

  • Before: Philosophy
  • Middle: Ideation
  • After: Operationalization

*

The very deepest ideas draw us into the underworld of mind. To grasp them we must cross the river of forgetfulness, and then grope through limbo, without boundary stones, maps, compasses or stars to guide us. If we look back, all is lost. We are trapped in the old life, rooted to the institutional view, pillars of respectability.

*

When we rethink how we think, we gain freedom of movement, first in mind, then in body.

Practical philosophical reductionism

Less than a month ago I observed that I’d collected three anti-method books in my library (After Method, Beyond Method, Against Method) and noted the absence of Before Method, Within Method, For Method.

I forgot that I also own For and Against Method, which is half argument for method, and Truth and Method, which argues against the existence of any universally valid hermeneutic technique.

*

I did not set out to collect books on method. I own these books because the concept of good method is one of the most effective (because it is the least questioned/questionable) vehicles for enforcing practical philosophical reductionism.

We fail to recognize how aggressive this is, partly because we tend to harbor a monistic orientation to “best”. We are seeking the best way, and if someone has already found it, we should set aside our own semi-articulate objections, preferences and intuitions and resist the temptation to “reinvent the wheel.”

This aligns with a general moral preference for self-effacement. We are eager to show that we can put our own preferences aside in the interest of a better outcome. This is admirable — if you’ve actually established the superiority of the less preferred method. But all too often we adopt a “pain, therefore gain” attitude that does nobody a bit of good.

Then, of course, many people don’t want to think philosophically. They just want to figure out what they’re doing, so they can get down to the doing. Thought is an unpleasant necessity that precedes making and executing plans. For such minds, method eliminates a lot of crap they didn’t want to do anyway. Re-considering method introduces unwelcome extra work of a kind they’d prefer not to deal with. It’s like making them slaughter the cow that will become their tasty cheeseburger. They’d rather just slap it on the grill, already.

Then finally there’s the “involvement anxiety” toward participatory understanding that’s endemic to the human sciences. We badly want to know without our own selves figuring into the equation. Even people who pride themselves on accounting for context when studying human subjectivity often want to subtract themselves out of the context they do create — and must create — through their practice. A practitioner’s preference of method is taken to be a subjective impurity to be removed in the attempt to understand others’ subjectivity more objectively.

It reminds me of a passage from Italo Calvino’s Invisible Cities:

After a seven days’ march through woodland, the traveler directed toward Baucis cannot see the city and yet he has arrived. The slender stilts that rise from the ground at a great distance from one another and are lost above the clouds support the city. You climb them with ladders. On the ground, the inhabitants rarely show themselves: having already everything they need up there, they prefer not to come down. Nothing of the city touches the earth except those long flamingo legs on which it rests and, when the days are sunny, a pierced, angular shadow that falls on the foliage.

There are three hypotheses about the inhabitants of Baucis: that they hate the earth; that they respect it so much they avoid all contact; that they love it as it was before they existed and with spyglasses and telescopes aimed downward they never tire of examining it, leaf by leaf, stone by stone, ant by ant, contemplating with fascination their own absence.

 

Chain of differentiations

Differentiated brands are rooted in differentiated offerings.

Differentiated offerings are rooted in differentiated strategies.

Differentiated strategies are rooted in differentiated operations.

Differentiated operations are rooted in differentiated organizational structures.

Differentiated organizational structures are rooted in differentiated roles.

Differentiated roles are rooted in differentiated personalities.

(By “differentiated personality”, I mean having a personality that doesn’t easily fit into a standard professional role definition.)

*

Undifferentiated brands have it easier.

They have an easier time explaining themselves because they conform to expectations customers have already learned from their competitors. They have an easier time explaining their offerings because the offerings differ from others by well-established attributes. (“Ours is cheaper,” or faster, or lighter, or easier to use, or whatever.) They don’t have to put too much work into strategy, nor do they have to take risks with untried approaches to solving new problems. Instead they can assemble their strategy from readily available and well-proven best practices. The same is true for their operations and hiring. They’ll find ready-made employees with ready-made knowledge of how to do things, who can just plug right into place and do their thing with no training required and no adjustment to idiosyncrasies. Plug the role into the hole, flip the switch, and out come industry-standard deliverables.

For all these reasons, and more, few companies choose to differentiate. Entire industries lack real differentiated brands. And it all works out fine, until it stops working.

Confabulation of method

It’s interesting to see how confabulation of method can pull in two opposite directions: toward excessive exaltation of algorithmic methods, often in the name of “being scientific” (a notion which has been exploded by such thinkers as Thomas Kuhn, Bruno Latour and John Law), and toward dismissal of method in favor of excessive exaltation of intuition.

Both types of confabulation obscure the true path to success: those who subscribe to these confabulations and try to put them into faithful practice will find that path mysteriously obstructed.

Experience research, strategy, design

Experience research helps an organization learn about other people so it can respond to their needs, perceptions and personalities more thoughtfully. The hope is that the research will turn up some new tidbit or constellation of tidbits (sometimes dead simple, sometimes complex) that so far everyone else has overlooked.

These insights, as we usually call them, enable the organization that commissioned the research to gain first-mover status in addressing and satisfying unmet needs, or in finding better ways to satisfy needs.

Experience research also helps us form clearer images of the types of people who will use a design. These images — usually in the form of personas — serve a number of purposes. They both inspire and guide team members’ intuition during ideation and design. They are also a valuable critical tool, useful for reality-checking design approaches at the macro- and micro-level, and for assessing the probable effectiveness of candidate concepts and designs in order to narrow the possibilities to the most promising (hopefully prior to testing).

The primary use of experience research is to inform experience strategy and experience design to help organizations provide better customer experiences. Better how? Better, as defined by the customer. Why? Because if the customer thinks the experience is better, 1) the customer will, in the near term, behave in a way that profits the organization (supporting the business strategy), and 2) the organization will earn the loyalty of the customer (building brand equity).

This view of experience research and experience strategy and design is, to the best of my knowledge, the predominant one.

This vision of experience research/strategy/design is inadequate. Something elusive but essential is omitted.

 

Confabulated norms

Jonathan Haidt’s excellent and very accessible The Happiness Hypothesis describes a fascinating phenomenon called confabulation, which, to put it simply, means that we often do not really understand the processes that drive our own behaviors, but despite this fact we unhesitatingly and innocently invent fictional explanations.

The concept of confabulation is not new. Nietzsche, for instance, observed it and ridiculed it from a hundred angles. Haidt, however, scientifically isolates the phenomenon, and promotes it from a very probable suspicion to a demonstrated fact: our own explanations of why we do things are often pure speculation. I can testify as a usability tester that we also confabulate how we do things.

Basically, any tacit mental process — any activity of the mind that cannot speak for itself — will be spoken for by the part of the mind that verbalizes, knows only verbalization and refuses to consider real anything that is not verbalized.

*

All this is fascinating enough, but I’m interested in something far more practical.

I’m interested in that next step we take when we accomplish something really admirable.

We ask: “How was that accomplished?”

And we confabulate an answer: “I followed my method.”

The confabulated method becomes a norm — a best practice — and is then imposed on others.

After all, hasn’t this method been shown to be effective? It is a reliable route to success.

*

Sometimes this imposition of method is resisted on the grounds that the full context is not being considered. It is not applicable to certain types of problems (this method will not be effective in this situation), or, less commonly, to certain temperaments of practitioners (this method might work great for you, given your cognitive style and background, but it might not be as helpful to this other person who is different from you in many ways.)

But confabulation opens up a whole other can of worms. Maybe the method didn’t cause the success. Maybe the method enabled some other tacit process to unfold in its own mysterious way. Maybe the method simply didn’t harm the tacit process, but gave it some cover of respectability. Or maybe the tacit processes happened despite the method. OR — maybe the method actually diminished the result, but not so completely that it ended in failure.

*

Think about how decisions are made in most organizations. A group of people sit around in a room and try to verbalize what ought to be done. The group wants to verbally understand what is about to happen. The group wants to know what will be done, how it will be done and why it can be expected to work.

*

I’ve been reading literature from the field of Science and Technology Studies. Practitioners of STS use ethnographic research methods to watch how science is actually done. What they see confirms what Thomas Kuhn also saw: Science tends to suppress much of the experience and behavior of scientists, and to emphasize the discoveries — not only in scientific writing, but also in accounts of how science is done. The histories of science are rewritten in such a way that progress to the present appears straight and steady.

Kuhn:

Textbooks, however, being pedagogic vehicles for the perpetuation of normal science, have to be rewritten in whole or in part whenever the language, problem-structure, or standards of normal science change. In short, they have to be rewritten in the aftermath of each scientific revolution, and, once rewritten, they inevitably disguise not only the role but the very existence of the revolutions that produced them. (Kuhn, Structure of Scientific Revolutions)

Latour:

On June 2, 1881, in the little village of Pouilly-le-Fort in Beauce, Louis Pasteur defeated a terrible disease of sheep and cows, called anthrax. A friend of Pasteur’s gives this account: “Pouilly-le-Fort is as famous today as any other battlefield. Monsieur Pasteur, a new Apollo, was not afraid to deliver oracles, more certain of success than that child of poetry would be. In a program laid out in advance, everything that was to happen was announced with a confidence that simply looked like audacity, for here the oracle was pronounced by science itself, that is to say, it was the expression of a long series of experiments, of which the unvarying constancy of the results proved with absolute certainty the truth of the law discovered” (Bouley: 1883, p. 439). The strategy was conceived entirely in advance; Pasteur concocted it and had every detail figured out; it went according to plan, following a strict order of command from Pasteur to the sheep by way of his assistants and the caretakers. (Latour, The Pasteurization of France)

The cash value of this idea?

What we understand to be scientific is not actually how science is accomplished.

My position is that the same is true in nearly every sphere of human activity, and doubly so wherever creativity happens. This includes education, management, design, social research — basically every area of life where people are especially maniacal about method and most aggressively impose processes, standards, protocols and norms of every kind on one another.

Here’s how it goes:

  1. New ideas are conceived in intuitive leaps.
  2. The leaped-to ideas are tested in some way or another, artificially or in actuality.
  3. The leaps that pass the test are considered leaps forward to a goal.
  4. The leap forward is then traced backwards and rationalized. Reason creeps bit by bit from the goal to the origin, and attempts to account for the distance traversed in an unbroken chain of explanations.
  5. Then cause and effect are reversed. The story of the leap is confabulated. It is retold as a story of a steady and rational creeping forward toward a goal.
  6. The story makes perfect sense, and is accepted as the true account of the success.
  7. The creeping story is then formalized into a method, and imposed as a norm.
  8. Further attempts at progress are evaluated against their similarity to the proven method.
  9. Those who have strong belief in the method and who follow it faithfully produce respectable but unspectacular results. Those who ignore the method and flaunt that fact win little institutional support. Those who play the method game, but who leave themselves intuitive freedom win the most success.

*

I’ve had the unnerving experience of being forced to improvise when method failed, and succeeding — but discovering afterward that my success was attributed to method, and that nothing I could say would persuade those who saw method where there was none that my success was fortunate (and easily could have been otherwise) and that none of it had a thing to do with following method. Had my improvisation failed, there is no doubt in my mind it would have been blamed on my deviation from method.

*

I think most methods are sheer chickenshit (in the technical sense).

I think most successes are accomplished by what most people would call bullshit. “Eureka” moments. Apples hitting the head. Ideas in the shower.

The key is entirely in testing — to establish that the leap is a good one — and then in the rational creep backwards to account for why the idea makes sense — but NOT as the method for how it was accomplished!

*

People who refuse to leap, out of methodological conscience, are depriving themselves of the pleasure of creativity. They limit themselves to incremental innovation.

People who leap without testing the leap deprive their sponsors of reasonable assurance. There’s nothing wrong with jumping to conclusions. All creative conclusions — good and bad — are jumped to. The key is to test them before acting on them. Whether they turn out for the better or for the worse, any untested leap is reckless.

If you rationalize the successful leaps and figure out what made them work, you might discover principles that can fuel future leaps, and you can also integrate the accomplishment into the organization’s body of knowledge. There’s value in the creep backwards.

BUT: do not reverse cause and effect and require everyone to demonstrate how they will creep to success before they are permitted to move.

*

If you hate dumb puns, stop reading now.


Experience planning

The primary task of experience planning is to provide designers with precision inspiration for making long and efficacious intuitive leaps. The secondary task is to provide criteria of efficacy for this particular project, by which intuitive leaps can be evaluated. This efficacy will always involve one or more user segments, a brand, and a relationship between user and brand in a complex use context.

 

Design thinking

Design thinking, though slightly more expansive than typical management thinking, still remains within the horizons of utilitarianism. To put it in Hannah Arendt’s language, the designer type still falls within the category homo faber.

*

There’s doing what’s always done. Execution.

There’s thinking about doing what’s always done. Management.

There’s rethinking what’s always done in order to find a better way of doing. Design thinking.

There’s rethinking our thinking: how we think about what we do…

There’s rethinking ought: why we do what we do…

*

“There are so many days that have not yet broken”

A short-lived fashion from the turn of the millennium?

At the end of her 2000 article “Ethnography in the Field of Design”, Christina Wasson issued some warnings:

Although ethnography enjoys a great deal of popularity in the design field at present, I want to close with a cautionary eye to the future. Observing a similar phenomenon in CSCW, Hughes et al. (1994:437) noted: “Ethnography is currently fashionable in CSCW, but if it is to survive this kind of attention then it is important that the method find an effective voice rather than remaining content with ephemeral celebrity.”

. . .

Ten years from now, will ethnography be regarded as a short-lived fashion from the turn of the millennium? Its staying power depends on its ability to accurately purvey a unique kind of useful information to designers. And while the details of design firms’ ethnographic practices may not be public, there is a widespread sense among anthropologists in the design community that the quality of these firms’ research varies widely. The popularity of the approach has led a number of design firms to claim they offer “ethnography” even though none of their employees has a degree in anthropology or a related discipline. Sometimes researchers trained in cognitive psychology adopt observational methods; sometimes designers themselves do the observation. Such design firms are not necessarily averse to hiring anthropologists; they may have been unable to find ones with an adequate knowledge of the private sector.

As a consequence, the concept of ethnography has become a “pale shadow of itself” (Wasson n.d.). In its most emaciated form, the term is simply used to refer to a designer with a video camera. Even in somewhat richer versions, the term has become closely identified with the act of observing naturally occurring consumer behaviors. The need to analyze those behaviors and situate them in their cultural context is poorly understood, even though these activities are essential parts of developing a model of user experience that leads to targeted and far-reaching design conclusions. The anthropological apparatus that stands behind ethnography — the self-reflexivity of participant observation, the training in theory that enables fieldworkers to identify patterns — these are poorly understood in the design field. Indeed, the association between ethnography and anthropology is not widely known. The term “anthropology” is almost never heard. Even Chicago’s Institute of Design, whose faculty has a fairly sophisticated understanding of the topic, describes ethnographic observation merely as “a method borrowed from social science research” on its Web site (Institute of Design 1997).

The tendency for design firms to skimp on analysis is due in part to financial pressures. It can be hard to persuade clients to fund adequate labor time for researchers to develop well-grounded interpretations. Merely claiming to do ethnography costs little; actually conducting substantive anthropological research is much more expensive. Clients are also chronically in a hurry and press for immediate results. Nonetheless, my worry is that the design firms that skimp on analysis will tend to produce less interesting results. In the long run, this could lead to the perception that ethnography doesn’t have much to offer after all. If that should happen — and I certainly hope it does not — an opportunity for anthropologists to help construct the world around us will have been lost. Those of us who are active in the design field can address this issue in several ways. First of all, it is my hope that the mechanism of the market may actually be of use and that anthropologists can create positive publicity for themselves by doing good work on their projects. It seems possible, at least, that clients will realize, over time, that the findings of design firms engaging in richer forms of ethnography outshine the findings of other firms. E-Lab/Sapient’s continued growth is a hopeful sign. . . .

 

Design and the social system

Talcott Parsons, from The Social System:

Reduced to the simplest possible terms, then, a social system consists in a plurality of individual actors interacting with each other in a situation which has at least a physical or environmental aspect, actors who are motivated in terms of a tendency to the “optimization of gratification” and whose relation to their situations, including each other, is defined and mediated in terms of a system of culturally structured and shared symbols.

This is practically an inventory of the elements uncovered in design research.

  • Actors (which Parsons divides into ego, equivalent to “the user” in UX parlance, and alter, which UX treats as an element of social context)
  • Social context (relationships between actors)
  • Physical context (environment)
  • Needs (a.k.a. “optimization of gratification”)
  • Behaviors (involvement with events and artifacts in physical context)
  • Interactions (involvement with actors in social context)
  • Mental models (the defining/mediating symbol structure)
  • Signs (I’m adding this one: affordances that link mental models with real-world events and artifacts)
  • Symbols (I’m adding this one, too: indicators of the value-significance of real-world events and artifacts)
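As a rough sketch only (the names below are mine, not Parsons’ and not any standard research schema), the inventory could be written down as a simple data model:

```typescript
// Hypothetical, illustrative data model of the inventory above.

type ActorRole = "ego" | "alter"; // "the user" vs. the people around the user

interface Actor {
  id: string;
  role: ActorRole;
}

interface ResearchModel {
  actors: Actor[];
  socialContext: string[];   // relationships between actors
  physicalContext: string[]; // the environment
  needs: string[];           // "optimization of gratification"
  behaviors: string[];       // involvement with events and artifacts in the physical context
  interactions: string[];    // involvement with actors in the social context
  mentalModels: string[];    // the defining/mediating symbol structure
  signs: string[];           // affordances linking mental models to real-world events and artifacts
  symbols: string[];         // indicators of the value-significance of those events and artifacts
}
```

Nothing hangs on these particular fields; the point is only how directly Parsons’ terms map onto what design research already collects.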

 

Decision-making scenarios

Scenario 1 (thesis)

A: “Maybe this will work…”

B: “Before we commit the effort, can you explain how it will work, assuming it might, keeping in mind we have limited time and money?”

A: “I think so. Give me a day.”

B: “We don’t have a day to spare on something this speculative. Let’s come up with something a little more baked.”

… and [eventually, inevitably]

B: “So, what are the best practices?”

Scenario 2 (antithesis)

A: “I have a hunch this will work. Let’s go with it.”

B: “Can you explain how it will work?”

A: “Trust my professional judgment. My talent, training, experience, [role, title, awards, track record, accomplishments, etc.] distinguish my hunches.”

Scenario 3 (synthesis)

A: “I have a hunch this might work. Hang on.” … “Whoa. It did work. Look at that.”

B: “How in the world did that work?”

A: “I don’t know. Let’s try to figure out why.”

Shhhhhhh

Here’s what I learned from the Pragmatists (mostly via Richard J. Bernstein, who has probably had a deeper and more practical impact on how I think, work and live than any other author I’ve read): An awful lot of what we do is done under the guidance of tacit know-how.

After we complete an action we are sometimes able to go back and account for what we did, describing the why, how and what of it — and sometimes our descriptions are even accurate. But to assume — as we nearly always do — that this sort of self-account is in some way identical to what brought these actions about or even what guided them after they began is an intellectual habit that only occasionally leads us to understanding. Many such self-accounts are only better-informed explanations of observed behaviors of oneself, not reports on the actual intellectual process that produced the behaviors.

To explain this essential thoughtlessness in terms of “unconscious thoughts” that guide our behavior as conscious ones supposedly do in lucid action is to use a superstitious shim-concept to maintain this mental/physical cause-and-effect framework in the face of contrary evidence. I do believe in unconscious ideas that guide our thoughts and actions (in fact I’m attempting to expose one right here), but I do not think they take the form of undetected opinions or theories. Rather they take the form of intellectual habits. They’re moves we just make with our minds… tacitly. Often, we can find an “assumption” consequent to this habitual move and treat this assumption as causing it, but this is an example of the habit itself. It is not the assumption that there is a cause that makes us look for the cause; it is the habitual way of approaching such problems that makes us look for an undetected opinion at the root of our behaviors. We don’t know what else to do. It’s all we know how to do.

*

I’m not saying all or even most behavior is tacit, but I do believe much of it is, and particularly when we are having positive experiences. We generally enjoy behaving instinctually, intuitively and habitually.

*

Problems arise mainly when one instinct or intuition or habit interferes with the movements of another. It is at these times we must look into what we are doing and see what is unchangeable, what is variable and what our options are in reconciling the whole tacit mess. The intellectual habit of mental-cause-physical-effect thinking is an example of such a situation. Behind a zillion little hassles that theoretically aren’t so big — no bigger than a mosquito buzzing about your ears — is the assumption that we can just insert verbal interruptions into our stream of mental instructions that govern our daily doings without harming these doings. As I’ve said before, I do think some temperaments operate this way (for instance, temperaments common among administrators and project managers), but for other temperaments such assumptions are at best wrong, and at worst lead to practices that interfere with their effectiveness.

Software design and business processes guided by this habit of thought tend to be sufficient for verbal thinkers accustomed to issuing themselves instructions and executing them, but clunky, graceless and obtrusive to those who need to immerse themselves in activity.

*

It is possible that the popular “think-aloud” technique in design research is nothing more than a methodology founded on a leading question: “What were you thinking?” A better question would be: “Were you thinking?”

*

The upshot of all this: We need to learn to understand how the various forms of tacit know-how work, how to research them, how to represent them in a way that does not instantly falsify them, and how to respond to them. And to add one more potentially controversial item to this list: how to distinguish consequential and valuable findings documentation from mere thud-fodder, which does nothing in the way of improving experiences but only reinforces the psychological delusions of our times. If research can shed this inheritance of its academic legacy — that the proper output of research is necessarily a publication, rather than a direct adjustment of action — research can take a leaner, less obtrusively linear role in the design process.

The role of design researcher

In most places I’ve worked, design research is conducted primarily or exclusively by people playing a researcher role. The researcher’s job is to learn all about the users of a system a team is preparing to design, to document what they have learned and then to teach the design team what they need to know to design for these users. Often the information/experience architect(s) on the project will conduct the research then shift from the researcher role to a designer role. Often content and visual designers will (optionally) observe some of the sessions as well. But it is understood that in the end, it will be the research findings and the testimony of the researcher that will inform the design in its various dimensions (IA, visual, content, etc.).

It is time to question this view of research. When a design feels dead-on perfect and there’s something about it that is deeply satisfying or even moving, isn’t it normally the case that we find that rightness defiant of description? Don’t we end up saying, “You just have to see it for yourself”? And when we want to introduce two friends, we might try to convey to them who the other person is by telling stories, giving background facts or making analogies, but in the end we want our friends to meet and interact and know for themselves. Something about design and people — and I would argue, the best part — is lost in descriptions.

My view is that allowing researchers and research documentation to intercede between users and designers serves as a filter. Only that which lends itself to language (and, to the degree we try to be “objective”, to the kind of unexpressive and explicit language least suited to conveying the je ne sais quoi qualities that feed design rightness) can make it through this filter. In other words, design documentation, besides being half the cost of the research, not only provides little value, it subtracts value from the research.

What is needed is direct contact between designers and users, and this requires a shift in the role of the researcher and in research documentation. The role of researcher would become much more of a facilitator role. The researcher’s job now is 1) to determine who the users are, 2) to ensure that research participants are representative users, which means their screening responsibilities are increased, 3) to create situations where designers can learn about users directly from the users themselves, not only explicitly but also tacitly, not only observationally but interactively, and 4) to help the designers interpret what they have learned and apply it appropriately to their designs.

In this approach, design documentation does not go away, but it becomes less the primary output of research and more a progress report on the research. The primary tangible output of the research should be design prototypes to test with users, to validate both the explicit and tacit understandings developed by the design team. But the real result of research is the understanding itself, which will enable the team to produce artifacts that are indescribably right, since that rightness has been conveyed directly to the team rather than forced through the inadequate medium of description.