
Design problems vs engineering problems

Many problems are left unresolved because they are design problems misidentified and approached as engineering problems.

*

To conceive a situation as a design problem means to approach the situation with the intention of improving it, by acting into the situation with some kind of system that does something for someone.

Breaking the problem down into component parts:

  1. A design improves a situation, which means it requires some clarity on what constitutes an improvement. In other words, design is guided by some kind of ideal.
  2. A design acts into a situation, which means design fits into a larger context and becomes a constituent part of it.
  3. A design is a system, which means it is a set of interacting or interdependent components forming an integrated whole.
  4. A design does something, which means it performs specific functions.
  5. A design is intended for someone, which means the design is successful to the degree that it is valuable to the people for whom it is intended.

*

Engineering problems are embedded within design problems, as points three and four: a system that does something. To isolate this part of the problem from the context of the situation and to suspend the consideration of the people for whom the system is intended is to define an engineering problem.

This makes the problem, as it is defined, easier to solve. But it comes at a cost: solutions that meet all defined requirements yet fail to fit into the situation or improve it significantly.

Pain of innovation

The primary obstacle to innovation of every kind is the pain of philosophy, which begins as angst before blooming into perplexity.

*

We don’t hate new ideas because they’re new.

We don’t even hate new ideas because they displace beloved old ideas.

We hate new ideas because they require the creation of conceptual vacuum before we can understand them.

A conceptual vacuum is not like empty space. It is empty of articulated order, which means it overflows with everything-at-once. It is chaos.

*

The depth of an idea means: “how much forgetting does it require in order to be understood?”

More depth = more forgetting = more pain.

*

Real innovation is the product of deep thought. That is, it involves forgetting the conventional wisdom of some realm of activity, re-conceiving it, and thinking out the consequences. This alone generates new ideas capable of inspiring people.

But most people have no taste for thinking, much less thinking in depth. They see thinking and doing — and especially creative doing — as opposed. To this sensibility, disciplined thought and research — anything that seems to question or negate — can only encumber the creative process, which is understood to be purely positive. So the method is to brainstorm the maximum number of ideas possible, very deliberately excluding thought.

What comes from this process is usually large heaps of uninspiring cleverness, which gets translated into forgettable products, services and marketing.

Doing something really different requires a hell of a lot more than ingenuity. It requires the courage to take the preliminary step of “thinking different”, and then the faith to relentlessly execute upon the new thinking. We reject what comes before and after, and pay attention only to the easy middle part.

  • Before: Philosophy
  • Middle: Ideation
  • After: Operationalization

*

The very deepest ideas draw us into the underworld of mind. To grasp them we must cross the river of forgetfulness, and then grope through limbo, without boundary stones, maps, compasses or stars to guide us. If we look back, all is lost. We are trapped in the old life, rooted to the institutional view, pillars of respectability.

*

When we rethink how we think, we gain freedom of movement, first in mind, then in body.

Practical philosophical reductionism

Less than a month ago I observed I’d collected three anti-method books in my library: After Method, Beyond Method, Against Method, and noted the absence of Before Method, Within Method, For Method.

I forgot that I also own For and Against Method, which is half argument for method, and Truth and Method, which argues against the existence of any universally valid hermeneutic technique.

*

I did not set out to collect books on method. I own these books because the concept of good method is one of the most effective (because it is the least questioned/questionable) vehicles for enforcing practical philosophical reductionism.

We fail to recognize how aggressive this is, partly because we tend to harbor a monistic orientation to “best”. We are seeking the best way, and if someone has already found it, we should set aside our own semi-articulate objections, preferences and intuitions and resist the temptation to “reinvent the wheel.”

This aligns with a general moral preference for self-effacement. We are eager to show that we can put our own preferences aside in the interest of a better outcome. This is admirable — if you’ve actually established the superiority of the less preferred method. But all too often we adopt a “pain, therefore gain” attitude that does nobody a bit of good.

Then, of course, many people don’t want to think philosophically. They just want to figure out what they’re doing, so they can get down to the doing. Thought is an unpleasant necessity that precedes making and executing plans. For such minds, method eliminates a lot of crap they didn’t want to do anyway. Re-considering method introduces unwelcome extra work of a kind they’d prefer not to deal with. It’s like making them slaughter the cow that will become their tasty cheeseburger. They’d rather just slap it on the grill, already.

Then finally there’s the “involvement anxiety” toward participatory understanding that’s endemic to the human sciences. We badly want to know without our own selves figuring into the equation. Even people who pride themselves on accounting for context when studying human subjectivity often want to subtract themselves out of the context they do create — and must create — through their practice. A practitioner’s preference of method is taken to be a subjective impurity to be removed in the attempt to understand others’ subjectivity more objectively.

It reminds me of a passage from Italo Calvino’s Invisible Cities:

After a seven days’ march through woodland, the traveler directed toward Baucis cannot see the city and yet he has arrived. The slender stilts that rise from the ground at a great distance from one another and are lost above the clouds support the city. You climb them with ladders. On the ground, the inhabitants rarely show themselves: having already everything they need up there, they prefer not to come down. Nothing of the city touches the earth except those long flamingo legs on which it rests and, when the days are sunny, a pierced, angular shadow that falls on the foliage.

There are three hypotheses about the inhabitants of Baucis: that they hate the earth; that they respect it so much they avoid all contact; that they love it as it was before they existed and with spyglasses and telescopes aimed downward they never tire of examining it, leaf by leaf, stone by stone, ant by ant, contemplating with fascination their own absence.

 

Chain of differentiations

Differentiated brands are rooted in differentiated offerings.

Differentiated offerings are rooted in differentiated strategies.

Differentiated strategies are rooted in differentiated operations.

Differentiated operations are rooted in differentiated organizational structures.

Differentiated organizational structures are rooted in differentiated roles.

Differentiated roles are rooted in differentiated personalities.

(By “differentiated personality”, I mean having a personality that doesn’t easily fit into a standard professional role definition.)

*

Undifferentiated brands have things easier.

They have an easier time explaining themselves because they conform to expectations customers have already learned from their competitors. They have an easier time explaining their offerings because the offerings differ from others by well-established attributes. (“Ours is cheaper.” Or faster, or lighter, or easier to use, or whatever.) They don’t have to put too much work into strategy, nor do they have to take risks with untried approaches to solving new problems. Instead they can assemble their strategy from readily-available and well-proven best practices. The same is true for their operations and hiring. They’ll find ready-made employees with ready-made knowledge of how to do things, who can just plug right into place and do their thing with no training required, and no adjustment to idiosyncrasies. Plug the role into the hole, flip the switch, and out come industry-standard deliverables.

For all these reasons, and more, few companies choose to differentiate. Entire industries lack real differentiated brands. And it all works out fine, until it stops working.

Confabulation of method

It’s interesting to see how confabulation of method can result in excessive exaltation of algorithmic methods, often in the name of “being scientific” (a notion which has been exploded by such thinkers as Thomas Kuhn, Bruno Latour and John Law) and also in the opposite direction: dismissal of method in favor of excessive exaltation of intuition.

Both types of confabulation obscure the true path to success. Those who subscribe to these confabulations and try to put them into faithful practice will find the path to success mysteriously obstructed.

Experience research, strategy, design

Experience research helps an organization learn about other people so it can respond to their needs, perceptions and personalities more thoughtfully. The hope is that the research will turn up some new tidbit or constellation of tidbits (sometimes dead simple, sometimes complex) that so far everyone else has overlooked.

These insights, as we usually call them, enable the organization that commissioned the research to gain first-mover status in addressing and satisfying unmet needs, or in finding better ways to satisfy needs.

Experience research also helps us form clearer images of the types of people who will use a design. These images — usually in the form of personas — serve a number of purposes. They both inspire and guide team members’ intuition during ideation and design. They are also a valuable critical tool, useful for reality-checking design approaches at the macro- and micro-level, and for assessing the probable effectiveness of candidate concepts and designs in order to narrow the possibilities to the most promising (hopefully prior to testing).

The primary use of experience research is to inform experience strategy and experience design to help organizations provide better customer experiences. Better how? Better, as defined by the customer. Why? Because if the customer thinks the experience is better, the customer will 1) behave in the near term in ways that profit the organization (supporting the business strategy), and 2) reward the organization with loyalty (building brand equity).

This view of experience research and experience strategy and design is, to the best of my knowledge, the predominant one.

This vision of experience research/strategy/design is inadequate. Something elusive but essential is omitted.

 

Confabulated norms

Jonathan Haidt’s excellent and very accessible Happiness Hypothesis describes a fascinating phenomenon called confabulation, which, to put it simply, means that we often do not really understand the processes that drive our own behaviors, but despite this fact we unhesitatingly and innocently invent fictional explanations.

The concept of confabulation is not new. Nietzsche, for instance, observed it and ridiculed it from a hundred angles. Haidt, however, scientifically isolates the phenomenon, and promotes it from a very probable suspicion to a demonstrated fact: our own explanations of why we do things are often pure speculation. I can testify as a usability tester that we also confabulate how we do things.

Basically, any tacit mental process — any activity of the mind that cannot speak for itself — will be spoken for by the part of the mind that verbalizes, knows only verbalization and refuses to consider real anything that is not verbalized.

*

All this is fascinating enough, but I’m interested in something far more practical.

I’m interested in that next step we take when we accomplish something really admirable.

We ask: “How was that accomplished?”

And we confabulate an answer: “I followed my method.”

The confabulated method becomes a norm — a best practice — and is then imposed on others.

After all, hasn’t this method been shown to be effective? It is a reliable route to success.

*

Sometimes this imposition of method is resisted on the grounds that the full context is not being considered. It is not applicable to certain types of problems (this method will not be effective in this situation), or, less commonly, to certain temperaments of practitioners (this method might work great for you, given your cognitive style and background, but it might not be as helpful to this other person who is different from you in many ways.)

But confabulation opens up a whole other can of worms. Maybe the method didn’t cause the success. Maybe the method enabled some other tacit process to unfold in its own mysterious way. Maybe the method simply didn’t harm the tacit process, but gave it some cover of respectability. Or maybe the tacit processes happened despite the method. OR — maybe the method actually diminished the result, but not so completely that it ended in failure.

*

Think about how decisions are made in most organizations. A group of people sit around in a room and try to verbalize what ought to be done. The group wants to verbally understand what is about to happen. The group wants to know what will be done, how it will be done and why it can be expected to work.

*

I’ve been reading literature from the field of Science and Technology Studies. Practitioners of STS use ethnographic research methods to watch how science is actually done. What they see confirms what Thomas Kuhn also saw: Science tends to suppress much of the experience and behavior of scientists, and to emphasize the discoveries — not only in scientific writing, but also in accounts of how science is done. The histories of science are rewritten in such a way that progress to the present appears straight and steady.

Kuhn:

Textbooks, however, being pedagogic vehicles for the perpetuation of normal science, have to be rewritten in whole or in part whenever the language, problem-structure, or standards of normal science change. In short, they have to be rewritten in the aftermath of each scientific revolution, and, once rewritten, they inevitably disguise not only the role but the very existence of the revolutions that produced them. (Kuhn, Structure of Scientific Revolutions)

Latour:

On June 2, 1881, in the little village of Pouilly-le-Fort in Beauce, Louis Pasteur defeated a terrible disease of sheep and cows, called anthrax. A friend of Pasteur’s gives this account: “Pouilly-le-Fort is as famous today as any other battlefield. Monsieur Pasteur, a new Apollo, was not afraid to deliver oracles, more certain of success than that child of poetry would be. In a program laid out in advance, everything that was to happen was announced with a confidence that simply looked like audacity, for here the oracle was pronounced by science itself, that is to say, it was the expression of a long series of experiments, of which the unvarying constancy of the results proved with absolute certainty the truth of the law discovered” (Bouley: 1883, p. 439). The strategy was conceived entirely in advance; Pasteur concocted it and had every detail figured out; it went according to plan, following a strict order of command from Pasteur to the sheep by way of his assistants and the caretakers. (Latour, The Pasteurization of France)

The cash value of this idea?

What we understand to be scientific is not actually how science is accomplished.

My position is that the same is true in nearly every sphere of human activity, and doubly so wherever creativity happens. This includes education, management, design, social research — basically every area of life where people are especially maniacal about method and most aggressively impose processes, standards, protocols and norms of every kind on one another.

Here’s how it goes:

  1. New ideas are conceived in intuitive leaps.
  2. The leaped-to ideas are tested in some way or another, artificially or in actuality.
  3. The leaps that pass the test are considered leaps forward to a goal.
  4. The leap forward is then traced backwards and rationalized. Reason creeps bit by bit from the goal to the origin, and attempts to account for the distance traversed in an unbroken chain of explanations.
  5. Then cause and effect are reversed. The story of the leap is confabulated. It is retold as a story of a steady and rational creeping forward toward a goal.
  6. The story makes perfect sense, and is accepted as the true account of the success.
  7. The creeping story is then formalized into a method, and imposed as a norm.
  8. Further attempts at progress are evaluated against their similarity to the proven method.
  9. Those who have strong belief in the method and who follow it faithfully produce respectable but unspectacular results. Those who ignore the method and flaunt that fact win little institutional support. Those who play the method game, but who leave themselves intuitive freedom win the most success.

*

I’ve had the unnerving experience of being forced to improvise when method failed, and succeeding — only to discover afterward that my success was attributed to method, and that nothing I could say would persuade those who saw method where there was none that my success was fortunate (and easily could have been otherwise), and that none of it had a thing to do with following method. Had my improvisation failed, there is no doubt in my mind it would have been blamed on my deviation from method.

*

I think most methods are sheer chickenshit (in the technical sense).

I think most successes are accomplished by what most people would call bullshit. “Eureka” moments. Apples hitting the head. Ideas in the shower.

The key is entirely in testing — to establish that the leap is a good one — and then in the rational creep backwards to account for why the idea makes sense — but NOT as the method for how it was accomplished!

*

People who refuse to leap out of methodological conscience are depriving themselves of the pleasure of creativity. They limit themselves to incremental innovation.

People who leap without testing the leap deprive their sponsors of reasonable assurance. There’s nothing wrong with jumping to conclusions. All creative conclusions — good and bad — are jumped to. The key is to test them before acting on them. Whether they turn out for the better or for the worse, any untested leap is reckless.

If you rationalize the successful leaps and figure out what made each leap work, you might discover principles that can fuel future leaps, and you can also integrate the accomplishment into the organization’s body of knowledge. There’s value in the creep backwards.

BUT: do not reverse cause and effect and require everyone to demonstrate how they will creep to success before they are permitted to move.

*

If you hate dumb puns stop reading now.


Experience planning

The primary task of experience planning is to provide designers with precision inspiration for making long and efficacious intuitive leaps. The secondary task is to provide criteria of efficacy for this particular project, by which intuitive leaps can be evaluated. This efficacy will always involve one or more user segments, a brand, and a relationship between user and brand in a complex use context.

 

Design thinking

Design thinking, though slightly more expansive than typical management thinking, still remains within the horizons of utilitarianism. To put it in Hannah Arendt’s language, the designer type still falls within the category homo faber.

*

There’s doing what’s always done. Execution.

There’s thinking about doing what’s always done. Management.

There’s rethinking what’s always done in order to find a better way of doing. Design thinking.

There’s rethinking our thinking: how we think about what we do…

There’s rethinking ought: why we do what we do…

*

“There are so many days that have not yet broken”

A short-lived fashion from the turn of the millennium?

At the end of her 2000 article “Ethnography in the Field of Design” Christina Wasson issued some warnings:

Although ethnography enjoys a great deal of popularity in the design field at present, I want to close with a cautionary eye to the future. Observing a similar phenomenon in CSCW, Hughes et al. (1994:437) noted: “Ethnography is currently fashionable in CSCW, but if it is to survive this kind of attention then it is important that the method find an effective voice rather than remaining content with ephemeral celebrity.”

. . .

Ten years from now, will ethnography be regarded as a short-lived fashion from the turn of the millennium? Its staying power depends on its ability to accurately purvey a unique kind of useful information to designers. And while the details of design firms’ ethnographic practices may not be public, there is a widespread sense among anthropologists in the design community that the quality of these firms’ research varies widely. The popularity of the approach has led a number of design firms to claim they offer “ethnography” even though none of their employees has a degree in anthropology or a related discipline. Sometimes researchers trained in cognitive psychology adopt observational methods; sometimes designers themselves do the observation. Such design firms are not necessarily averse to hiring anthropologists; they may have been unable to find ones with an adequate knowledge of the private sector.

As a consequence, the concept of ethnography has become a “pale shadow of itself” (Wasson n.d.). In its most emaciated form, the term is simply used to refer to a designer with a video camera. Even in somewhat richer versions, the term has become closely identified with the act of observing naturally occurring consumer behaviors. The need to analyze those behaviors and situate them in their cultural context is poorly understood, even though these activities are essential parts of developing a model of user experience that leads to targeted and far-reaching design conclusions. The anthropological apparatus that stands behind ethnography — the self-reflexivity of participant observation, the training in theory that enables fieldworkers to identify patterns — these are poorly understood in the design field. Indeed, the association between ethnography and anthropology is not widely known. The term “anthropology” is almost never heard. Even Chicago’s Institute of Design, whose faculty has a fairly sophisticated understanding of the topic, describes ethnographic observation merely as “a method borrowed from social science research” on its Web site (Institute of Design 1997).

The tendency for design firms to skimp on analysis is due in part to financial pressures. It can be hard to persuade clients to fund adequate labor time for researchers to develop well-grounded interpretations. Merely claiming to do ethnography costs little; actually conducting substantive anthropological research is much more expensive. Clients are also chronically in a hurry and press for immediate results. Nonetheless, my worry is that the design firms that skimp on analysis will tend to produce less interesting results. In the long run, this could lead to the perception that ethnography doesn’t have much to offer after all. If that should happen — and I certainly hope it does not — an opportunity for anthropologists to help construct the world around us will have been lost. Those of us who are active in the design field can address this issue in several ways. First of all, it is my hope that the mechanism of the market may actually be of use and that anthropologists can create positive publicity for themselves by doing good work on their projects. It seems possible, at least, that clients will realize, over time, that the findings of design firms engaging in richer forms of ethnography outshine the findings of other firms. E-Lab/Sapient’s continued growth is a hopeful sign. . . .

 

Design and the social system

Talcott Parsons, from The Social System:

Reduced to the simplest possible terms, then, a social system consists in a plurality of individual actors interacting with each other in a situation which has at least a physical or environmental aspect, actors who are motivated in terms of a tendency to the “optimization of gratification” and whose relation to their situations, including each other, is defined and mediated in terms of a system of culturally structured and shared symbols.

This is practically an inventory of the elements uncovered in design research.

  • Actors (which Parsons divides into ego, equivalent to “the user” in UX parlance, and alter, which UX treats as elements of social context.)
  • Social context (relationships between actors)
  • Physical context (environment)
  • Needs (a.k.a. “optimization of gratification”)
  • Behaviors (involvement with events and artifacts in physical context)
  • Interactions (involvement with actors in social context)
  • Mental models (the defining/mediating symbol structure)
  • Signs (I’m adding this one: affordances that link mental models with real-world events and artifacts)
  • Symbols (I’m adding this one, too: indicators of the value-significance of real-world events and artifacts)

 

Decision-making scenarios

Scenario 1 (thesis)

A: “Maybe this will work…”

B: “Before we commit the effort, can you explain how it will work, assuming it might, keeping in mind we have limited time and money?”

A: “I think so. Give me a day.”

B: “We don’t have a day to spare on something this speculative. Let’s come up with something a little more baked.”

… and [eventually, inevitably]

B: “So, what are the best practices?”

Scenario 2 (antithesis)

A: “I have a hunch this will work. Let’s go with it.”

B: “Can you explain how it will work?”

A: “Trust my professional judgment. My talent, training, experience, [role, title, awards, track record, accomplishments, etc.] distinguish my hunches.”

Scenario 3 (synthesis)

A: “I have a hunch this might work. Hang on.” … “Whoa. It did work. Look at that.”

B: “How in the world did that work?”

A: “I don’t know. Let’s try to figure out why.”

Shhhhhhh

Here’s what I learned from the Pragmatists (mostly via Richard J. Bernstein, who has probably had a deeper and more practical impact on how I think, work and live than any other author I’ve read): An awful lot of what we do is done under the guidance of tacit know-how.

After we complete an action we are sometimes able to go back and account for what we did, describing the why, how and what of it — and sometimes our descriptions are even accurate. But to assume — as we nearly always do — that this sort of self-account is in some way identical to what brought these actions about or even what guided them after they began is an intellectual habit that only occasionally leads us to understanding. Many such self-accounts are only better-informed explanations of observed behaviors of oneself, not reports on the actual intellectual process that produced the behaviors.

To explain this essential thoughtlessness in terms of “unconscious thoughts” that guide our behavior as conscious ones supposedly do in lucid action is to use a superstitious shim-concept to maintain this mental/physical cause-and-effect framework in the face of contrary evidence. I do believe in unconscious ideas that guide our thoughts and actions (in fact I’m attempting to expose one right here), but I do not think they take the form of undetected opinions or theories. Rather they take the form of intellectual habits. They’re moves we just make with our minds… tacitly. Often, we can find an “assumption” consequent to this habitual move and treat this assumption as causing it, but this is an example of the habit itself. It is not the assumption that there is a cause that makes us look for the cause; it is the habitual way of approaching such problems that makes us look for an undetected opinion at the root of our behaviors. We don’t know what else to do. It’s all we know how to do.

*

I’m not saying all or even most behavior is tacit, but I do believe much of it is, and particularly when we are having positive experiences. We generally enjoy behaving instinctually, intuitively and habitually.

*

Problems arise mainly when one instinct or intuition or habit interferes with the movements of another. It is at these times we must look into what we are doing and see what is unchangeable, what is variable and what our options are in reconciling the whole tacit mess. The intellectual habit of mental-cause-physical-effect thinking is an example of such a situation. Behind a zillion little hassles that theoretically aren’t so big — no bigger than a mosquito buzzing about your ears — is the assumption that we can just insert verbal interruptions into our stream of mental instructions that govern our daily doings without harming these doings. As I’ve said before, I do think some temperaments operate this way (for instance, temperaments common among administrators and project managers), but for other temperaments such assumptions are at best wrong, and at worst lead to practices that interfere with their effectiveness.

Software design and business processes guided by this habit of thought tend to be sufficient for verbal thinkers accustomed to issuing themselves instructions and executing them, but clunky, graceless and obtrusive to those who need to immerse themselves in activity.

*

It is possible that the popular “think-aloud” technique in design research is nothing more than a methodology founded on a leading question: “What were you thinking?” A better question would be: “Were you thinking?”

*

The upshot of all this: We need to learn to understand how the various forms of tacit know-how work, and how to research them, how to represent them in a way that does not instantly falsify them, and how to respond to them. And to add one more potentially controversial item to this list: how to distinguish consequential and valuable findings documentation from mere thud-fodder, which does nothing in the way of improving experiences, but only reinforces the psychological delusions of our times. If research can shed this inheritance of its academic legacy — that the proper output of research is necessarily a publication, rather than a direct adjustment of action — research can take a leaner, less obtrusively linear role in the design process.

The role of design researcher

In most places I’ve worked, design research is conducted primarily or exclusively by people playing a researcher role. The researcher’s job is to learn all about the users of a system a team is preparing to design, to document what they have learned and then to teach the design team what they need to know to design for these users. Often the information/experience architect(s) on the project will conduct the research then shift from the researcher role to a designer role. Often content and visual designers will (optionally) observe some of the sessions as well. But it is understood that in the end, it will be the research findings and the testimony of the researcher that will inform the design in its various dimensions (IA, visual, content, etc.).

It is time to question this view of research. When a design feels dead-on perfect and there’s something about it that is deeply satisfying or even moving, isn’t it normally the case that we find that rightness defiant of description? Don’t we end up saying, “You just have to see it for yourself”? And when we want to introduce two friends, we might try to convey to each who the other person is by telling stories, giving background facts or making analogies, but in the end we want our friends to meet and interact and know for themselves. Something about design and people — and I would argue, the best part — is lost in descriptions.

My view is that allowing researchers and research documentation to intercede between users and designers serves as a filter. Only that which lends itself to language (and, to the degree we try to be “objective”, to the kind of unexpressive and explicit language least suited to conveying the je ne sais quoi qualities that feed design rightness) can make it through this filter. In other words, research documentation, besides accounting for half the cost of research, not only provides little value; it subtracts value from the research.

What is needed is direct contact between designers and users, and this requires a shift in the role of the researcher and in research documentation. The researcher becomes much more of a facilitator. The researcher’s job is now 1) to determine who the users are; 2) to ensure that research participants are representative users, which means increased screening responsibilities; 3) to create situations where designers can learn about users directly from the users themselves, not only explicitly but also tacitly, not only observationally but interactively; and 4) to help the designers interpret what they have learned and apply it appropriately to their designs.

In this approach, design documentation does not go away, but it becomes less the primary output of research and more a progress report on the research. The primary tangible output of research should be design prototypes to test with users, to validate both the explicit and tacit understandings developed by the design team. But the real result of research is the understanding itself, which will enable the team to produce artifacts that are indescribably right, because this rightness has been conveyed directly to the team, not forced through the inadequate medium of description.

Vision management

To be assigned responsibility for something is almost synonymous with taking care of all the details of some work activity or work product. But rarely is anyone assigned responsibility for maintaining the vision of the whole in the execution of the parts.

A management truism applies: “If nobody is responsible for getting a job done, it won’t get done.”

*

If you suggest that vision needs to be managed apart from the details, many people will dismiss the thought on the grounds that once you’ve conceived an idea (in the form of a strategy or a concept) and developed a plan to execute it, the whole is contained in the details.

This is untrue.

It only seems that way because the majority of businesspeople are intellectually blind to wholeness. It isn’t that they can’t feel the difference between a whole and a fragmented mess — it’s just that they don’t know how to think about the problem and prefer to ignore it. We let wholes slide, because it’s hard to bust someone for neglecting a whole. It feels very… subjective. Parts are objective, so that’s where we focus.

But ignoring wholes is what makes so many companies competent but mediocre.

*

Philosophies have practical consequences, even when we are not aware we hold any philosophy at all. As Bob Dylan sang: “It may be the devil / or it may be the Lord / but you’re gonna have to serve somebody.” Actually, it is especially when we are unaware of a philosophy that its influence is strongest, determining our thoughts, perceptions and actions.

One philosophy that 95% of people in the modern world believe without knowing it, having unconsciously absorbed it through cultural osmosis and accepted it unquestioningly, is atomism.

According to atomism, wholes are made entirely out of parts. Once all the parts are accounted for, the whole is accounted for as well. In other words, wholes are reducible to parts.

Holism asserts that wholes have an existence independent of their particular constitution (of parts). Some holists say that wholes are what give meaning to parts, and that parts deprived of the context of a whole are inconceivable. Reductionistic holists go as far as to claim that all we have is wholes which have been artificially or arbitrarily divided up into parts.

I’m against reductionism on principle. I think wholes have one kind of being, and parts have another kind of being, and that human beings find life most satisfying when wholes and parts are made to converge.

And my philosophy has practical consequences: wholes need management as much as parts do. And when you do not explicitly manage a whole, the parts will overpower, degrade and smother it.

This happens to products, to initiatives, and to organizations.

We forget wholes, mostly because we don’t understand what they are and how they work.

*

Inevitably and automatically, if allowed to develop by their own logic, parts diverge from the whole.

Parts tend to work themselves out according to the most local conditions, governed more by expedience, habit and myopia than by the guidance of vision. This type of localized logic is made of very crude forces and very tangible considerations.

Envisaged wholes are more fragile, at least at the beginning, before they are firmly established. They must be protected from the roughness of localized logic, just as we fence off sprouts and saplings until they’ve established themselves and no longer need protection.

Envisaged wholes (especially unprecedented wholes) are vulnerable in three specific ways: they are essentially inchoate, elusive and ephemeral.

  1. Envisaged wholes are essentially inchoate. — We tend to think of vision as the envisioning of a whole, a detailed picturing of some possible reality. That is not how it happens. Vision is the sensing of a possibility. Some of the possibility is given in broad outline, and some of it is given in arbitrary detail, but most of it is simply latent in a situation, there but inaccessible to the imagination. As the situation develops under the guidance of the vision, the development is recognized as conforming to or deviating from the vision. But what is strange is that the vision itself is affected by the recognition. The vision comes to understand itself, reflected in the concrete attempts to actualize it, in a dialogical process of revelation. This is why visions are not directly translatable into plans. The plan must accommodate and support the development of the vision, or it is only a recipe for sterility.
  2. Envisaged wholes are elusive. — While virtually all people are capable of recognizing and categorizing objects, and virtually every professional is capable of grasping processes and plans, relatively few are able to understand or conceive concepts, even after they have been clarified and articulated. An envisaged whole gains concreteness, clarity and general accessibility in the course of its development, and as it does it comes into view of more and more people. In its early stages, though, the fact of its existence, much less its nature, will be far from obvious, and completely beyond the grasp of most people. Those with firsthand experience of vision know this process. Those without it either operate on faith and support the process, undermine it, or create conditions where vision doesn’t even happen. (In many organizations, the wholes are determined solely by leadership; but leadership is earned through success in managing details. The result: the only people able to earn the right to set vision are precisely the ones with no awareness of vision at all. They try to provide their organizations with “vision”, but all they know how to come up with are ambitions, metrics, and plans to accomplish what’s been done before.)
  3. Envisaged wholes are ephemeral. — Because of how they are known, envisaged wholes are very easily corrupted and forgotten. They are revealed in dialogue with concrete actualization. The vision tries to respond to the actualization. If the actualization is not responsive to the vision and moves far enough away from it, the vision will not only lose its hold on the process; it will get caught up in the localized logic of the development and lose itself altogether. This is what is meant by getting “too close to the situation”. The vision holder must maintain the right balance of contact with the situation — close enough to guide it, but far enough from it to see when the development has begun to go off-track. When nobody is permitted that distance, and everyone is required to roll up their sleeves and get mired in the details, the vision’s chances of survival are nil. The problem is not with the vision, nor with the visionary, but with the absence of the conditions necessary for maintaining vision.

*

The captain of a ship, after charting the ship’s course and pointing it in the right direction, went below deck and grabbed an oar.

Gadamer on dialogue

Reposting from my professional blog, Synetic Brand

This passage gets very close to the crux of synetic brand:

When we try to examine the hermeneutical phenomenon through the model of conversation between two persons, the chief thing that these apparently so different situations — understanding a text [NOTE: or a design] and reaching an understanding in a conversation — have in common is that both are concerned with a subject matter that is placed before them. Just as each interlocutor is trying to reach agreement on some subject with his partner, so also the interpreter [ / user] is trying to understand what the text [ / design] is saying. This understanding of the subject matter must take the form of language. It is not that the understanding is subsequently put into words; rather, the way understanding occurs — whether in the case of a text or a dialogue with another person who raises an issue with us — is the coming-into-language of the thing itself. Thus we will first consider the structure of dialogue proper, in order to specify the character of that other form of dialogue that is the understanding of texts. Whereas up to now we have framed the constitutive significance of the question for the hermeneutical phenomenon in terms of conversation, we must now demonstrate the linguisticality of dialogue, which is the basis of the question, as an element of hermeneutics.

Our first point is that the language in which something comes to speak is not a possession at the disposal of one or the other of the interlocutors. Every conversation presupposes a common language, or better, creates a common language. Something is placed in the center, as the Greeks say, which the partners in dialogue both share, and concerning which they can exchange ideas with one another. Hence reaching an understanding on the subject matter of a conversation necessarily means that a common language must first be worked out in the conversation. This is not an external matter of simply adjusting our tools; nor is it even right to say that the partners adapt themselves to one another but, rather, in a successful conversation they both come under the influence of the truth of the object and are thus bound to one another in a new community. To reach an understanding [synesis] in a dialogue is not merely a matter of putting oneself forward and successfully asserting one’s own point of view, but being transformed into a communion in which we do not remain what we were.

*

Synetic branding is neither organization-centric, nor is it user-centric.

Synetic branding is relationship-centric, which means all parties, through dialogue, come to a mutually transformative shared understanding.

Synetic branding is the method of generating dialogue between an organization and those who participate in the organization (stakeholders). “To reach [synesis] in a dialogue is not merely a matter of putting oneself forward and successfully asserting one’s own point of view, but being transformed into a communion in which we do not remain what we were.”

Synetic branding sees brand neither as the possession of an organization, nor as the image of the organization in the minds of customers, etc. Neither is exactly wrong, but neither is nearly right enough.

Synetic branding is participatory, which means that brand is a whole that exceeds each of its parts, which both influences and is influenced by its parts. A participant in a synetic brand, whether he participates as an executive, an employee, a shareholder, a partner or a customer, sees by way of the brand’s vision, but to some degree changes the brand’s vision through his participation. The object of this vision is the field with which an organization concerns itself and its offerings within that field, but the vision extends far beyond the object, and influences aesthetics (thus brand identity systems) and how related offerings are perceived (thus brand equity).

Synetic branding means taking responsibility for cultivating mutual understanding among all who participate, and recognizing that the essence of a brand is precisely the mutuality of the understanding. Everything, including all the things people commonly mistake for the brand itself, such as the image of the company in the minds of whoever, follows from this. Failure to recognize this fact is what has made so many companies into decorated commodity clones. They see everything the same way, manage themselves the same way, follow tweaked and relabeled versions of identical processes, make the same kinds of trade-offs and basically aim for the same ideal as everyone else.

Synetic brand uses large-scale dialogue between an organization’s participants to discover new unifying perspectives on an organization’s offerings that otherwise would remain invisible to everyone.

These perspectives open new questions and new possibilities in the organization’s field of concern. This is the foundation of meaningful innovation and sustainable competitive advantage.

The hermeneutical-rhetorical circle

As a user experience practitioner, it is interesting to me that the hermeneutical circle (the movement between whole and part that characterizes the process of understanding) originated in ancient rhetoric. The privilege of my profession is that we get to stand on both sides of meaning, as understanders (in the mode of researchers) and as creators of things to be understood (in the mode of designers), and best of all, we get to iteratively connect the two modes. (I’m picturing the infinity symbol: we research understandings, we design things to be understood, we research understandings of our designs, we redesign… etc. )

It seems everything we do in user experience wants to be iterative. (* See note.) I don’t think this is an accident. I think it is because we are in the understanding business, and iterativity is the form of understanding.

*

An idea to try on: user experience strategy/design as a species of rhetoric. Pan-sensory, interactive rhetoric. (I’ve been enjoying the perversity of using words revaluated by Gadamer to express benevolent thoughts as villainously as possible. This one falls short of the last example of the pattern, characterizing brand as “prejudice design”. )

*

In his wonderful book Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis, Richard J. Bernstein made a very interesting criticism of Gadamer: that Gadamer did a good job of outlining a theory of hermeneutics, but in regard to practice he left us hanging.

My view is that experience design can be a practical extension of Gadamer’s thought, and in fact is following a semi-conscious trajectory toward this point. It’s always exciting to find new ways to integrate my philosophical mornings and my professional days.

----

(* Note: Conversely, much of the friction we experience in the world of business seems to center around the flattening of circularities. Business likes predictability, so it likes nice straight lines. Non-linearity is innately unpredictable.)

Ways To Diagram Three Entities

As promised, my visual tantrum on the subject of imprecise use of visual depictions of triadic relationships:

[Image: triadic-relationships]

Top, l to r: three dimensions; three interconnected parts of a system; three ratios
Middle, l to r: overlap of three domains; three steps; two means supporting an end
Bottom, l to r: three elements in a list; three hierarchical tiers; three nested domains