Category Archives: Design

Slurpy, mergy, touchy-feely notions of interpersonal being

Wow, this post really sprawled out. It hits a lot of my enduring interests. I’m not sure it is suitable for reading. It might just be a personal journal entry written to myself. Feel free to eavesdrop if you wish, but I cannot promise it will make sense or yield any value.

*

I listened to a fascinating Radio Open Source podcast on Hannah Arendt’s conception of evil, which ended with a wonderful discussion on empathy.

Jerome Kohn: Empathy is a fancy word or fancy theory that she argued passionately against. First of all she thought it was an impossible notion in the sense that it really means feeling what someone else feels. Sympathy, fellow feeling, is another thing. But empathy is the claim that you can actually feel what someone else is feeling. And for that Arendt found no evidence whatsoever. One could say it’s even the opposite of her notion of thinking from another person’s point of view. What you have to be able to do is to see a given issue from different points of view, to make it real. And then through those different points of view, with your own eyes, you don’t feel what the other person is feeling, you see what he is seeing through your own eyes, and then you can make a judgement. The more people you can take into consideration in this enlarged mentality, that actually is the foundation of reality for Arendt, the more valid your judgement will be.

Elisabeth Young-Bruehl: Jerry’s exactly right. Hannah Arendt was always opposed to these slurpy, mergy, touchy-feely notions about what binds people to each other. And she felt very keenly that what really binds one person to another is a commitment to try to see the world from that person’s point of view with your own eyes. Not to subscribe to their point of view or to merge with their point of view, but to be able to walk around and see what the world looks like from where they’re standing. But looking at it with your own eyes, so that you can then, as it were, discuss it with them. Not merge with them in some way, but discuss it with them. She was all about discussion. Not empathy in that sentimental way.

Christopher Lydon (host): And yet, well, there are distinctions without huge differences in some way. To put oneself in another’s mind is the beginning of something important.

EYB: To think that you can put yourself in another’s mind is the beginning of a terrible arrogance which has tremendous consequences. It’s a difference with great consequences. People who think that they can know what another person thinks or feel what another person feels are narcissistic.

CL: Well, ok, I don’t want to make a philosophical or an endless argument about it. Isn’t it the incapacity and the lack of interest in that perspective precisely what she found at the core of Eichmann’s banality and Eichmann’s evil, really?

JK: Well, no, it was his thoughtlessness, his inability to think from any other point of view but his own.

EYB: Exactly. And these are very important distinctions.

This exchange is especially interesting to me for three reasons.

First: as a Human Centered Design researcher/strategist/designer, I am constantly telling people that I am in the “empathy business.” However, I have long been uncomfortable with the characterization of what I do as “empathy” (em- ‘in’ + pathos ‘feeling’). To characterize understanding another person subjectively as primarily a matter of experiencing how they feel misses the mark in a very modernistic way. While feelings are important to what I do, they are not the primary focus. I would prefer to characterize my work as concrete hermeneutics, but words like that do not fly in the flatlands of business, where thinking lags a minimum of three philosophical generations. So, I’ve adopted “empathy” and accepted the inevitable misconceptions that go with it, because that’s what it takes to be understood even approximately by most people.

It is hardly surprising that I see things similarly to Young-Bruehl and Kohn, because I belong to their tradition. Heidegger taught Arendt and Gadamer, who both taught my favorite thinker, Richard J. Bernstein. A Clifford Geertz quote from Bernstein’s Beyond Objectivism and Relativism has stayed with me as an anchor for my understanding of what a good human centered designer does.

Second, I think that when we see human understanding this way, as a primarily emotional matter, we tend to treat emotionally-oriented people who are very sensitive and sentimentally responsive to people around them as having some kind of monopoly on human understanding. In my experience, there are multiple stages of coming to an understanding of another person, and a talent for sensing and responding does not always correspond with a talent for intuiting other people’s worldviews, nor with an ability to think, speak and create from or into another worldview. It takes a fairly vast range of talents to act pluralistically with any degree of effectiveness.

I think a lot of the political problems we are experiencing today result from shoddy and retrogressive philosophical conceptions of alterity (“otherness”), which still see understanding of other people as very literally empathic, as a matter of emotions. According to this view, to know what is going on with another person, we must ourselves have had the experiences and emotions that other person has had. In an effort to understand and to demonstrate our understanding, we must induce emotions similar to theirs. Two consequences follow: 1) The one who understands must try to produce the right emotions, and this production of emotion is the demonstration of understanding, which leads to some fairly repulsive public displays of political sentimentality. 2) The one who is understood is put in the position of judging the authenticity of those emotional displays, which is more or less being given the role of arbitrary judge. And if the feelings of the understood are viewed as the central datum or a special kind of insight (being “woke”) into a political situation (typically gauging the degree of prejudicial unfairness, its impact on those victimized by that prejudice and what is required to rectify that unfairness), this amounts to extreme epistemological privilege. Only the victim of prejudice has access to the reality of the situation, and those who are not victims are incapable of perceiving how they participate in the perpetration, so, to use the charming formulation of today’s hyper-just youngsters, it is their job to STFU and to accept the truth dictated to them.

It never occurs to anyone within the power hierarchy of wokeness that there is anything superior to all this illiberal mess to awaken to. There are philosophical worldviews that are more thorough, more comprehensive and more expansive than the dwarfish ideology of the popular left, but for all the reasons they are eager to point out to anyone who defies them, its adherents are entirely incapable of seeing beyond the motivated reasoning of their own class interests. (This does not mean I think the popular right is any better. It is not. We are in a Weimar-esque situation of resentful evil left idiocy vs. paranoid evil right idiocy, with the reasonable voices shoved to the margins.)

Third, I’ve found myself misunderstood by many close friends on how I view relationships, and Elisabeth Young-Bruehl did a great job of capturing how people think I see them: a “slurpy, mergy, touchy-feely notion about what binds people to each other.” I think the misunderstanding is rooted in this same conception of human understanding as a primarily emotional phenomenon. When my own ideal of marriage or of friendship is strained through the filter of today’s left worldview, it looks like a mystical merging of souls that arouses (and should arouse!) suspicions of domination and anxieties around loss of self. But any attempt I make to explain how what I have in mind differs from that looks like, well, an attempt at philosophical domination and a threat to the selfhood of whoever is foolish enough to take it seriously. Who am I to tell someone something they don’t already know? And anyway, it smells very cultish to listen to someone claiming to know better than the public what is true and right. So, by the circular logic of the popular worldview of the left, it is superior to form one’s own individual opinion (never mind that this opinion on opinions is a product of an unexamined and manifestly broken worldview).

Obviously, this means extreme alienation for anyone who adopts a sharply differing worldview that affirms the importance of collaboratively developing shared understandings with those around them. In an environment of extreme ideological conformity (with brutal social consequences for infractions) that exalts above all the importance of intellectual independence — but strictly within its own confined philosophical horizon — a philosophy of interdependence, of collaborative development of the very concepts one uses to form one’s opinions, and of togetherness in a shared worldview is marked for expulsion.

Anyway, what I really have in mind when I imagine ideal personal connections is, once again, that ideal sketched out by Bernstein, captured so well by Geertz, which I will now go ahead and re-re-quote.

…Accounts of other peoples’ subjectivities can be built up without recourse to pretensions to more-than-normal capacities for ego effacement and fellow feeling. Normal capacities in these respects are, of course, essential, as is their cultivation, if we expect people to tolerate our intrusions into their lives at all and accept us as persons worth talking to. I am certainly not arguing for insensitivity here, and hope I have not demonstrated it. But whatever accurate or half-accurate sense one gets of what one’s informants are, as the phrase goes, really like does not come from the experience of that acceptance as such, which is part of one’s own biography, not of theirs. It comes from the ability to construe their modes of expression, what I would call their symbol systems, which such an acceptance allows one to work toward developing. Understanding the form and pressure of, to use the dangerous word one more time, natives’ inner lives is more like grasping a proverb, catching an allusion, seeing a joke — or, as I have suggested, reading a poem — than it is like achieving communion.

And now I will quote myself:

“Understanding the form and pressure of, to use the dangerous word one more time, natives’ inner lives is more like grasping a proverb, catching an allusion, seeing a joke — or, as I have suggested, reading a poem…” or knowing how to design for them.

A design that makes sense, that is easy to interact with, and that is a valuable and welcome addition to a person’s life is proof that this person is understood, that the designer cared enough to develop an understanding and to apply that understanding to that person’s benefit.

A good design shares the essential qualities of a good gift.

The kind of merging I have in mind is just sharing a worldview and using it together to live together, what Husserl (Heidegger’s teacher) called a “lifeworld”. I’ve called the process “enworldment”.

The merging aspect of this ideal enters the stage through my belief (shared, I believe by Process Theology) that souls are universe-sized. The pragmatic consequence of what one means when one says “everything” is the scope and density of one’s soul. To enworld* with another is to bring two “everythings” into harmonious relationship, and to begin to function more like a culture than two isolated individuals within this isolating milieu so many of us, without ever choosing, without even knowing we had a choice, inhabit as prisoners of our own destitute freedom.

(Note: that “enworld” link above is a pretty old post, and I’m not sure right now how much of it I still agree with. It makes me want to engage my old self in dialogue and try to discover how much common ground we have. How enworlded am I with my 9-years-ago self?)

Ancestors and siblings of process thought

While I’m scanning passages from C. Robert Mesle’s Process-Relational Philosophy, here are two more that inspired me.

The first passage appeals to my designer consciousness:

Descartes was wrong in his basic dualism. The world is not composed of substances or of two kinds of substances. There is, however, what David Ray Griffin calls an “organizational duality.” Descartes was correct that rocks and chairs and other large physical objects do not have minds, while humans do. In Whiteheadian terms, rocks are simply not organized to produce any level of experience above that of the molecules that form them. In living organisms, however, there can be varying degrees to which the organism is structured to give rise to a single series of feelings that can function to direct the organism as a whole. We can see fairly clearly that at least higher animals like chimps and dogs have a psyche (mind or soul) that is in many ways like our own. This psyche draws experience from the whole body (with varying degrees of directness and clarity), often crossing a threshold into some degree of consciousness, and is able in turn to use that awareness to direct the organism toward actions that help it to survive and achieve some enjoyment of life. The self, or soul, then is not something separate from the body. It arises out of the life of the body, especially the brain.

The mind/soul/psyche is the flow of the body’s experience. Yet your body produces a unique mind that is also able to have experiences reaching beyond those derived directly from the body. We can think about philosophy, love, mathematics, or death in abstract conceptual ways that are not merely physical perceptions. Without the body, there would be no such flow of experience, but with a properly organized body, there can be a flow of experience that moves beyond purely bodily sensation. Furthermore, your mind can clearly interact with your body so that you can move, play, eat, hug, and work. There is a kind of dualism here in that the mind is not only the body but it is, in Griffin’s phrase, a hierarchical dualism rather than a metaphysical one. There are not two kinds of substances — minds and bodies. There is one kind of reality — experience. But experience has both its physical and mental aspects.

To my ears, this is a beautiful dovetail joint waiting to be fitted to extended cognition. “Rocks are simply not organized to produce any level of experience above that of the molecules that form them” but if a human organizes those rocks in particular ways, for instance drilling and shaping them into abacus beads, or melting them down to manufacture silicon chips, those rocks can be channeled into extended cognitive systems which in a very real way become extensions of our individual and collective minds. It is ironic to me that even at this exact instant, in typing out this sentence, a thought is forming before my eyes with the help of rocks reorganized as silicon chips which are participating in the “having” of this very thought. And if anyone is reading this and understanding it, my thought, multi-encoded, transmitted, decoded and interpreted by your own intelligence — rocks have helped organize this event of understanding! Humans help organize more and more of the “inanimate” world into participants in experience.

And now we are wading out into the territory developed by Actor-Network Theory, which asks, expecting intricately branching detailed answers: How do humans and non-humans assemble themselves into societies? I think the commonality within these harmoniously similar thought programs is their common rootedness in Pragmatism. It is no accident that Richard J. Bernstein saw pragmatism as a constructive way out of  the unbridled skeptical deconstruction of post-modernism, and that Whitehead, who acknowledged a debt to Pragmatism, is said to offer a constructive postmodernism.

The second passage appeals to my newly Jewish hermeneutic consciousness. This is a quote by Whitehead:

The true method of discovery is like the flight of an aeroplane. It starts from the ground of particular observation; it makes a flight in the thin air of imaginative generalization; and it again lands for renewed observation rendered acute by rational interpretation.

This, of course, is a description of the hermeneutic circle, the concept that we understand parts in terms of the concepts by which we understand them, but that our concepts are often modified (or replaced) in the effort to subsume recalcitrant parts. We tack between focusing on the details and (to the degree we are reflective) revisiting how we are conceptualizing those details. These are the two altitudes Whitehead mentions: an on-the-ground investigation of detail and a sky-view survey of how all those details fit together.

This is an ancient analogy. The Egyptians made the ibis, an animal with a head like a snake (the lowest animal) and the body of a bird (the highest animal), the animal of Thoth, their god of writing, the Egyptian analogue to Hermes. Nietzsche also used this image in Thus Spoke Zarathustra, and that is where I first encountered it.

An eagle soared through the sky in wide circles, and on him there hung a serpent, not like prey but like a friend: for she kept herself wound around his neck. “These are my animals,” said Zarathustra and was happy in his heart. “The proudest animal under the sun and the wisest animal under the sun — they have gone out on a search. They want to determine whether Zarathustra is still alive. Verily, do I still live? I found life more dangerous among men than among animals; on dangerous paths walks Zarathustra. May my animals lead me!” When Zarathustra had said this he recalled the words of the saint in the forest, sighed, and spoke thus to his heart: “That I might be wiser! That I might be wise through and through like my serpent! But there I ask the impossible: so I ask my pride that it always go along with my wisdom. And when my wisdom leaves me one day — alas, it loves to fly away — let my pride then fly with my folly.”

And I have seen the Star of David as an image of the synthesis of atomistic ground-up and holistic sky-down understandings. And this is one reason I chose Nachshon (“snakebird”) as my Hebrew name when I converted to Judaism.

*

(Eventually, I’ll have to try to connect process thought with my extremely simplistic and possibly distorted understanding of chaos theory. Eventually.)

Rude tools

In my last post I promised that my next post would be “a theoretical tantrum on the ethics around that miserable love triangle between developer, tool and user,” and that I thought the “ownership” of software is an unrecognized moral crisis of our times.

This is that post.

My belief in the importance of resolving the issue of tool ownership hinges on a theory which I experience as true: Extended Cognition. According to Wikipedia, “Extended cognition is the view that mental processes and mind extend beyond the body to include aspects of the environment in which an organism is embedded and the organism’s interaction with that environment. Cognition goes beyond the manipulation of symbols to include the emergence of order and structure evolving from active engagement with the world.” The example offered to me by my friend Zach, who introduced this concept to me, was that of doing addition with your fingers. Viewed through the lens of Extended Cognition, the movement of the hand is part of the thinking that produces the result.

Where I experience this as most true is when I use tools that I’ve learned to use skillfully. That is, I’ve mastered them so fully that they more or less disappear as I use them. If we know how to use a pen, we no more need to think about using that pen while we are using it than we need to think about our hand. It becomes part of us, and it allows us to focus our attention on the thing we are doing, and to become absorbed in our activity.

This is true also of software tools — or at least well-designed ones. If a tool is well-designed, I am able to just concentrate on the content of my activity, without the need to split my attention thinking about use of the tool. Often, I can’t even explain how I use a tool. My hands know what to do, and my verbal mind isn’t in the loop. What I know can only be demonstrated.

How many times have you told someone you can’t really explain how to do something on their computer or phone, but if you can just get your hands on the device you can show them? Sometimes it’s not enough to see the screen. Only actual doing of the interaction releases the know-how.

This kind of knowing that seems to exist just in the body is known as tacit knowledge. I like to call the part of UI design that harnesses this tacit knowledge “the tacit layer.” Back when designers still liked to talk about “intuitive design” this awareness was much more prevalent. But I think this way of thinking about design is in precipitous decline. Now, intuitive means little more than figure-it-outability.

Tools used largely in a tacit mode to develop ideas become extensions of the user’s own being. To change a tool so that it stops functioning this way changes a person’s being. It literally prevents a person from thinking — it robs them of a piece of their own mind.

When we look at software in that light, doesn’t it seem that the prevailing norm, in which a company owns the software and users pay a licensing fee for the right to use it, offers far too little protection to the user? Shouldn’t users have more control over what is done to them?

I’m not suggesting a change in IP law or anything like that. I do think the software industry needs some different licensing arrangements, though. I’d like to see something like a user-developer covenant: “If you, the user, invite this tool into your life, adopt it and invest the effort to master it, you can trust us, the developer, to safeguard your investment by minimizing design changes that break the tacit layer, create distractions and force unwanted relearning. We understand that your concern is with what you are doing, not with the tool we offer you.”

 

 

Taking away my tools

Over the last decade and a half I’ve relied on four tools for making my thoughts.

Of these four, two have broken in the last couple of years: Adobe Illustrator and WordPress. These two tools have undergone frequent deep UI changes, which have obsoleted my skills. When I try to use them now, I’m too busy thinking about how to use the UIs to concentrate on the ideas I’m attempting to develop.

Yesterday, I found out my hosting service is upgrading their server and it is going to bring down my Wiki, my core tool for organizing what I learn in my reading. I chose to host my own Wiki so I could control this key tool and not be subject to the whims of developers, but now they’ve caught up with me and ruined this tool, too. Now I only have one thinking tool left intact, and that is my own philosophy.

It’s funny; this feeling of vulnerability is exactly what led me to philosophy in the first place. When I was a kid living at home, my father was fond of informing me that I owned nothing — that he could take any of my possessions away any time he wanted to. My parents were always threatening my sister with taking away her horse if she didn’t toe the line. I saw clearly that I could not tolerate that kind of exposure. I figured the only thing I had that could not be taken away was my ideas, so that was what I made my treasure.

Stupidly, I have relied on tools under other people’s control to help me shape and craft my ideas, and when those people decide to exercise their whims to disrupt my ability to use these tools, my most precious capabilities — the things that help me be who I am — are jeopardized.

I’m halfway considering throwing out all my software tools and re-training myself to use just pen and paper to work through my ideas. While I’m at it maybe I’ll get rid of all my books and kick my awful caffeine habit. I can’t trust other people to even understand what I need, much less to actually respect the legitimacy of those needs, much less to act in a way that doesn’t harm me. And supporting those needs is entirely out of the question. What I need seems unreasonable to other people. Nevertheless, I need what I need, and that means I must reduce my dependency as well as my exposure. I think this is the root reason so many thinkers are ascetic.

My next post is going to be a theoretical tantrum on the ethics around that miserable love triangle between developer, tool and user. I am convinced that the “ownership” of software is an unrecognized moral crisis of our times.

Autumn 2011, when the canary died

The reason I have been so upset about the state of design is that in 2011 — autumn of 2011, to be exact — all the liberal progress I’d been seeing in my field suddenly reversed. Three things happened:

  1. Steve Jobs died (October 5, 2011), and, even worse, Isaacson’s biography of him was published (October 24, 2011).
  2. Lean Startup was published (September 13, 2011).
  3. Front-end frameworks Bootstrap (August 19, 2011) and Foundation (September 2011) hit the development world, enabling developers to make visually passable UIs without assistance from UI designers.

All three of these factors marginalized design in crucial ways that have gradually brought the digital water we users swim in to a frog-killing rolling boil.

This helps explain why our digital lives are in pleasureless turmoil. Remember back when we would count the hours to the next Apple product release, and get excited when we saw that an upgrade was available to the software tools of choice? Now it all makes us uneasy, because it means yet more disruption where what we really want is stability. New features are more likely to make things harder for us than improve our lives.

This is not an inevitable effect of the world getting more complex. It is a direct effect of design’s marginalization. Engineering-minded people now run the show — folks obsessed with the Thing they make, as opposed to the experiences real-life people have interacting with things in real-life situations. The latter is what designers are all about, and it is why we use the language of “experience” when speaking about our practices, all of which are focused on improving experiences people have. For designers the Thing is only the means to an end, which is people’s experience of it.

But now the language of design has been appropriated and emptied. Engineers call their Things “Experiences”. When they hack together a front-end using a front-end framework, they call this “designing the User Experience”.

People who lack understanding of the radical paradigm shift (meant literally, in the Kuhnian sense) at the root of HCD — a root that could not be more at odds with the objectivist Industrial Age paradigm — are blind to the relapse to which we’ve succumbed. They never made the shift anyway, and these new retro-practices make more sense to the engineering mindset.

And sadly, this relapse has spread into politics, hitting both left and right extremes of the political spectrum, each feeding on conflict with the other, and is rapidly closing in on the center. We have the brainless sophistication of children trained by disillusioned Marxists to perceive the world in the terms of racist, sexist and other identitarian sociologies (ironically called “hermeneutics” of this and that) facing off against aggressively anti-intellectual thugs. Liberalism is now widely disparaged and declared vapid, naive and obsolete by the very people who are blind to what Liberalism is, how it is done and why it is so important.

Hopefully, soon everyone will have known all these things I’m saying all along, and I will retroactively have not been the only one freaking out about the loss of liberal democracy, the loss of design and seeing very vividly the connection between the two. Until then, stuck in this present, I am isolated in my own obsessive interests and worries.

 

Pamphleteerism

Over the last year I’ve been equipping myself to make pamphlets. I’ve purchased several reams of beautiful French Paper in cover and heavy text weights, waxed linen bookbinder thread, needles, awls and a bone folder. I’ve figured out how to use Adobe InDesign with my printer (which prints 2-sided) to create booklets in signature format ready for binding. I’ve practiced and refined my booklet sewing technique by constructing and revising Shabbat prayer booklets.
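
For readers unfamiliar with “signature format”: a booklet is printed as duplex sheets whose pages are reordered so that, once the stack is folded and sewn, the pages read in sequence. InDesign’s Print Booklet feature does this imposition automatically; the little Python sketch below is purely illustrative (not part of my actual workflow) and assumes a single saddle-stitched signature printed two pages per side.

```python
def booklet_imposition(n_pages):
    """Compute the page order for printing one saddle-stitched signature.

    Returns a list of (left, right) page pairs, one pair per printed side,
    alternating front/back of each duplex sheet. The page count is padded
    up to a multiple of 4 with None (blank pages).
    """
    padded = n_pages + (-n_pages) % 4
    pages = list(range(1, n_pages + 1)) + [None] * (padded - n_pages)
    sides = []
    lo, hi = 0, padded - 1
    while lo < hi:
        sides.append((pages[hi], pages[lo]))          # front of sheet: last | first
        sides.append((pages[lo + 1], pages[hi - 1]))  # back of sheet: second | second-to-last
        lo += 2
        hi -= 2
    return sides


# An 8-page pamphlet prints as two duplex sheets:
# [(8, 1), (2, 7), (6, 3), (4, 5)]
print(booklet_imposition(8))
```

Fold the printed stack in half and the page numbers fall into reading order, which is what makes the sewn binding possible.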

I think I am going to force myself to work differently in the coming months. I think I’m going to steal from the product development industry (my greatest, most beloved, most intensely detested frenemy, who has nourished me with so many unavoidable crises, who has dragged me through so much dark despair into so many enlightenments). What I intend to steal comes directly from the single most painful trend of the last decade. I intend to force myself to work in “sprints”.

Working in pamphlet sprints, I will write with the intention of always creating a printed pamphlet by the end of the session. I am also going to get rid of this notion of getting everything I’ve learned into a single book. I’m going to get it all out in microcosmic bursts of various genres.

Here are the pamphlets I have planned so far:

  • Geometric Parables. This is a book of diagrams I’ve been drawing and redrawing, interpreting and reinterpreting over the last 15 years. These images guide my best thoughts. When I think, often I am just growing the consequences of particular problems onto these frameworks, as if they were trellises. This will be an obscure little book, consisting of diagrams and meditations in compact verse. Its purpose is not explanation, and it is unlikely to make sense by itself. Its purpose is prayer: recollecting what memory cannot grasp. I will be flirting with idolatry making this pamphlet the way I want it made.
  • The Ten-Thousand Everythings. This could end up being a book that explains Geometric Parables. I’ve accumulated a large number of aphoristic scraps that fit together into a cohesive philosophical perspective. I want to attempt to demonstrate my way of thinking by exploring some key domains, especially ethics, ontology and religion. This will be my idea dump. I’m going to try to force myself to be more relaxed and prosaic writing and rewriting it.
  • Syllabus Listicalis. This idea came to life yesterday, when I just started listing out the most consequential points where I disagree with conventional wisdom. Few people understand the extent to which my thinking has diverged from the norms of everyday thinking, especially at the most crucial life-shaping points. This has left me in a place where at best I agree with others on details, but not for the reasons people tend to assume, which cannot be explained within contemporary customs of polite conversation. I doubt I’ll try to explain anything in Syllabus Listicalis. It will be a bare list of instructive disagreements, maybe a negative image of The Ten-Thousand Everythings.
  • Interface. This will be a more or less explicit book about the myriad lessons I’ve learned oscillating between human-centered design and philosophical reflection, and how these insights have constellated around what I think is an important new way of thinking about reality. I believe many designers have intuited the importance of this new perspective as they have developed and applied its methods to an expanding sphere of problems. But so far, I have seen no attempt to articulate the perspective itself and account for its importance.

In addition, I may start typesetting my better blog posts. Maybe I’ll make a series called Anomalogues. But first, I’m going to make some editions of the pamphlets I’ve listed above.

“The many faces of research”

I just realized I never re-posted my October 2010 article summarizing James Spradley’s incredibly cool way of defining different types of research — by the role of the participant vis-à-vis the researcher.

Here’s the text:

Anyone who has ever commissioned, designed, or conducted research will find these common but thorny questions all too familiar:

  • “What is this research going to give us that we can’t get from analytics and iterative design?”
  • “Don’t you need to ask all your interviewees the same set of questions so you can compare their answers?”
  • “Can you quantify these findings?”
  • And with qualitative research, the dreaded: “That’s an awfully small sample. Are these findings statistically significant?”

These questions can be difficult to answer clearly, succinctly and definitively. Wouldn’t it be helpful to have some kind of framework or model to help people understand how the various kinds of research (especially qualitative and quantitative) fit together to provide an organization with what it needs to effectively engage and serve its customers?

James Spradley in The Ethnographic Interview provides such a framework. His approach is the identification of four different roles a research participant can play, each with a different relationship between researcher and participant and each producing a different kind of finding (summarized in a small sketch after this list):

  • Informant – In ethnography, a participant is related to as an informant. Informants are “engaged by the ethnographer to speak in their own language or dialect”, providing “a model for the ethnographer to imitate” so that “the ethnographer can learn to use the native language in the way informants do.” The informant is related to as a teacher. What is learned is how the participant conceptualizes and verbalizes his experience. Informants give the researcher not answers to fixed predetermined questions, but the questions themselves. Informants help define what the researcher needs to learn in subsequent research. (Examples of research techniques with informants: unstructured and semi-structured interviews, diary studies, open card sorting, collaborative design exercises.)
  • Subject – Subjects are participants in social science research, upon whom hypotheses are tested. “Investigators are not primarily interested in discovering the cultural knowledge of the subjects; they seek to confirm or disconfirm a specific hypothesis by studying the subject’s responses. Work with subjects begins with preconceived ideas; work with informants begins with a naive ignorance. Subjects do not define what it is important for the investigator to find out; informants do.” (Examples of research techniques with subjects: usability testing, split testing, concept testing.)
  • Respondent – A respondent is any person who responds to a survey questionnaire or to queries presented by an investigator. “Survey research with respondents almost always employs the language of the social scientist. The questions arise out of the social scientist’s culture. Ethnographic research, on the other hand, depends more fully on the language of the informant. The questions arise out of the informant’s culture.” (Examples of research techniques with respondents: surveys, questionnaires, structured interviews, closed card sorting.)
  • Actor – “An actor is someone who becomes the object of observation in a natural setting.” As with subjects and respondents, when participants are related to as actors, the terms of the description of the actor’s behaviors are those of the researcher, not of the participant. It should be noted, however, that in ethnographic research (and also in contextual inquiry), participants are interviewed as they are observed, which means the participant is still understood primarily as an informant. The actor-informant teaches the researcher through showing and explaining in his own terms the significance of his actions, which allows the researcher to give (to use Clifford Geertz’s term) “thickness” to his descriptions of what he observes. (Examples of research techniques with actors: site analytics, business intelligence analysis, silent observation.)
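
To make the distinctions above easier to hold in mind, here is a toy summary in code. It is my own illustration, not Spradley’s; the role names and example methods simply restate the list above, and the helper function is a hypothetical convenience for tagging a planned research activity with the participant role it implies.

```python
# A toy lookup table summarizing Spradley's four participant roles.
# "whose_terms" records whose language frames the questions and findings.
SPRADLEY_ROLES = {
    "informant": {
        "whose_terms": "participant's",
        "example_methods": ["semi-structured interview", "diary study", "open card sorting"],
    },
    "subject": {
        "whose_terms": "researcher's",
        "example_methods": ["usability testing", "split testing", "concept testing"],
    },
    "respondent": {
        "whose_terms": "researcher's",
        "example_methods": ["survey", "structured interview", "closed card sorting"],
    },
    "actor": {
        "whose_terms": "researcher's",
        "example_methods": ["site analytics", "silent observation"],
    },
}


def role_for(method):
    """Return the participant role a research method implies, if listed above."""
    for role, info in SPRADLEY_ROLES.items():
        if method in info["example_methods"]:
            return role
    return "unknown"


print(role_for("diary study"))  # -> informant
```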

Over the course of a research program, research participants may at various times be regarded as subjects, actors or respondents — but if the goal is to know what really motivates the participants, to understand how to engage them at an emotional level, and to cultivate an enduring relationship with them, it makes a lot of sense to begin by relating to research participants as informants, beginning with unstructured or semi-structured interviews.

By starting with an informant relationship with research participants, researchers can develop a better idea of what matters to the participants, how they conceptualize and speak about these things, and most importantly how this motivates observable behavior. These insights (that is, findings that illuminate the inner life of participants) can focus subsequent research on the most relevant and impactful questions. It also improves the execution of the research by helping researchers use language that’s natural and understandable to participants, earning greater trust and cooperation, and minimizing misunderstandings. And in analysis, researchers and planners will mine more valid insights from the data, since they understand the motives, thought processes and language behind the responses and behaviors of the respondents, actors and subjects. And the insights will be accurate because they rely far more on fact than on (often unconscious) assumptions.

The other types of research can then report in more quantifiable terms, using much larger samples, how many subjects or actors perform certain behaviors or how many respondents give one answer or another to certain questions on a survey or questionnaire — and these actions and responses will now carry much more meaning because now the researchers have subjective insights to complement the objective data.

Two more points worth making: 1) I haven’t mentioned segmentation in this article, but wherever I mention learning about research participants, I am talking about learning about segments of participants (defined by goals, needs, attitudes and behaviors), and understanding the similarities and differences among them. 2) Generally, it is in the role of informant that research participants provide findings that drive design and creative. Informants inspire empathy and creative approaches. Subjects, respondents and actors tend to yield information useful in making strategy decisions. Using the full range of qualitative and quantitative research methods together intelligently can enable strategists and designers to work together more effectively to harness the full power of experience design.

By understanding research better — recognizing the difference between research that produces subjective insights and research that produces objective data, by not mistaking them for rival methods for producing the same kinds of findings, and by understanding how they can be used together to gain a holistic picture of one’s customers that is far more than the sum of the facts — an organization becomes more capable of understanding its customers without sacrificing their individuality to empty statistics.

Dialogue: art-work, design-work, artisan-work

S:

My view is that art is made without reference to the receiver.
It is entirely ego-centric.
It is thrown out into the world and if someone understands and desires it, it’s a miracle.
Design is made with reference to others — which is why real design is human-centered design.
I want my art self-centered and my design human-centered!

J:

I wouldn’t say, “miracle.”
What would commissioned work be? Artisan work?

S:

Depends on the benefactor
If the benefactor sees the artist’s vision and identifies with it (through that “miraculous” congeniality), it’s still art…
…but if the benefactor doesn’t know how to let the artist do the art, or the artist doesn’t know how to defend the art from the benefactor’s attempts to control the art, it becomes artisan work.
and here’s a new thought…
If a client doesn’t know how to let a designer do human-centered design or the designer doesn’t know how to defend the design from the client’s desire to control the design — what gets done is artisan work.

****

Update May 21, 2017:

3 types of participants in a creation:

  • The producer – the party producing a work.
  • The sponsor – the party funding the production of a work.
  • The consumer – the party enjoying the benefit of a work.

3 categories of production:

  • Art-work – In art-work, the producer produces work guided primarily by the producer’s own judgment, with less concern for the personal standards of sponsor or consumers. The artist produces as if for himself as consumer, and the work is chosen or accepted by the sponsor, almost as if intercepted, as an artifact manifesting the artist’s personal judgment. In art, the producer (artist) has final judgment.
  • Design-work – In design-work, the producer produces work guided primarily by the consumer’s judgment, with deliberate deemphasis on the personal standards of producer or sponsor. The active judgment in design is empathic judgment: the quality of one’s judgment is one’s ability to overcome personal judgment in order to judge by the consumer’s standards. In design, the user (consumer) has final judgment.
  • Craft-work – In craft-work, the producer produces work guided by the sponsor’s judgment, with deliberate deemphasis on the personal standards of producer or consumer (assuming the consumer is not the sponsor). The craftsperson produces for a sponsor, to the satisfaction of the sponsor. In craft-work, the sponsor (the one paying for the work) has final judgment.

Much pain in production arises from ambiguity or disagreement over the category of production. A sponsor believes what he is commissioning is primarily craftwork, being produced to his own personal satisfaction, when the producer thinks what is commissioned is either design or art. (A sponsor who lacks pluralistic awareness, due to autistic, narcissistic or naive realist tendencies, will not understand the difference between craft-work and anything else. It will simply become a control issue or clash of wills.)  Or a producer is hired to work as a designer, but sees himself as the final judge of the work. (This is inevitable when the producer lacks pluralistic awareness).

Of course, most work is a hybrid of all three, located in the middle regions of a three-axis gamut stretched between art-work, design-work and artisan-work — but even minor disagreements in the balance point can generate strain.

 

Pluritarian Pluriversalism

To someone born into an autistic universe controlled by a single set of strictly logical natural laws, the experience of empathy and the subsequent revelation of an empathic pluriverse redefines the meaning of miracle, and of transcendence, and of religion.

Before, miracles were exceptions to the laws of nature. After, miracles are the irruption of something in the midst of nothingness: other minds, each with a world of its own — each with the power to change the meaning of one’s own world.

Before, transcendence was defined in terms of an infinite reality standing beyond the finite objective world.  After, transcendence was defined in terms of an infinite reality standing beyond myriad finite objective worlds, each rooted in the elastic mind of a subject.

Before, religion was the attempt of an individual to commune with a transcendent reality with miraculous powers. After, religion was still the attempt of an individual to commune with a transcendent reality with miraculous powers, but the change in conceptions of transcendence and miracle means that it is the individual and the individual’s world that are transcended, and this means the route to transcendence is not around the world and one’s neighbors, but through them and their worlds. The activity of loving, respecting and learning from one’s neighbors is intrinsic to loving, respecting and learning from the infinite God who cannot be confined to any one world, however vast.

Myriad worship practices are needed to worship myriad aspects of an inexhaustible and inexhaustibly meaningful God. By this understanding, empathy is worship.

Design Thinking by committee

Combining the core insight of Design Thinking — “everything is design” — with the truism that “design by committee produces mediocrity”, it begins to appear that the widespread (mis)use of meetings to shape collective action might be one of the great engines of contemporary collective frustration. Much of our lives is mired in mediocrity because everything that matters most — our institutions, our processes, our approaches to solving big problems — ends up essentially designed by committee.

*

In distinguishing design problems from other kinds of problems, my rule of thumb is this: if a problem involves interactions between free people and things of any kind (objects, services, communications, screens, ideas) that problem should be viewed as a design problem, approached with design methods, developed as a design system, and evaluated as a design. In this light, all kinds of things that seem to be management, strategy, engineering, marketing, etc. problems are seen as varieties of design problems.

What design thinking does is fully acknowledge the “people part of the problem” as central to its resolution and focus its efforts on getting that part right. And the only way to do this is to include the very people who will, through their free choice (or rejection), make the resolution a success (or failure) as partners in the development of the solution.

Failing this, it will be necessary to handle the people part of the problem by 1) speculating on it, 2) ignoring it, or 3) eliminating it.

1) Speculation means remembering/assuming/guessing at the needs and wants, conceptions and perceptions, attitudes and tastes — in short, the practical worldview — of the people involved in the people part of the problem. We human beings are much worse at this than we think, especially when we don’t regularly put our visionary clairvoyance to the test. It is not uncommon in the design world to hear design researchers cheerfully admit to an inability to predict how people will behave, while others in the room make bold predictions based on their own gut-level knowledge of how people are. (It pays to remember why the Oracle at Delphi identified Socrates as the wisest man in Greece!) People research teaches respect for the elusiveness of other people’s worldviews.

2) Ignoring the people part of problems means pulling the engineering parts of the problem (the sub-problems of creating systems of unfree, rule-governed elements) out of context and solving those in the hope that the people part will take care of itself (or that “marketing’s got that covered,” or that the system can be tweaked after it is finished until people like it enough to accept it). The fact is, a great many engineers choose a career in engineering because they prefer interacting with objects more than interacting with subjects, and they will tend to prefer solutions to problems that allow them to spend most of their time in the company of objects or teams of like-minded people building object-systems. And that is fine, as long as someone has their eye on the people part and provides context for the engineering problems that contribute to the solution.

3) Eliminating the people part of the problem sounds ominous, and it ought to: it amounts to turning freely choosing people into unfreely complying people. It means destroying alternative choices through anticompetitive practices (like those employed by Microsoft in the 90s, or Apple’s recent supply chain manipulations), or finding ways to bypass choice and control behaviors directly, either through coercion (legislation) or psychological manipulation (like behavioral economics). The purpose of this is to make people into engineerable elements, that is, unfree, rule-governed, controllable, predictable elements of a profitable system. It was this mentality that predominated in 20th Century social engineering projects, which unfairly discredited the very concept of deliberate societal self-determination for a great many US citizens. Social engineering is a hellish totalitarian notion. Social design, however, is deeply liberal-democratic, and the future of liberal democracy depends on it.

But — getting back to the original thread — this means we must learn to see design problems wherever they occur — especially when they seem to be something other than design. It means also that we must adjust our response to them to allow the right mindset and methods. As Marty Neumeier pointed out, we cannot “decide our way through them, we must design our way through them.” Which, again, means meetings are the wrong format for shaping solutions. (Unless, like some Design Management people, you believe the right workshop techniques transform committees into design teams. I remain skeptical. I’ve seen workshops produce much more kumbaya than eureka. Workshops are more productive than most meetings, but what is produced should not be confused with design. Workshops are better-designed meetings, not meetings that produce better design.)

Once again, I’m going to trot out le Carré’s famous quote: “A desk is a dangerous place from which to view the world.” It is important to remember that a conference table is just a big desk for a committee to sit behind. No matter how many post-it notes, whiteboard markers and ice-breaking games you try to add to it, a meeting is a meeting is a meeting. To design effectively we must rethink why we meet, how we meet, what we can expect from meeting, and what thinking can only be done in non-meeting contexts.

Meetings are an effective tool, but like all tools, meetings have their proper uses and places where another tool might be better.

Design and trade-offs

For non-designers (and immature designers) the toughest part of design is trying on different trade-offs.

The reason it is so tough is this: while most people can shift between ideas with relative ease, it is harder to shift between conceptions — different logics of coherence and meaning that invest ideas with different significance.

Harder still is to allow new conceptions to animate perceptions. Old conceptions cling and highlight features of perception that would remain inconspicuous to fresh eyes. And each shift in design direction adds new relevancies without removing the old ones, so the problem becomes more intractable with each iteration.

It is like memory: it is easier to learn on command than to forget. The old ideas, once seen, become hard to unsee. The old concepts, once learned, become impossible to unlearn. Perception becomes almost cubistic — too many simultaneous perspectives are viewed at once.

Pluralistic play — the ability to flit between logics — to try on different conceptions and perceptions — this takes years of practice, and the practice can only start once a person has discovered the dimension of mind that multiplies the universe into innumerable overlapping everythings.

Universal Design Praxis

I find the term Design Thinking inadequate.

First, the term Design Thinking belongs to IDEO. As far as I know, they made the term up, they use it for marketing and it remains closely associated with them. It is uncomfortably too many things at once:  a semi-grassroots movement, a (vague) methodology, a bag of tricks, a style, an approach to problem-solving and a trademark.

But second, thinking is only one part of what goes on with Design Thinking. And in fact, in Design Thinking, thinking is demoted from its usual exalted position. In most situations in most organizations, making and doing activities are preceded by lengthy talking, making of cases, adducing of evidence, modeling, deciding, planning, and other activities of the head. But with Design Thinking, making and doing become more equal partners with thinking in determining what will be thought and done and made. Hands and feet enter the picture and work alongside the head (and heart) to shape what transpires.

For this reason, I am inclined to characterize this way of working more as a practice than a way of thinking.

Even practice fails to go far enough, though, because a practice can still position a practitioner outside of what is being worked on. With design problems one struggles inside them, rather than working on them or puzzling over them. Anyone who has gone through the wringer of a deep design problem can tell you: design immerses, involves, challenges and changes people at an unnervingly fundamental level. This is why talk around design, design thinking and related movements like UX and service design can get a little breathless and zealous and quasi-religious: because it does stimulate — even force — unexpected and profound self-transformations. Because the practice of doing/making/thinking iteratively feeds back into and modifies the doing/making/thinking and perceiving process, and the practitioners involved in it, it should be called a design praxis.

And since the active domain of design praxis is all systems involving both subjective free-willed, choice-making entities (a.k.a. people) and objective entities — and such systems are ubiquitous — it might even be called Universal Design Praxis. According to this perspective, most problems are actually design problems. When we limit design to traditionally defined design areas (graphic, product, digital, architectural, interior, fashion, and so on) we misdiagnose problems as engineering, marketing, management, economic, etc. problems — and usually end up factoring out the crucial element of free-will, treating people as beings to manipulate, control or coerce.

There is a moral/political dimension to design praxis: it works to engage human beings as free and appeals to free choice, and this also contributes to the whole movement’s quasi-religiosity.

So here are the core principles of Universal Design Praxis:

  • Any development of systems comprising both objective and subjective (free-willed) components is best approached as a design problem. (This encompasses the vast bulk of human activity.)
  • Design problems are resolved through iterative cycles of first-hand immersion, collaborative reflection, collaborative making, testing, revision, etc. Whatever the specific techniques used, they are used with this thrust in this basic framework: go to reality to learn, to make, to relearn, to remake…
  • Design praxis changes the practitioner as the problem moves toward resolution — the practitioner self-transforms into someone capable of seeing a solution that initially was invisible.
  • Design praxis involves reflective collaboration — multiple people working directly with realities (as opposed to speculating or recalling or applying expertise). Abstractions are derived afresh from direct exposure to reality (the reality of people, things, actions, institutions, places — whatever contributes to making a situation what it is).
  • Design praxis assumes, affirms, appeals to, and amplifies free-will.

 

Gorging ouroboros


Every philosophy is a philosophy of some kind of life.

For too many generations philosophers have philosophized about philosophizing to philosophers philosophizing about philosophizing.

This has turned philosophy into something exasperatingly inapplicable to anything important to anyone except a professional academic philosopher.

My belief (or self-interested prejudice) is that being a philosopher who philosophizes a life of human-centered design is a great privilege at this time in our culture.

Human-centered design lives at the intersection of many of our most problematic oppositions: theory-vs-practice, objectivity-vs-subjectivity, intuitive-vs-methodical, individual-vs-collective, revolution-vs-evolution, symbolic-vs-real, narrative-vs-fact, qualitative-vs-quantitative, holism-vs-atomism, coercion-vs-persuasion, technology-vs-humanities, natural-vs-artificial, and so on.

My philosophy feeds on the live problems and anxious perplexities that seize groups of diverse people when they collaborate to improve the lives of other people by changing social situations — physically, practically, symbolically and emotionally — and in this effort become so desperate to succeed that they are willing to stake or sacrifice their own cozy worldviews for the sake of sharing understandings with others.

I am convinced that philosophy can (and will soon) regain its relevance. It just needs a diet of something other than its own self-gorged self.

Overcoming empathy

A disempathic worldview: “We may be accused of lacking empathy, but this supposed deficiency is actually an efficiency, not only because there are convenient statistical workarounds, but because the very object of empathy is entirely useless. People can and should be understood in terms of observable behaviors and attributes. Any invisible “agent” slipped under these observable realities is at best too vague or messy to manage, and in all likelihood superfluous or nonexistent.”

You can’t argumentatively disprove a philosophy of this kind — certainly not in its own terms. With respect to mere argumentation, it is not a matter for disproof; it is a matter for disapproval. But disapproval is not objective. It is subjective, and therefore not admissible as a valid argument to a mind that excludes all but objective criteria. Arguments about arguments will ensue, but objective minds are unable to grasp how this kind of argument is even possible, and so for them it does not exist. So let’s not.

Luckily, we are not limited to mere argumentation. We are not Medieval Scholastics who must gather around the council table to establish theological truth through logical connections of doctrinal assertions.

We are children of the Enlightenment, and we know that we are not chained to the council table and books and figures and dogmas and arguments. We are able — and obligated! — to stand up and exit the room with all its shadowy abstract depictions of Truth — and walk out into the sunlight of reality to see how our truths perform when we test their fitness in helping us live effectively.

This is where design thinking and social scientific method become gloriously useful. Both take subjectivity as real and testable. This sounds abstract until you realize that the fates of businesses and organizations of all kinds hang on subjectivity.

On fighting well

I’ve been married for 23 years, exactly half of my life. I have two daughters. At times they have asked my wife and me how we’ve pulled it off. My answer has been: don’t try to avoid fights; learn to fight well. Not only is avoiding fights impossible — fighting may very well be the point of marriage.

My design career began around the same time. And in many ways it has followed a parallel path — especially with respect to fighting. That’s not surprising really. Marriage and design are all about human relationships, and a key part of relationships is fighting.

But learning to fight well has been a long process, and part of the process was revising the very goal of fighting. I will relate the process as it played out in design, but if you reflect on it, the lessons are more general. In fact, the lessons are universal.

Early in my design career I believed fighting was an obstacle to doing my design work. I had worked hard to develop good design skills and judgment, and I was hired to exercise them — so get out of my way and let me work. Fighting well meant taking a stand and defending Good Design. Who knew what Good Design was? “Trust me!”

These fights were no fun, mainly because they were not winnable. The customer is always right.

So, a little later in my career, I came to see fighting as a fact of design work. Learning to fight well was a basic job requirement. It wasn’t enough to design something good; you had to convince others it was good, or it would be shot down. Fighting well meant learning to articulate reasons: why a proposed plan is the best one, why a particular design approach is likely to produce superior results, why a particular design ought to be approved. Fights became civil arguments. “Trust my arguments!”

But in the end, no matter how rational people were, decisions often came down to speculations — especially speculations about other people and their likely perceptions and responses and all the consequences that follow. And, it turns out, people are passionate about their beliefs about other people, rooted as they are in fundamental conceptions of human nature and reality itself… So competing justifications would often clash and become, once again, disputes about whose judgement was better.

Usability testing — when you could get the client to buy it — changed everything. It did not end fighting, but it dramatically changed the character of fights. Speculations were now presented as guesses, not as precious convictions to defend against doubters, enemies of progress or taste, etc. Fighting well meant allowing reality to play referee. Testing was what settled disagreements. “Trust the process!”

But in the last decade or so, I arrived where I am now. I started noticing something new — a new kind of fighting that happens, not despite research, but because of it. (This is due largely to a shift to research methods designed to drive innovation, as opposed to research designed to remove usability flaws.)

Here’s what I noticed. This kind of research was most valuable to teams not when it helped us learn new things, but when it helped us unlearn old things we thought we knew. When a team is stripped of the concepts that help it make sense of and navigate a problem space, and does not have any ready concepts to replace them, the result is a state of perplexity and a distinctive existential pain. This pain makes people fight. They are intensely anxious to eliminate the perplexity. Anything that makes the escape from perplexity more difficult must be removed or suppressed, and unfortunately, this usually means other people and their incompatible ideas. But if you fight through this pain, and stay focused and faithful to your problem and the individuals on your team, something good always happens.

It reminds me of the birth classes my wife and I took with our first pregnancy. We were taught, “Labor is what the term implies: hard work.” If you stay with the process and see the labor for what it really is — not the symptoms of something going wrong, but what naturally happens when things are going right — you can labor through the discomfort and give birth.

So this is where I am now: Fighting well means laboring through the birth of a truly new idea. “Trust the labor pains of creativity!”

I have found that when I am in the throes of conflict with teammates this idea helps me stay in the right idea-birthing state of mind.

And when you labor this way, design becomes more than a process for making ideas and things. It makes relationships.

 

Gewollt

Jasper Johns - The Critic Sees

Gewollt – Ge’-volt (adj.)

  1. deliberate, intentional, intended
  2. (piece of art) contrived, awkward, cheesy

*

Gewollt occurs when art, which is supposed to be the exhibition of concrete, tacit qualities, is produced by explicit and general categories.

Nietzsche said it well: “When a poet is not in love with reality his muse will consequently not be reality, and she will then bear him hollow-eyed and fragile-limbed children.”

*

Corporateness is a species of gewollt — the effect of production by predominantly explicit processes. This form of activity is effective for engineering processes, but as soon as it is applied to anything meant to seem human, anything produced by it will have a hollow and soulless ring to it.

What makes a design compelling are concrete, tacit qualities that make it into the design — the capturing of something impossible to convey with language — the je ne sais quoi of the design that makes it irreplaceable by anything other than itself.

It is the art in design that makes it inexplicably resonant and desirable beyond its function and convenience.

*

Artists have an advantage. The work of artists takes place between the individual and the material.

With designers things are more complicated. Designers are usually working in teams, and the work is for others. The only way to infuse a design with art without allowing the design to become the personal expression of the designer and to devolve into art is to allow the designers to directly experience the people for whom they are designing, and their environments, their activities, their language — their world. Tacit empathy that cannot be conveyed through explicit findings reports is key.

The design of research approaches must not be understood solely in terms of data-gathering activities, but rather in terms of the production of encounters between designers and worlds.

 

 

The Republic of Reality

represent |repri-zent|
verb [with obj.]

  1. be entitled or appointed to act or speak for (someone), especially in an official capacity.
  2. constitute; amount to.
  3. depict (a particular subject) in a picture or other work of art.
  4. (formal) state or point out (something) clearly.

“Now that we are no longer fooled by these maneuvers, we see spokesmen, whoever they may be, speaking on behalf of other actors, whatever they may be. We see them throwing their ranks of allies, some reluctant, some bellicose, into battle one after the other.” – Bruno Latour


If knowledge is representative, this sense of representation (4) should not be too closely equated with (3) depicting or (2) constituting. It is better to emphasize its affinity with (1) acting or speaking on behalf of a reality.

Knowledge represents reality by being its spokesman in deliberation, conveying the considerations relevant to that reality, and negotiating for where that reality will figure into whatever is being discussed. If a representative speaks well for a reality, the reality will cooperate and reinforce his claim of representing his constituency. If he misrepresents a reality, the reality will undermine and discredit his representation by refusing to cooperate as the representative promised it would.

Again: our knowledge does not depict reality or make little idea-models that correspond to a reality — with our knowledge we politically represent a reality and convey what it does and will do with respect to a problem. We are standing in for a reality and representing it in its absence.

Of course, it pays to confer with any reality we are seeking to represent, and to be good students of that reality so we can represent it ever more faithfully. When we are representing people we may have conversations with them. Or we may immerse ourselves in their lives, interact and participate so we can get first-hand, first-person knowledge of what is going on. If we are representing non-human things we might have to watch, form hypotheses, interact, experiment, revise — again, so we can be taught by the reality how to represent it.

And, as Latour never tires of pointing out, every social situation is a heterogeneous collection of human and non-human actors.

Since design is nearly always intervening in some social situation in order to change it, what design researchers really do in the field is confer with the full social reality in order to understand it and fully represent it. And once hypothetical solutions are found, design researchers return to the social situation to confer with it about how it might react to them. Good designers are like good politicians — always shaking hands, knocking on doors, staying in touch, winning support.

 

Deliberation and experiment

The fewer participants you include in a deliberative process, the simpler the process can be. A solitary mind, thinking alone about personal experiences, can come to a resolution pretty quickly most of the time.

Each person you include complicates the deliberative process exponentially. Now there is a wider range of experiences, thinking styles, values, emphases and goals that must be considered and satisfied.

When you start including non-human actors in the deliberative process, which means adding experimentation to the mix — now you have something incredibly complex. If the group is trying to understand non-human actors, we now have something like a scientific community. If the question is broadened to include humans and non-humans combined, we have something a lot more complicated: a society.

And if you try to include all humans and all non-humans, you are now in the realm of the impossible. But it is probably a worthy impossibility.

Douche Theory 2×2 Model (R)


I plan to use this diagram to help me explain different approaches to design strategy.

Human-centered design helps Douche organizations become Keepers.