All posts by anomalogue
Why qualitative research?
Quantitative research methods (as valuable as they are) can never replace interviews and ethnographic research. Despite what many UXers think, the essential difference between ethnographic research and other forms of qualitative research is not merely that it observes behavior in context, but rather, as Spradley notes in The Ethnographic Interview, that in ethnographic research the person being researched plays a role in the research quite different from that of other methods: the role of informant (as opposed to subject, respondent, actor, etc.). An informant doesn’t merely provide answers to set questions or exhibit observable behavior. An informant teaches the researcher, and helps establish the questions the researcher ought to attempt to understand — questions the researcher might never have otherwise thought to ask. An informant is far more empowered to surprise, to reframe the research, and to change the way the researcher thinks. In ethnographic research the researcher is far less distanced and intellectually insulated from the “object” of study, and is exposed to a very real risk of transformative insight.
This attitude toward human understanding goes beyond method, and even beyond theory. It implies an ethical stance, because it touches on the question of what a human being is, what constitutes understanding of a human being, and finally — how ought human beings regard one another and relate to one another.
*
The passage that triggered this outburst, from Hannah Arendt’s The Human Condition:
Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: “Who are you?” This disclosure of who somebody is, is implicit in both his words and his deeds; yet obviously the affinity between speech and revelation is much closer than that between action and revelation, [This is the reason why Plato says that lexis (“speech”) adheres more closely to truth than praxis.] just as the affinity between action and beginning is closer than that between speech and beginning, although many, and even most acts, are performed in the manner of speech. Without the accompaniment of speech, at any rate, action would not only lose its revelatory character, but, and by the same token, it would lose its subject, as it were; not acting men but performing robots would achieve what, humanly speaking, would remain incomprehensible. Speechless action would no longer be action because there would no longer be an actor, and the actor, the doer of deeds, is possible only if he is at the same time the speaker of words. The action he begins is humanly disclosed by the word, and though his deed can be perceived in its brute physical appearance without verbal accompaniment, it becomes relevant only through the spoken word in which he identifies himself as the actor, announcing what he does, has done, and intends to do.
*
The dream of quantitative research rendering qualitative research obsolete might be one more instance of an age-old fantasy: a world of people who are seen and not heard, who obey our predictions and commands, to whom we can dictate terms. Such beings cannot remind us of the difference between reality itself, and one’s own conceptions of it — and they leave the mind in peace to be “its own place, and in itself can make a Heaven of Hell.” Hell is not other people, per se. It is speaking people showing us what we’d rather not know, which can strip us of what we knew but can no longer believe.
*
(Maybe we lack faith in our capacity to recover from loss of faith?)
Useless (or worse)
When chaos is experienced, a failure of reason has already occurred. In chaos we encounter realities our reason is not equipped to order and make sense of. This is the experience of perplexity, where we relive the horror of birth.
The only people in the world perverse enough to find meaning in such meaninglessness are philosophers. Wittgenstein said it best: “A philosophical problem has the form: I don’t know my way about.”
*
We prefer to believe the world is discovered bit by accumulated bit in a vacuum of space and knowledge. We want to believe in a world that is created ex nihilo. What we have is established, and what isn’t is nothing.
We hate to believe in a world that is articulated from chaos, because we hate the consequence: the order we have lent to the world which has made it familiar and predictable could suddenly recede and shock us with raw alienness.
*
It is this possibility — that the world can be revealed as strange — that makes people hate their neighbor. It is the neighbor, with his strange views, peculiar habits, and outlandish tastes, who jointly holds the potential to defamiliarize the world. The potential, though, is only actualized voluntarily by ourselves. Each person holds the power either to open the door to the neighbor, or to bar it. If the neighbor is invited in, if his views are seriously entertained, the two gathered in such a spirit of hospitality and truth are in a position to recognize that reality and our idea of reality are not identical. In some deeply disturbing and inexpressible way, reality transcends idea. Without the disruption of the neighbor, idea eclipses what is beyond idea, and becomes idol.
But the door can be barred. We are free to abide in the mind. “The mind is its own place, and in itself can make a Heaven of Hell.” By withholding the status of “neighbor” from all but the like-minded — those who ditto our opinions, who agree with us that the details of reality that appear to contradict our views (or more subtly the exclusive validity of our views) are irrelevant (if not outright deceptions), who share our antipathy toward our non-neighbors and agree with us that entertaining their ideas is fruitless at best (and possibly corrupting) — we find willing partners in reducing the world to pure idea. The impurity rejected is that of reality, which transcends mere idea.
*
We stabilize our sense of reality through a variety of intertwined methods. One of these methods is successfully observing and describing the world to ourselves. Another is to reliably anticipate or predict events, or even better to influence or control them. But perhaps the most important method for creating a solid sense of reality is to find agreement with others. This last method can compensate for the absence of the others.
*
[Solipsism] “is rare in individuals–but in groups, parties, nations, and ages it is the rule.”
*
When a group agrees with itself that whatever appears to be an anomaly is mere noise, or error, or deception, or irrelevance, it is able to avoid (or at least postpone) confrontation with anomalies, which are the sparks of chaos, the pinholes in our knowledge. Anomalies remind us how much more there is to things than we possess as individuals, or as members of a particular group.
It is easier to love the reality we have made for ourselves — our own sense of truth — than it is to love reality. Reality challenges us, makes claims on us, changes us. If we think of ourselves as discrete, unchanging, self-consistent beings, reality threatens our mortality. If we think of ourselves as connected, evolving, expanding creatures, reality offers us perpetual natality.
*
We hate the possibility of the situation that requires the aid of philosophy, so we deny that possibility and we deny the use of philosophy. Philosophy is a waste of time at best, and most likely corrupting.
But perhaps there’s some validity to the suspicion. Just as generals thrive on outbreaks of war and doctors thrive on outbreaks of disease, philosophers thrive on outbreaks of disillusionment.
The slipperiest slope
The slippery slope argument is the slipperiest slope. In fact, it is the slipperiness itself, a universal lubricant that creates a friction-free abstract world where the slightest tilt automatically dumps whatever sits on it into an abyss of catastrophic consequences. The “friction” it removes is that of human judgment and responsibility — our ability to decide to change course.
Supra-individual mind
Every thought thinkable by an individual mind has already been thought. Future thoughts will come from people who know how to think collaboratively beyond their own individual capacity as responsible participants in a supra-individual mind.
This idea should not be mistaken for common “collectivism”. It is the very opposite of the mob mentality, where each individual is reduced to what all human beings have in common, becoming roughly identical, and behaving according to animal tribal instinct. Supra-individual thinking makes use of intellectual differences as well as commonalities. It is also different from hierarchical team thinking, where one mind understands the problem completely and then enlists the help of others to manage and execute. Supra-individual thinking means more than one person is required to participate if an idea is to be fully understood, so no one person has the “vision” in its entirety. Supra-individual thinking is also different from the kind of thinking that comes from (relatively) homogeneous groups, where once an idea is conceived by one member of the group, all are instantly and effortlessly able to grasp the idea, because arriving at the idea was simply a matter of quickness or luck. Supra-individual thinking arrives at agreements, but not agreements where each person holds an identical conception and opinion, but rather where each person holds conceptions and opinions compatible with the others in guiding collaborative action. And finally supra-individual thinking is not a division of labor among experts in different disciplines. The coherence is not mere systematization of separate black-box parts, but organic, conceptual coherence. Supra-individual thinking is unified intuitively and tacit-practically as well as rationally.
In collaborative thought, the group somehow comes to know something coherently, which is only later completely understood by some or all of the group, but in the meantime is effectively applied to real-world problems.
*
Supra-individual mind is similar to common sense, in the meaning of “the sense of reality arising from the five senses perceiving together”. It’s the blind men and the elephant story, except with temperamental/psychological differences substituted for circumstantial ones.
*
Supra-individual mind is the concrete actualization of pluralism. It begins with tolerance and skepticism, but then moves far beyond them.
Geertz on irony
Geertz, from his essay “Thinking as a Moral Act”:
“Irony rests, of course, on a perception of the way in which reality derides merely human views of it, reduces grand attitudes and large hopes to self-mockery. The common forms of it are familiar enough. In dramatic irony, deflation results from the contrast between what the character perceives the situation to be and what the audience knows it to be; in historical irony, from the inconsistency between the intentions of sovereign personages and the natural outcomes of actions proceeding from those intentions. Literary irony rests on a momentary conspiracy of author and reader against the stupidities and self-deceptions of the everyday world; Socratic, or pedagogical, irony rests on intellectual dissembling in order to parody intellectual pretension.”
It seems to me that systems thinking — at least thinking about systems in which the thinker is a participant — might require a certain degree of irony. Our experience of being caught up in a system is one thing, but what is required to adjust or change the system is another — and the connection is rarely obvious. That experience is an intrinsic part of the workings of many systems, particularly management systems.
Limits of the explicit
Explicit forms of understanding and communication (explicit truth) can represent only some aspects of reality. In conflicts between rationalism and irrationalism, enlightenment and romantic ideals, suits and creatives, what is at stake is the leftover reality — its nature, its unity and/or multiplicity, how/whether truth can be established/shared, and how it relates to those realms of reality that can be known and spoken of explicitly.
My own hunch is that the non-explicit aspects of reality are precisely those that matter to us, and the near-universal requirement that things be known and spoken of in an explicit mode serves as a filter that systematically excludes the non-explicit from consideration in most collective endeavors.
I also think the non-explicit aspects of reality are precisely those that most need to be agreed upon and shared, but this agreement and sharing is different from agreement on fact or sharing a belief in the validity of an argument.
Conserving, simplifying, forgetting
When a person calls himself a “conservative” what precisely is it that is conserved? Is it ideas? Do conservatives wish to keep valued ideas intact and pure?
Or is it a wish to conserve our limited store of moral energy? Despite what we would like to believe, we cannot just will this energy into existence, because will itself is constituted of this energy.
And even if energy were unlimited, time is indisputably limited. If we expend most of our energy and time sifting through a near-infinite number of details, then wrestling to organize the mess into something clear and cohesive, wouldn’t the result of this effort be so complicated and unwieldy that our efforts would be hopelessly encumbered (not to mention pleasureless)?
It seems our choice is somewhere on a continuum ranging between “analysis paralysis” in the face of innumerable disorganized facts on one hand, and decisive, energetic action based on simplification verging on willful ignorance on the other. To put it in Yeats’ words, “The best lack all conviction, while the worst / Are full of passionate intensity.” I think this tendency grows more and more exaggerated as the old fundamental thought-structures of a culture begin to give out under the pressures of new social conditions, and new underdeveloped and overcomplicated ones vie (lamely) to replace them.
*
Does change resulting from consideration of new and multiple perspectives necessarily mean appending and complicating our idea-world, and making it increasingly unlivable? Probably at first. But thinking deeply can also have a simplifying effect. This simplification itself takes time and energy, and requires modes of thinking many people find even more uncomfortable than dealing with baroquely-rehacked, elaborately epicycled and recycled concepts.
Perhaps it is not over-simplification that makes ideologies so damaging to the world — since, after all, all thinking and all abstraction involve selective forgetting and remembering (what we call discerning relevance and discovering generalities) — but rather that the simplifications take into account only what one group or another considers relevant.
Shibbolethargy
Shibbolethargy: A form of intellectual laziness which uses the tools of thought (ideas, concepts, arguments and symbols) to create an appearance of rigorous thought, when in fact the true aim is to signal one’s membership in some particular tribe (and consequently unconditional opposition to other tribes).
At the root of shibbolethargy is the desire to evaluate ideas and actions ad hominem rather than on their own merits, while appearing to rely on principle and reason.
The attitude a shibbolethargic critic strikes is this: when confronted by an uncomfortable, semi-/un-comprehended idea, the most efficient means to evaluate it is to trace it back to the root, to see from what ground the idea has grown (rather than take the opposite course — which requires more trust, time and work — to judge the tree by its fruits). The root of the idea is the believer. If the believer is found to be a victim/perpetrator of some pernicious, delusional ideology, then by extension the idea is contaminated, and all efforts to understand the idea will at best be unfruitful and at worst can result in ideological contamination.
In the end, while many words may be used, many elaborate arguments, memorized and recited, many stories told both anecdotal and historical, no thought has been done and no new understanding has been found. The old understanding is defended and preserved, not so much through understanding and responding to other ideas, but rather through proving (solely to the satisfaction of the defender) that understanding and responding to other ideas is unnecessary — and probably dangerous to boot. In other words, that one is unwilling to see why he ought to think something he has not already thought.
Decision-making scenarios
Scenario 1 (thesis)
A: “Maybe this will work…”
B: “Before we commit the effort, can you explain how it will work, assuming it might, keeping in mind we have limited time and money?”
A: “I think so. Give me a day.”
B: “We don’t have a day to spare on something this speculative. Let’s come up with something a little more baked.”
… and [eventually, inevitably]
B: “So, what are the best practices?”
Scenario 2 (antithesis)
A: “I have a hunch this will work. Let’s go with it.”
B: “Can you explain how it will work?”
A: “Trust my professional judgment. My talent, training, experience, [role, title, awards, track record, accomplishments, etc.] distinguish my hunches.”
Scenario 3 (synthesis)
A: “I have a hunch this might work. Hang on.” … “Whoa. It did work. Look at that.”
B: “How in the world did that work?”
A: “I don’t know. Let’s try to figure out why.”
Shhhhhhh
Here’s what I learned from the Pragmatists (mostly via Richard J. Bernstein, who has probably had a deeper and more practical impact on how I think, work and live than any other author I’ve read): An awful lot of what we do is done under the guidance of tacit know-how.
After we complete an action we are sometimes able to go back and account for what we did, describing the why, how and what of it — and sometimes our descriptions are even accurate. But to assume — as we nearly always do — that this sort of self-account is in some way identical to what brought these actions about or even what guided them after they began is an intellectual habit that only occasionally leads us to understanding. Many such self-accounts are only better-informed explanations of observed behaviors of oneself, not reports on the actual intellectual process that produced the behaviors.
To explain this essential thoughtlessness in terms of “unconscious thoughts” that guide our behavior as conscious ones supposedly do in lucid action is to use a superstitious shim-concept to maintain this mental/physical cause-and-effect framework in the face of contrary evidence. I do believe in unconscious ideas that guide our thoughts and actions (in fact I’m attempting to expose one right here), but I do not think they take the form of undetected opinions or theories. Rather they take the form of intellectual habits. They’re moves we just make with our minds… tacitly. Often, we can find an “assumption” consequent to this habitual move and treat this assumption as causing it, but this is an example of the habit itself. It is not the assumption that there is a cause that makes us look for the cause; it is the habitual way of approaching such problems that makes us look for an undetected opinion at the root of our behaviors. We don’t know what else to do. It’s all we know how to do.
*
I’m not saying all or even most behavior is tacit, but I do believe much of it is, and particularly when we are having positive experiences. We generally enjoy behaving instinctually, intuitively and habitually.
*
Problems arise mainly when one instinct or intuition or habit interferes with the movements of another. It is at these times we must look into what we are doing and see what is unchangeable, what is variable and what our options are in reconciling the whole tacit mess. The intellectual habit of mental-cause-physical-effect thinking is an example of such a situation. Behind a zillion little hassles that theoretically aren’t so big — no bigger than a mosquito buzzing about your ears — is the assumption that we can just insert verbal interruptions into our stream of mental instructions that govern our daily doings without harming these doings. As I’ve said before, I do think some temperaments operate this way (for instance, temperaments common among administrators and project managers), but for other temperaments such assumptions are at best wrong, and at worst lead to practices that interfere with their effectiveness.
Software design and business processes guided by this habit of thought tend to be sufficient for verbal thinkers accustomed to issuing themselves instructions and executing them, but clunky, graceless and obtrusive to those who need to immerse themselves in activity.
*
It is possible that the popular “think-aloud” technique in design research is nothing more than a methodology founded on a leading question: “What were you thinking?” A better question would be: “Were you thinking?”
*
The upshot of all this: We need to learn to understand how the various forms of tacit know-how work, and how to research them, how to represent them in a way that does not instantly falsify them, and how to respond to them. And to add one more potentially controversial item to this list: how to distinguish consequential and valuable findings documentation from mere thud-fodder which does nothing in the way of improving experiences, but only reinforces the psychological delusions of our times. If research can shed this inheritance of its academic legacy — that the proper output of research is necessarily a publication, rather than a direct adjustment of action — research can take a leaner, less obtrusively linear role in the design process.
Pluralism, education, competition, and brand
Some forms of competition support pluralism, and some forms of competition undermine it. This fact has become conspicuous to me looking at the issue of school competition.
If K-12 schools were to compete like universities, creating areas of distinction, basing their claims of excellence on the accomplishments and reputations of faculty and alumni, that would be a form of school competition that would generate diverse approaches to education, suitable to a wide variety of adult destinies. But if school competition were to become a matter of who produces the highest standardized test scores, I think it would have the opposite effect. The differences would center around pedagogical techniques for approaching as closely as possible a predetermined ideal.
*
I wish I could find the source, but years ago I read an article that claimed that what was different about the American business culture — the very secret of its flourishing — was its nearly-reckless environment of forgiveness, which encouraged risk, experimentation, optimism and consequently innovation. In Japan, if you took a risk and blew it, that was it for your career. In America, you were admired for your daring.
My question is this: Is our educational system encouraging or undermining this kind of inventiveness? Historically, how much has America’s success rested on technical proficiency — math and science — and how much on sheer confidence? Maybe those ludicrously high self-esteem scores of our students, so frequently ridiculed (most recently in Waiting for Superman), are actually a success indicator.
My fear, to put it in brand terms, is that the USA has turned its back on its brand, and has committed itself to becoming an international commodity. Our educational system is part of our unconscious national brand activation.
*
And to circle this whole mess around to the start, I think what attracts me to brand is that competition between brands, to the degree that the brands really are positioned against one another, is a pluralistic mode of competition. Multiple standards of excellence compete against one another for business.
Research: intuition transference
I’m trying to develop a thought, and I suspect it’s already been worked out and articulated somewhere, but it sure isn’t present in the business world. It’s related to a point a friend made to me recently, that much of anthropology (and of qualitative research in general) is over-focused on language and ignores much of the pre-/non-linguistic concrete reality that constitutes our private and cultural lives.
As designers, we work largely with language, but as most people will admit, the best designs are great because they relieve us of the necessity to think in language. We just use our tacit know-how and accomplish what we wanted, without ever verbalizing the means or the ends. Designs that require users to stop and verbalize everything as they go are inadequate to varying degrees, based, I think, on the temperament of the user. I am convinced some people live their lives in verbal self-dialogue on most matters, oscillating between verbal thought and execution of what is thought, while others lose themselves in tacit activity, and every requirement to think verbally is an unwelcome interruption. This has serious UI design implications, because the former want things spelled out explicitly, while the latter are feeling for intuitive cues largely invisible to many users.
I’m the second kind of temperament, and it really is why I don’t like to look at clocks, lists or timesheets, because it destroys the continuity of my activity. Even when I’m working in words, the words are not explicit questions and answers, but more like blocks I’m mutely playing with. I think this is a Wittgensteinian thought: I’m developing a tacit know-how in the use of language to do some particular thing that I can’t yet verbalize, not entirely unlike building a house using a command language.
I think language is a very flexible instrument, and based on how well developed it is, it is able to justly articulate much of what goes on in the tacit practical world, and once it is able to do this, it becomes instrumental, capable of being used in planning and executing. My real question is this: how valuable an investment is the development of language in design projects? What are the possible tradeoffs?
- We can inadequately describe the worldviews of our designands (sorry, experimenting with a coinage), and save time and money at the expense of articulate understanding and design quality.
- We can adequately describe the worldviews of our designands, and gain articulate understanding and design quality at the expense of time and money.
- We can dispense with description of worldviews of our designands, and gain design quality for less time and less money, at the expense of articulate understanding.
Here’s a thought: when we write an ethnography, what we are really doing is designing language and models to help some particular audience cultivate some particular relationship with people of some culture. This sounds functionalist, but I think it sort of protects us from mere functionalism in the way that phenomenology protects metaphysics precisely by setting it outside the domain of its inquiry. This approach protects the dignity of informants by throwing out every pretense of comprehending them as people, and instead comprehending what is relevant to relating to them.
The role of design researcher
In most places I’ve worked, design research is conducted primarily or exclusively by people playing a researcher role. The researcher’s job is to learn all about the users of a system a team is preparing to design, to document what they have learned and then to teach the design team what they need to know to design for these users. Often the information/experience architect(s) on the project will conduct the research then shift from the researcher role to a designer role. Often content and visual designers will (optionally) observe some of the sessions as well. But it is understood that in the end, it will be the research findings and the testimony of the researcher that will inform the design in its various dimensions (IA, visual, content, etc.).
It is time to question this view of research. When a design feels dead-on perfect and there’s something about it that is deeply satisfying or even moving, isn’t it normally the case that we find that rightness defiant of description? Don’t we end up saying, “You just have to see it for yourself”? And when we want to introduce two friends, we might try to convey to them who the other person is by telling stories, giving background facts or making analogies, but in the end we want our friends to meet and interact and know for themselves. Something about design and people — and I would argue, the best part — is lost in descriptions.
My view is that allowing researchers and research documentation to intercede between users and designers serves as a filter. Only that which lends itself to language (and to the degree we try to be “objective”, to the kind of unexpressive and explicit language least suited to conveying the je ne sais quoi qualities that feed design rightness) can make it through this filter. In other words, design documentation, besides accounting for half the cost of research, not only adds little value, it subtracts value from the research.
What is needed is direct contact between designers and users, and this requires a shift in the role of researcher and in research documentation. The role of researcher would become much more of a facilitator role. The researcher’s job now is to 1) determine who the users are, 2) ensure that research participants are representative users, which means their screening responsibilities are increased, 3) create situations where designers can learn about users directly from the users themselves, not only explicitly but also tacitly, not only observationally but interactively, and 4) help the designers interpret what they have learned and apply it appropriately to their designs.
In this approach, design documentation does not go away, but it does become less of the primary output of research, and more of a progress report about the research. The primary tangible output of the research should be design prototypes to test with users, to validate both the explicit and tacit understandings developed by the design team. But the real result of research is the understanding itself, which will enable the team to produce artifacts that will be indescribably right, seeing that this rightness has been conveyed directly to the team, not forced through the inadequate medium of description.
Having a place
Reading Gilbert Ryle’s explanation of the expression “in my head”, I reflexively asked a Nietzschean question: Why would we be satisfied with understanding thoughts to be located in our heads, as if they occupied a space? Certainly, a thought process could lead us to that idea, and (collective) intellectual habit could preserve it, but could there be something satisfying or comforting about the idea that has made us more hospitable toward it? I recalled a passage from Hannah Arendt’s The Human Condition:
The profound connection between private and public, manifest on its most elementary level in the question of private property, is likely to be misunderstood today because of the modern equation of property and wealth on one side and propertylessness and poverty on the other. This misunderstanding is all the more annoying as both, property as well as wealth, are historically of greater relevance to the public realm than any other private matter or concern and have played, at least formally, more or less the same role as the chief condition for admission to the public realm and full-fledged citizenship. It is therefore easy to forget that wealth and property, far from being the same, are of an entirely different nature. The present emergence everywhere of actually or potentially very wealthy societies which at the same time are essentially propertyless, because the wealth of any single individual consists of his share in the annual income of society as a whole, clearly shows how little these two things are connected.
Prior to the modern age, which began with the expropriation of the poor and then proceeded to emancipate the new propertyless classes, all civilizations have rested upon the sacredness of private property. Wealth, on the contrary, whether privately owned or publicly distributed, had never been sacred before. Originally, property meant no more or less than to have one’s location in a particular part of the world and therefore to belong to the body politic, that is, to be the head of one of the families which together constituted the public realm. This piece of privately owned world was so completely identical with the family who owned it that the expulsion of a citizen could mean not merely the confiscation of his estate but the actual destruction of the building itself. The wealth of a foreigner or a slave was under no circumstances a substitute for this property, and poverty did not deprive the head of a family of this location in the world and the citizenship resulting from it. In early times, if he happened to lose his location, he almost automatically lost his citizenship and the protection of the law as well. The sacredness of this privacy was like the sacredness of the hidden, namely, of birth and death, the beginning and end of the mortals who, like all living creatures, grow out of and return to the darkness of an underworld. The nonprivative trait of the household realm originally lay in its being the realm of birth and death which must be hidden from the public realm because it harbors the things hidden from human eyes and impenetrable to human knowledge. It is hidden because man does not know where he comes from when he is born and where he goes when he dies.
Not the interior of this realm, which remains hidden and of no public significance, but its exterior appearance is important for the city as well, and it appears in the realm of the city through the boundaries between one household and the other. The law originally was identified with this boundary line, which in ancient times was still actually a space, a kind of no man’s land between the private and the public, sheltering and protecting both realms while, at the same time, separating them from each other. The law of the polis, to be sure, transcended this ancient understanding from which, however, it retained its original spatial significance. The law of the city-state was neither the content of political action (the idea that political activity is primarily legislating, though Roman in origin, is essentially modern and found its greatest expression in Kant’s political philosophy) nor was it a catalogue of prohibitions, resting, as all modern laws still do, upon the Thou Shalt Nots of the Decalogue. It was quite literally a wall, without which there might have been an agglomeration of houses, a town, but not a city, a political community. This wall-like law was sacred, but only the inclosure was political. Without it a public realm could no more exist than a piece of property without a fence to hedge it in; the one harbored and inclosed political life as the other sheltered and protected the biological life process of the family.
It is therefore not really accurate to say that private property, prior to the modern age, was thought to be a self-evident condition for admission to the public realm; it is much more than that. Privacy was like the other, the dark and hidden side of the public realm, and while to be political meant to attain the highest possibility of human existence, to have no private place of one’s own (like a slave) meant to be no longer human.
*
We will have a place of our own, one way or another. If we cannot have it in physical space, we will create that place socially. And failing that, we will establish it in our own mind and live inside our own private place.
*
Giving a person a place in your own life is an act of humanity.
Designs and gifts
In honor of Hanukkah and Christmas, two great gift-giving holidays, this post is about gifts.
*
Agreement does not (only) mean correspondence of belief. More than that, it means compatibility of belief. It means the possibility of relationship in the medium of understanding, activity and purpose. A truly agreeable gift signals agreement in this expansive sense.
*
From Clifford Geertz’s “From the Native’s Point of View”:
…Accounts of other peoples’ subjectivities can be built up without recourse to pretensions to more-than-normal capacities for ego effacement and fellow feeling. Normal capacities in these respects are, of course, essential, as is their cultivation, if we expect people to tolerate our intrusions into their lives at all and accept us as persons worth talking to. I am certainly not arguing for insensitivity here, and hope I have not demonstrated it. But whatever accurate or half-accurate sense one gets of what one’s informants are, as the phrase goes, really like does not come from the experience of that acceptance as such, which is part of one’s own biography, not of theirs. It comes from the ability to construe their modes of expression, what I would call their symbol systems, which such an acceptance allows one to work toward developing. Understanding the form and pressure of, to use the dangerous word one more time, natives’ inner lives is more like grasping a proverb, catching an allusion, seeing a joke — or, as I have suggested, reading a poem — than it is like achieving communion.
*
“Understanding the form and pressure of, to use the dangerous word one more time, natives’ inner lives is more like grasping a proverb, catching an allusion, seeing a joke — or, as I have suggested, reading a poem…” or knowing how to design for them.
A design that makes sense, that is easy to interact with, and that is a valuable and welcome addition to a person’s life is proof that this person is understood, that the designer cared enough to develop an understanding and to apply that understanding to that person’s benefit.
A good design shares the essential qualities of a good gift.
*
A post from an old blog:
When one person gives another person a perfect gift, the gift is valuable in three ways:
- The gift itself is intrinsically valuable to the one receiving it.
- The fact that the giver knows what gift the receiver will love demonstrates that the giver cares enough to reflect on what the receiver will value, and that this effort has yielded real insights. The perfect gift is evidence that the giver cares and understands.
- The gift becomes symbolic of the receiver’s own relationship to the world — an example of what they define as good. The perfect gift becomes a concrete symbol of the receiver’s ideals, which the receiver and others can see and understand, and contributes to the receiver’s own self-understanding and social identity.
Great design experiences are similar to gifts. When a design is successful the person experiencing the design gets something valuable, sees tangible proof the provider of the design understands and values them, and receives social affirmation that helps them feel at home in our shared world.
Definition
Heaven is the world once it knows who it is, and what it is not.
Finished The Human Condition
I finished Hannah Arendt’s The Human Condition this morning.
A passage from the last chapter was especially significant, because it hit several of my own core themes from the last several years (which were, in fact, indirectly implanted by Arendt herself, via Richard J. Bernstein):
- That humanity is most fully actualized when automatic behavior is transcended in the conscious decision to think, deliberate with others, and act intentionally.
- Seeing human life from an exterior position rather than an interior one has ethical consequences.
- That behaviorism wishes to understand humanity as that which is observed from outside (empirically observed, like science observes its objects), but that only human beings restricting themselves to biological-social automatism can be understood in this manner. Fully actualized human behavior requires speech, and it has the power to change the worldview of the “observer”, so that theoretical frameworks and research methods are in question, and the observer is deprived of his uninvolved, neutral, outsider perspective.
- That science is significant as a cultural phenomenon, an extremely effective method of coming to agreements, but that these agreements are not the only kind of agreements possible between human beings, nor are they the highest. (However, they are the easiest agreements to reach, and in a world starving for agreement and its attendant stability, this value can eclipse all others combined. And in fact it has, even in spheres of human activity that call for higher forms of agreement, namely in education, in government and in business. Business defines its goals strictly in terms of quantitative profits largely because this is the easiest standard to set and the hardest to argue against. It makes people feel all hard-nosed and tough to assert it against their inclinations, but in fact this is a cheap and easy move, and it is not a heroic sacrifice, but a cowardly self-betrayal.)
- That much of commercial life is dominated by behaviorist psychology, and the scientific mode of agreement, both of which eliminate the “revelatory character of action” (which, in Arendt’s definition, includes speech). (“Revelatory character” is antithetical to predictability. Whether predictability is a defense against revelation, or suppression of revelation is a means to predictability, the need to predict and the desire to not be surprised are two of the most powerful, unquestioned and universal corporate values. (This twofold force is singlehandedly responsible for that repellent quality we call “corporateness” (and constitutes the single most obstinate impediment to innovation, which is simultaneously celebrated in word and undermined in action in most groups.)))
Here’s the passage:
Influence over reverence
Quantity of reverence matters less than quality of influence.
To revere someone excessively — to make a person an object of worship instead of a teacher with a relevant, practical and surprising lesson — can even be a defense against influence.
*
It is easy to revere something of one’s own invention, but the kinds of disruptive revelations a teacher can impart to a willing student cannot be invented by any individual.
*
Idolatry displaces an involved relationship with an infinite subject in favor of a relationship to a finite object. An object relationship is distanced, defined and possessable.
(In Buberian terms, idolatry relates to Thou in I-It terms.)