Category Archives: Philosophy

Word torture

There is much to hate about Boomers, but their most hateful fault is their sexuality.

This sexuality is characterized by two equally unfortunate ideals: frankness and naturalness. Deployed in tandem, these ideals destroy everything mysterious and fascinating about love, and reduce it all to a stinky, sweaty, hairy, biodegraded mess encapsulated by the Boomer’s favorite phrase for what they most enjoy doing to each other: “make love”.

I think I speak for my generation when I say I’d much rather make war.

Some social critics have blamed the divorce pandemic of the 1970s on the Boomers’ infamous narcissism, egocentricity and irresponsibility. There is no doubt those Boomer vices played a significant role.

But I think there is a more direct and obvious explanation: the horny grossness of Boomers just made them unable to stand being around each other.

Admittedly, this is hate speech of the worst kind. But I blame society, both for my hate and for my hypocritical embrace of this hate. And I blame this particular unrepentant outburst on the Boomer author of a horrible book I’m trying to read now — a book on Kabbalah.

How can I be expected to exercise moral self-discipline, after days of writhing, retching and throwing up in my mouth over sentences like this:

His wife said, “Raphael, why do you waste your energy on trying to make books for Jews?” He would reply, “Because your father, his memory is a blessing, wasted his energy trying to make books for Jews, and when I married you, his business was part of your dowry. And besides, I love making Jewish books almost as much as I love making love to you.” Then she would be silent.

My margin note: “stunned silent by disgust at horny Boomer frankness.”

Another passage relates a joke told by a rabbi on a first date.

Seated at the cafe, Kalman tried to relax by telling a joke.

“So there are these two old Jews who are obsessed with knowing what happens after you die,” he said, putting his fork into a slice of coconut cream pie. “They swear a solemn oath that, God forbid, whoever dies first will stop at nothing to contact the one who survives. Moishe dies. Yonkel sits shivah, says kaddish for eleven months…”

“Shivah? Kaddish?”

“Jewish mourning rituals. But nothing happens. Then, after a few years, one evening the phone rings. It’s Moishe!

“‘Moishe, is that you?’

“‘Yes, it’s me, but I can’t talk long.’

“‘So then quick, tell me, what’s it like?’ asks Yonkel.

“‘Oh, it’s wonderful here. I sleep late, have a big breakfast, and then I make love. If the weather’s nice, I usually go out into the fields and make love again. I come back inside for lunch and take a nap. Then I go out into the fields and make love, sometimes twice. I have a big dinner, and then, most evenings, I go out into the fields again and make love. Then I come inside and go to sleep.’

“‘And that’s heaven!?’ Yonkel gasps.

“‘Heaven?’ says Moishe. ‘Who said anything about heaven?

I’m a rabbit in Minnesota!'”

What a relaxing first date joke! And how was the joke received? Did she scream or run away? Nope.

It worked. Dr. Isabel Benveniste demurely covered her mouth with her napkin and laughed; her eyes twinkled behind her thick glasses.

Demurely.

This love interest, if you can’t tell, is a stock Boomer favorite: the bombshell-hottie-disguised-as-a-nerdy-librarian. In this case she is an astrophysicist who stole the rabbi protagonist’s heart while delivering a lecture on the origins of the universe.

She looked taller, more severe, off the podium. What little makeup she wore was perfect; her black curly hair fell flawlessly about her face.

The rabbi, it turns out, was inspired to become a Kabbalist after a mystical experience in an observatory.

Kalman Stern just stood there gazing through that opening in the dome and into the starry firmament. He repeated his teacher’s words: a point of light . . . containing everything yet to come.

And for just one moment, the heavenly lights reciprocated his affections: They condensed themselves like a torrent gushed through the narrowing walls of a sluice. They slid through the slit in the nine-inch Alvan Clark refractor dome’s open mouth.

They squeezed themselves into a single spark of moistened light and planted a silent kiss on the lips of Kalman Stern. He swallowed hard and blinked, trying to clear his vision. He never told anyone about it. Even if he had wanted to, he didn’t know how.

He wasn’t aware of it then, of course, but that was also when he became a Kabbalist.

I swear, if I can force myself through this writing and drag myself all the way to the end of this book, it will be a miracle. It will be nothing less than a new and irrefutable proof of the existence of God.

The problem is, there’s some good information — even profound insights — in this book. It’s a hellish indignity, but, in my life, that’s where wisdom hides out — under steaming heaps of cringe.

Gerundity

We can think of metaphysics as our understanding of what is really real, behind the world of phenomena.

We can also think of metaphysics as something we do. Metaphysics is an action we perform when we need to integrate a subjective experience into absolute reality as we conceive it. (This is often called “objective” reality; see the note below on why I do not call it that.)

For some particular object of some particular experience to be part of reality, it must find its place in an ontology rooted in some particular metaphysic. I’ve called this “touching base”. Say, for instance, a person has an emotion or intuition and wants to account for what it is. Is it an epiphenomenon of neurobiology? Is it a message from the spirit world? Is it a manifestation of an archetype? Is it the detection of a moral principle? What do we do to give this wisp of subjectivity the dignity of realness, to ourselves and to those who know what we know? What substrate or matrix do we link it up to?

We can pragmatically establish the meaning of a metaphysic by its practical consequences. What kind of ontological grounding operation do you do in order to situate a subjective experience within your best conception of absolute reality? That is the pragmatic meaning of your metaphysic.


In my library life, I’m thinking about process philosophy.

In my office life, I’m thinking about service-dominant logic (SDL).

I can’t find where I wrote this, but I swear this is an older thought: service-dominant logic is an alternative business metaphysic.

Both of these philosophies/frameworks volatilize things into interactive dynamics, and blur the boundaries between noun and verb.

They put relations at the heart of reality.

Every noun is a gerund in disguise.

Light, photon and wave.

Being. The doing of am.

YHWH: was-am-will-be


Note: Some people have a metaphysic that is identical to their ontology. Others have a metaphysic that transcends their ontology. For the former, absolute reality is (or often is) objective reality. For the latter, objective reality and absolute reality are different.

Habermas’s simple move

I love Habermas’s simple move: to separately and comparatively analyze the propositional and performative dimensions of communication, in order to illuminate the universal norms implicit in all communicative acts.

When what is done in a speech act (an implied performative truth) contradicts what is said in its content (an explicit propositional truth), we encounter what’s known as a performative contradiction.

A famous example: “This sentence is a lie.” The act of asserting implies truthfulness, but the content denies it—undermining itself through its own performance. A more familiar example: “I don’t care what you think of me.” If that’s true, why say it? The act of saying it appeals to the very judgment it pretends to reject.

Performative contradictions throw tacit performative truths into sharp relief—truths that otherwise slip by unnoticed. They function like ethnomethodological breaching experiments: by violating invisible norms, they make them visible. Communicative acts, it turns out, are ethnomethods—and if Habermas is right, they are universal ones. “Anthropomethods”, maybe?

Habermas’s mature project was to uncover and clarify the norms presupposed by all communicative practice—not what we say about norms, but what our saying always already performs. In doing so, he sought universal norms of communicative rationality—structures that transcend the relativity of our claims by grounding them in the conditions that make understanding itself possible.


Vulgar appropriation of philosophical language drives me nuts. People love the mouthfeel of philosophical terms, but they cannot tolerate the practical consequences of actual understanding. So they take words forged expressly to say something new and elusive and different, and level them down to say something old and obvious and same. (And don’t even get me started on the appropriation of design language, which is, essentially, the leveling down of practical phenomenological language to please the ontic palate.)

“Performative” is a particularly egregious example — one that reverses its intended meaning. In vulgar usage, it’s taken to mean theatrical, inauthentic: the speaker is just being an actor before an audience.

But in Habermas’s framework, and in the philosophical tradition in which Habermas works, performativity is not about deception, it is about action. What is performed in communication is not less real than what is said — it is more real.

Speech actions speak louder than words.

Grampy musings

It is a supreme privilege and joy to help initiate a baby into human society.

It is intrinsically good on every level — spiritually and emotionally, of course, and even somehow physically — but it is also intellectually fascinating. “Early childhood development” stops being a remote body of knowledge, and becomes experience-near insights, rooted in prolonged firsthand experience. A passage like this makes immediate sense, because the experiences to which it refers are so fresh:

Pointing is not a solitary act by which one actor or thinker confronts the world, identifying objects by means of this act. Rather, the act of pointing implies not only that there is something else to be pointed to, but also that there is someone else to perceive the pointing. Pointing is a fundamental social process. Pointing only makes sense within a social relationship: if a subject is pointing at something to another subject.

Although Kamlah and Lorenzen mention this fundamental sociality of pointing, the impact of this insight is accounted for in sociology, in the sociology of knowledge and in science studies rather than in philosophy. The communicative act of pointing makes it clear, in fact, to what extent knowledge and thinking are social: pointing is founded on a relationship between at least two subjects, who refer to a third element in a way that makes sense to them. If we consider pointing to be a basic act, we must also consider its basic sociality. It is the most general thesis of this book that communicative actions, such as pointing, are the fundamental social process by which society and its reality is constructed.

But now I am thinking about the full range of nonverbal communication that occurs between a baby and adults. Deictic communication (pointing, referring), including indicating actions to imitate, is part of it. But equally important is the expression of physical and emotional states — most importantly to indicate needs.

All the talking we eventually learn to do is rooted in a primordial unity of physicality, of feeling, of perceiving, of relating — a world we inhabit a few painful, precious years before language develops to mediate it, tame it — and unfortunately, all too often, to eclipse and replace it.

The key to living in reality (versus our conceptualizations of reality) is maintaining connection with the primordial chaos, and keeping language in its role of mediator, not letting it dominate or eclipse our participation in this strange, very physical, very intuitive participatory relationship we have with what William James famously called “one great blooming, buzzing confusion”.

The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing confusion; and to the very end of life, our location of all things in one space is due to the fact that the original extents or bignesses of all the sensations which came to our notice at once, coalesced together into one and the same space.

Craft brings us back to materials, so we can hear the blooming, buzzing chaos to which we and all things belong, long before we slice things up into subjects and objects and qualities — light and dark, upper and lower, dry and wet, animal and mineral — each labeled with a name and therefores — all stacked up and ready to be inventoried, quantified, utilized and managed.

Hyperorder metaphysics

I remain enamored with Habermas’s framing of system versus lifeworld.

It seems to me that our popular philosophy seeks to project a semi-concealed systems-metaphysic beneath our lifeworld. We want to uncover the secrets of this system in order to understand finally how this semi-chaotic lifeworld emerges.

The philosophers I gravitate toward do the opposite. They, like me, see the lifeworld as primary, and see systems as what we humans abstract and formalize from this semi-chaos in order to order it locally and temporarily for ourselves. There is no secret system behind the mess, but a hyper-ordered reality that affords many potential but always-partial orderings.

According to this broad school of thought, science is an organized, intricate, precise, collaborative lifeworld activity that generates systems meant to explain the lifeworld as comprehensively as possible — systems that appear to transcend the lifeworld while never actually transcending it, except in the metaphysical imagination of the scientistic faith.


By the way, I view chaos as hyperorder, not disorder. Hyperorder is what happens when diverse possibilities of ordering coincide so densely and incommensurably that we are unable to pick out an ordering to make sense of whatever concerns us.

My metaphysic is a metaphysic of chaotic hyperorder. Reality is inexhaustibly surprising. However much order we find in it, that order is the furthest thing from ultimate truth.


A prettier way of saying what I’m trying to convey would be to reverse the famous woodcut from Camille Flammarion’s “L’Atmosphère: Météorologie Populaire”, so that when he crawls up to the edge of a uni-ordered universe and pokes his head through its outer edge, he beholds myriad overlapping uni-ordered universes in psychedelic communion.

Or maybe the protagonist keeps on crawling, and thrusting his head through successive spheres of reality, once, twice, myriad times — until reality finally thrusts itself through his head, and he finally realizes that all these experiences of transcendence were just varieties of immanence — an ontological kaleidoscope.

Collective madness

“Madness is rare in individuals — but in groups, parties, nations, and ages it is the rule.”

Why? Individuals constantly check their perceptions, ideals, norms, opinions, beliefs and plans both with their fellow individuals and with the concrete data of life. This prevents their ideas from becoming fully self-referential, self-reinforcing, self-fulfilling and alienated from life outside the mind.

But with groups, these very checks against individual madness generate collective madness. Group-think and group-feel permeate the beliefs and attitudes of all a group’s individual members. When individuals try to reconcile their own perceptions, conceptions, intuitions and pangs of conscience with those of their peers, they find that they are alone and out of step. Few people put much work into testing their own beliefs or into integrating those beliefs in any coherent way, so most just assume their trusted sources are trustworthy and that integration with the people around them will produce personal integrity. Instead of challenging the norms around them, they assimilate. They go with what their peers think, feel, say and do, and assume any critique of these things from other groups or individuals is invalid for some known or unknown reason. And when most of what we know about the world comes from content generated by our own group, it is easy to inhabit a largely imagined world instead of a partially imagined one that must answer to controversy and the chaos of reality.

All it takes is a readiness to believe in the exceptional virtuousness of one’s own group and the exceptional viciousness of those who oppose it, plus a dash of ordinary human incuriosity, and collective madness is inevitable within two generations.

This is one of those times when anyone who is not actively working to keep their mind in contact with mind-transcendent reality is almost certainly floating off in one or another bubble of collective solipsism.

Machloket l’shem shemayim

I’m talking with a friend about machloket l’shem shemayim, perhaps the single most crucial value that makes me feel Jewish, and which makes a person feel Jewish to me, regardless of whether that person is secular or observant:

There is a practice of truth-finding among us, based on the infinitude of God, where we seek transcendence together, in our own finite being, through disagreement and reconciliation. That practice is Talmudic, but we practice it in marriage, friendship, work, everywhere we can.

No mind is expansive enough to contain God’s truth, but we can approach God by disagreeing well, in the right faith, in ways that allow us to expand our truths together, toward God.

This is what Habermas strives to work out in his theory of communicative action. This is holy stuff!

From an email

“Hell is other people. But hell is also loneliness. Artificial intelligence gives us the Malkovich Malkovich Malkovich reality we really crave. Endless novelty, but safely unsurprising, the entertainment sweet-spot. Propaganda from our own secret selves — selves so secret that they aren’t even ours. Narcissus’s reflected view was eclipsed by his own head, but now, with postpostmodern digital refraction, we can overcome ego and annihilate bias and see through the backs of our own heads like gods.”


Update:

“My interest is the dissolution and recrystallization of selfhood, along with changes in what is obvious to us. In other words, I’m interested in conversion events.

Human-centered design demands that its practitioners undergo conversion events, usually minor ones, but not always. We designers must transform ourselves from people who cannot understand how others think, feel and behave into people for whom those thoughts, feelings and behaviors make perfect, obvious sense. This can be a terrifying, painful process.

An ideology is a fortified circularity, dedicated above all to the prevention of conversion — to the prevention of exposure to what might trigger a conversion. Ideologies condone, even encourage, suppression and silencing and intimidation and ostracism of whatever triggers the dread of conversion.

By now most people have become ideologues of one kind or another. Ideologues cannot change, so they cannot design. Ask a random designer what they care most about. You’ll always get variations on the same answer. Today’s designers do research with extreme thoroughness and rigor, but this research is always a knowing-about exercise that leaves their ideology intact. The findings are extraordinarily detailed and boring. When is the last time you felt inspired by design research findings? The research makes sense, but it is epiphany-free. It was conducted in a way that precludes epiphanies and conversions.

When the design industry dedicated itself to serving the one true and just ideology, it lost what matters most. This is my hostility to progressivism. It wants to be God, but it’s just another shitty little mass-autism — a solipsistic egregore. It killed design.”

Constrained excellence

I’ve noticed that many younger designers strive for a kind of excellence in design that causes a lot of strain and imbalance. Idealism and scrupulousness lead them to believe that their job as a designer is to make the best possible artifact — the most polished, most thorough, most comprehensive, most rigorous, most compelling, most airtight artifact imaginable — and that the better that artifact is, the better job they’ve done as a designer. They believe that if they can possibly do anything more to improve it, they should.

But there is another way to define excellence that is more professionally sustainable, which judges excellence by how well a design problem can be solved within the constraints of the project. By this standard, a designer who goes above and beyond and exceeds the constraints of the project by working nights and weekends has actually done a worse job as a designer than one who worked within the constraints and made the smartest tradeoffs to solve the problem as completely as possible within those constraints.

One dramatic example of this standard is prototypes. The best prototypes do no less, but also no more, than necessary to serve as a stimulus for learning. A novice will take an over-developed, over-produced prototype to be better than a crude one that is perfectly adequate for the job of testing.

For years, I’ve hung a picture of a very famous prototype made at IDEO on my wall to remind myself of the exactly-enough-and-no-more prototyping ethic.

IDEO 1

As you can see, this image is really crappy. I think someone took a picture of it with an early digital camera. And I suppose we could argue that this crappy digital image is exactly-enough-and-no-more to get the concept of a prototype across. If you want to argue that, touché.

But my OCD inspired me to actually reproduce this prototype in a lovely shadowbox, which now sits exactly-proudly-enough-and-no-more in the lobby of Harmonic’s studio.

Another example of this ethic, applied to design research, is the great Erika Hall’s brilliant and funny guide to smart research design, Just Enough Research. Erika, if you ever happen to see this, I’m still waiting for the sequel: Just Enough Design.

And for philosophy fans, I should also mention that this line of thought can be seen as belonging to the Aristotelian tradition of ethics — the ethics of the mean. According to Aristotle, virtue sits at the balance point between vices of deficiency and vices of excess.

Too much of any good thing, however good it might be, becomes bad.

I hope I have not just committed a vice of excessive wordcount. I’ll stop here.

Communicative action of Talmudic dialogue

As I dig deeper into Habermas’s theory of communicative action, I find that it articulates my strongest moral convictions. Like Habermas, I am unable to see these core norms as relative. Of course, I can pretend to doubt them with my philosophy, but I cannot doubt them with my heart.

In them, I also recognize Talmudic discursive practices and, behind them, the moral ideal that I value above all else in Judaism.

What is religion?

Religion is the intentional cultivation of a relationship between one’s finite self and the infinite, who is understood as the ground of being, the root of morality — infinite, transcendent, partially knowable, but essentially incomprehensible.


Pity my poor friend Darwin. I’ve been Slacking him about religion all morning. But he’s smart, and smart ears are inspiring!

Prayer is not, in Habermas’s terms, an instrumental action. It is not the cause of an effect. It is a communicative action, meant to cultivate social connection.

Social connection between our finite selves and an infinite self of whom we are part, and, within that, with our fellow selves. It is a speech act meant to summon solidarity.

I’m obsessed with the limits of objective thought, how objective thought stands upon other modes of cognition that can do things beyond objectivity, and what happens when we invalidate them and try to live with objectivity alone.

Objectivity is something we do; it is not something that is just there to perceive and think about. There is no objective reality, only objective truth. This used to be a controversial belief, but I think it is now mainstream, albeit in vulgarized form. Most forms of constructivism, however, are still trapped in objectivism (only what can be taken as an intentional object can be thought). But the doing of objectivity is not objectively knowable.

We believe that we can construct new factual edifices and call them true until we are habituated to that new construction of truth. But we cannot sincerely take many constructions for true. Just as some designs are intuitive and effortless to learn and others are unintuitive and must be effortfully learned, recalled and made habitual before any skillful use is possible, some constructions can be intuitively, spontaneously known and, once seen, are re-seen and cannot be unseen. These are transformative understandings, and that is what I look for in what I read and my own goal in what I try to write.

Religion is largely a matter of how we think and relate as subjects. The objective content of our thinking, and our thoughts about our relationships and those we relate to, is secondary to the subjective acts of relating.

But those who reduce everything to objects in order to comprehend it reduce the relationship to an incomprehensible God into an objectively believed-in “God”.

A similar operation happens in psychology, or at least vulgarized psychology, where unwanted thoughts are taken to be the surfacing of objective beliefs that were already there under the surface, rather than artifacts of subjective motions that constantly reproduce what those motions produce. On this view, most racism is attributable to racist habits of thought, and attempts to claim one thinks otherwise are subjectively dishonest, self-alienating and eventually comprehensively alienating.

Objectivity is something that is done and produced by something which in itself is not objectively knowable. We can objectively know what it does, we can objectively know some of how it does it, and we can objectively know what it seems not to do, but we cannot objectively know the knower. In other words, we can know about subjectivity, but subjects are known in a way different from objects. Subjects are known through participation in subjectivity, much of which is (confusingly!) objective experiencing and knowing of the world. I’ve said before that all subjects have their own objectivity. (Actually, what I really said before was that every subject is an objectivity, but subjects are more than only that.)

Against philosophy?

This post started out one thing and became another.

I started off thinking about subjective honesty, and how much I value it.

Then something took a wrong turn and I ended up more or less longwindedly paraphrasing Isaac Brock:

Everyone’s afraid of their own lives
If you could be anything you want
I bet you’d be disappointed, am I right?
No one really knows the ones they love
If you knew everything they thought
I bet that you would wish that they’d just shut up

I’ve left it raw.


It is much harder to prove subjective dishonesty than objective dishonesty.

And because it is so much harder to prove, it is much easier to justify refusing to prove it.

As with so many matters, the rules of private conduct differ from those of public conduct. In private life, a mere suspicion that a person is subjectively dishonest is sufficient to cut off discourse.

But in public life, such matters must be rigorously established.

(This is one motivation behind my current revived interest in Habermas.)


Years ago I had a friend who I believed fell into a circular logic and lost contact with concrete reality. He lived a strange life that allowed him to avoid all real participation in any organization. He had no experience of organizational life, of playing an organizational role with defined responsibilities, spheres of authority and obligation, interacting with others with their own defined roles. He had no experience at all negotiating within organizational constraints to find alignment and to make progress toward shared goals.

He was, as far as I could tell, entirely unaware of the kinds of reasoning one must do to succeed in such organizational efforts. So his notions about organizations and how they function were based on fiction and ignorant speculation. This would have been perfectly harmless if he simply lived his life apart from organizations, ignoring them and focusing on what he knew firsthand, which as far as I could tell was made up mostly of carefully compartmentalized individual relationships with no burden of mutual responsibility.

Alas, his worldview was hyperfocused on organizations, nefarious ones that were doing all kinds of nefarious things, in pursuit of even more nefarious goals.

And even that would have been fine, had he been capable of conversing about other topics. But he was not. I was unable to find any topic of conversation that he would not, within five minutes, redirect into a conversation about what nefarious organizations were doing.

Eventually, I told him that I believed he was mentally ill. And not only mentally ill. He was mentally ill in a very, well, nefarious way.

He demanded proof. He said this was a serious accusation, and that such accusations demanded justification. And the only way to justify it was to show that his factual assertions were not factual, but delusional. Because if his facts were grounded in reality, it was I, not he, who was deluded. And this was precisely what was at issue. And there was only one way to find this out. It turns out that I was morally obligated to discuss his conspiracy theories even more thoroughly — exhaustively, in fact — examining and disproving the innumerable facts that constituted his theories, and addressing the innumerable finer points, qualifications, epi-explanations and counterarguments.

I could either do that, or I could retract my statement that I believed there was something deeply and darkly wrong with him. Except I didn’t want to discuss those theories at all, let alone exhaustively, and I still believed something had gone horribly wrong with his faith and his thinking.

I can’t, in good faith, retract that statement. What I should have done instead is, in good taste, not shared that belief in the first place. I should have done what most normal, polite, conflict-avoidant people do when they recognize that the person they were pleasantly chatting with is a conspiracy theorist.

But philosophical argument is a deliberate suspension of such discursive etiquette.

Instead of suppressing our beliefs about other people’s beliefs and foregrounding our common ground, we plough up our deepest disagreements, which typically concern precisely what holds our souls in shape — the integrity of our personal faiths.

Sometimes I suspect philosophy is a terrible fucking idea. Sometimes, today for instance, I believe philosophy is essentially rude.

If we want subjective honesty, maybe we should just leave others out of it and make it an inward practice. Outwardly, we should just settle for a polite objective honesty.


So how in hell can we ever have deliberative democracy? I am terrified that Hobbes might be right, and that deliberative democracy is a leviathan-concealing shell game. Can this game be played without an absolute referee who isn’t each of us, each fighting to be referee?


In this game contestants compete to become the game’s referee. We don’t try to become referee in order to win the game. We win the game in order to become referee.

Three hypotheses

1.

I suspect that leftists do not believe in evil. Or rather, whatever seems evil is an epiphenomenon of injustice. Evil is what ensues when a person or group is treated unjustly for too long.

2.

I suspect that narcissism is one possible consequence of a misunderstanding of subjectivity that mistakes the intentional object “me” for the intending subject “I”. I believe this helps explain why some people on the autism spectrum display narcissistic tendencies when they discover that they have a self that can be examined, analyzed, modified, redefined and so on. According to some, autism is subjectivity-blindness, and so the self that is discovered is not really an egoic center (an I-point from which the world is taken as real), but an egoic focus (a me-thing that is an object of all-consuming fascination).

Which reminds me of a third point…

3.

I’ve noticed a lot of folks in the design profession who talk about things like humanity-centered design. In this usage, I see a confusion of the very meaning of “centered”. Any centeredness is a taking of a perspective — a seeing from some standpoint that can actually, literally, be seen from. This is an entirely different kind of reality from something that can be looked at or thought about in objective terms. Humanity has no single perspective, and so this usage reveals a blindness, which I suspect is an autistic blindness.

The fascinating thing about autism is that it produces at least one kind of self-centeredness: an incapacity to temporarily adopt another egoic center. That is, it cannot empathize. Not that it doesn’t try, but its attempts are attempts to generate emotions stimulated by knowing about me-objects. Most vulgar empathy — including that of many designers — is of this nature. The other “self-centeredness”, the more infamous one, where every conversation comes back to me and what I feel and what I think, and what I’ve done and what others think of me, me, me — this one should not be called self-centered, but self-focused. This is narcissism.

I need to do some research to see what work has been done on this I-me confusion and its practical consequences.

Ontic filter

“Pictures or it didn’t happen.”

In business: “Numbers or it didn’t happen.” Only what is quantifiable is real.

For wordworlders: “Explicit language or it didn’t happen.” Only what can be said clearly and argued is real.

For scientism: “Repeatable demonstration or it didn’t happen.” Only what can be technologically reproduced is real.

But even deeper, and common to all: objectivity or it isn’t real. This is the deeper objectivism. Radical objectivism confuses “objective reality” with absolute reality, and treats the two as synonymous.

An opposing view says that any finite, definable entity is only an actualized possibility of reality which is simultaneously both object and subject, and neither. Neither: apeiron.

Articulation of preconceptual awareness

If I did not already own a lovely hardback copy of Abraham Joshua Heschel’s God In Search of Man, I’d be desperate to find a copy for my sacred library:

It is the assertion that God is real, independent of our preconceptual awareness, that presents the major difficulty. Subjective awareness is not always an index of truth. What is subjectively true is not necessarily trans-subjectively real. All we have is the awareness of allusions to His concern, intimations of His presence. To speak of His reality is to transcend awareness, to surpass the limits of thinking. It is like springing clear of the ground. Are we intellectually justified in inferring from our awareness a reality that lies beyond it? Are we entitled to rise from the realm of this world to a realm that is beyond this world?

We are often guilty of misunderstanding the nature of an assertion such as “God is.” Such an assertion would constitute a leap if the assertion constituted an addition to our ineffable awareness of God. The truth, however, is that to say “God is” means less than what our immediate awareness contains. The statement “God is” is an understatement.

Thus, the certainty of the realness of God does not come about as a corollary of logical premises, as a leap from the realm of logic to the realm of ontology, from an assumption to a fact. It is, on the contrary, a transition from an immediate apprehension to a thought, from a preconceptual awareness to a definite assurance, from being overwhelmed by the presence of God to an awareness of His existence.

What we attempt to do in the act of reflection is to raise that preconceptual awareness to the level of understanding.

Confessions of a chicken hawk

This one is difficult.

I was driving around the Emory campus yesterday and saw a sign for Oxford Road. It made me want to hear Bob Dylan’s song “Oxford Town”. This song was especially relevant to me right now because I am in the middle of a book by Abraham Joshua Heschel, who was a Jewish leader in the civil rights movement. “What do you think of that, my friend?” I think what you do, Bob. All decent people must think that. We fucking know it.

I decided to listen to “The Freewheelin’ Bob Dylan” album from the beginning.

The third song on that album is “Masters of War”. I tried to place myself in 1963, when this song and this attitude was new. It was difficult to do. The countercultural ethos has followed the well-worn path of religious degradation, from the shock of world-transformative revelation, to inspired movement, to new vital establishment, to commonsense conventional wisdom, to the default doctrine for all educated Americans, to ready-made attitude equipped with bromides and logical formulas.

And in this last, most degraded state, any war of any kind is automatically viewed as illegitimate, unnecessary and the manufactured product of masters of war trying to get rich on death.

The response to any war is a “surely there is another way” recited as automatically as a libertarian’s “deregulate it”, or a progressivist’s “institutional racism” or “cognitive bias” — all-purpose diagnoses and remedies.

They aren’t even responses. They are strings of words erected as a barrier to engaging the problem. I realize I am paraphrasing Hannah Arendt:

“Clichés, stock phrases, adherence to conventional, standardized codes of expression and conduct have the socially recognized function of protecting us against reality, that is, against the claim on our thinking attention that all events and facts make by virtue of their existence.”

The particular reality from which counterculture fundamentalists want protection is moral obligation.

We hate the idea — I, personally, hate the idea, and have always hated it — that there are times when people are obligated to kill and risk death to protect our own people from those who want us to suffer and die.

And, like it or not, people really do exist who actively want the suffering and annihilation of other people. This desire for suffering and annihilation of others is what evil is.

Suffering and annihilation are what war is about. But for evil, suffering and annihilation are the whole purpose, and war is its own end. Part of the joy of evil is forcing others to play their war games, to taste violence, to face the seduction of violence, in the effort to stop its spread. And if they can drag their enemies into evil with them, or create such confusion that people lose the ability to see the difference, so much the better.

Of course, masters of war want to paint every conflict as a simple Good versus Evil struggle. They are despicable moral manipulators. But to abuse this truth by using it to claim the opposite — that there is never Good versus Evil conflict — is hardly better. It is the evil of equating defense against evil with evil. It is the evil of denying evil, and relativizing everything so thoroughly that we willfully ignore evil and allow it to flourish.


Most left-leaners want that to not be true, or to treat this problem as one they can evade. They try to complicate the situation, blur it, muddy it, distance themselves from it. “I can’t understand something this complex.” “I cannot do anything about this, so it is not my problem.” “This is the outcome of a long and tragic process, so we cannot assign blame.” Or “Life is simply tragic. It will never not be tragic. So let it be tragic.” As if simply calling life tragic allows us to transcend the tragedy and look at it from above as mystical spectators and not within as participants. This latter is Christian nihilism, and this mystical nihilism can linger on long after Christian doctrine evaporates from the soul. Faith outlives its beliefs.

They all boil down to “I don’t want to care.” We might say “I don’t give a fuck” with punk bluster, as if we are proud of it, as if we are shameless. Hopefully we are lying, because dishonesty is less damning than genuine shameless selfishness.

How do I know any of this? Because I am guilty of it myself. I was even more guilty in the past, when I was young and draft-eligible. I have never been brave enough for combat. I have always been terrified of war. That is shameful.

But I am even more ashamed to pretend shirking one’s war duty is not shameful. Most shameful of all is withholding gratitude and admiration from soldiers who do answer the call and risk their lives to defend their families, their people and all they hold sacred.

Of course, if nothing is sacred, there is nothing to admire or despise. There is no cause for pride or shame. Intellectual honesty knows better. We fucking know better, most of all when we refuse to admit it.