Friday, July 21, 2017

New Journal! The Journal of Science Fiction and Philosophy

This looks very cool:

Call for Papers

General Theme

The Journal of Science Fiction and Philosophy, a peer-reviewed, open access publication, is dedicated to the analysis of philosophical themes present in science fiction stories in all formats, with a view to their use in the discussion, teaching, and narrative modeling of philosophical ideas. It aims at highlighting the role of science fiction as a medium for philosophical reflection.

The Journal is currently accepting papers and paper proposals. Because this is the Journal’s first issue, papers specifically reflecting on the relationship between philosophy and science fiction are especially encouraged, but all areas of philosophy are welcome. Any format of SF story (short story, novel, movie, TV series, interactive) may be addressed.

We welcome papers written with teaching in mind! Have you used an SF story to teach a particular item in your curriculum (e.g., using the movie Gattaca to introduce the ethics of genetic technologies, or The Island of Dr. Moreau to discuss personhood)? Turn that class into a paper!

Yearly Theme

Every year the Journal selects a Yearly Theme. Papers addressing the Yearly Theme are collected in a special section of the Journal. The Yearly Theme for 2017 is All Persons Great and Small: The Notion of Personhood in Science Fiction Stories.

SF stories are in a unique position to help us examine the concept of personhood, by making the human world engage with a bewildering variety of beings with person-like qualities – aliens of bizarre shapes and customs, artificial constructs conflicted about their artificiality, planetary-wide intelligences, collective minds, and the list goes on. Every one of these instances provides the opportunity to reflect on specific aspects of the notion of personhood, such as, for example: What is a person? What are its defining qualities? What is the connection between personhood and morality, identity, rationality, basic (“human?”) rights? What patterns do SF authors identify when describing the oppression of one group of persons by another, and how do they reflect past and present human history?

The Journal accepts papers year-round. The deadline for the first round of reviews, both for its general and yearly theme, is October 1st, 2017.

Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information.

Wednesday, July 19, 2017

Why I Evince No Worry about Super-Spartans

I'm a dispositionalist about belief. To believe that there is beer in the fridge is nothing more or less than to have a particular suite of dispositions. It is to be disposed, ceteris paribus (all else being equal, or normal, or absent countervailing forces), to behave in certain ways, to have certain conscious experiences, and to transition to related mental states. It is to be disposed, ceteris paribus, to go to the fridge if one wants a beer, and to say yes if someone asks if there is beer in the fridge; to feel surprise should one open the fridge and find no beer, and to visually imagine your beer-filled fridge when you try to remember the contents of your kitchen; to be ready to infer that your Temperance League grandmother would have been disappointed in you, and to see nothing wrong with plans that will only succeed if there is beer in the fridge. If you have enough dispositions of this sort, you believe that there is beer in the fridge. There's nothing more to believing than that. (Probably some sort of brain is required, but that's implementational detail.)

To some people, this sounds uncomfortably close to logical behaviorism, a view according to which all mental states can be analyzed in terms of behavioral dispositions. On such a view, to be in pain, for example, just is, logically or metaphysically, to be disposed to wince, groan, avoid the stimulus, and say things like "I'm in pain". There's nothing more to pain than that.

It is unclear whether any well-known philosopher was a logical behaviorist in this sense. (Gilbert Ryle, the most cited example, was clearly not a logical behaviorist. In fact, the concluding section of his seminal book The Concept of Mind is a critique of behaviorism.)

Part of the semi-mythical history of philosophy of mind is that in the bad old days of the 1940s and 1950s, some philosophers were logical behaviorists of this sort; and that logical behaviorism was abandoned due to several fatal objections that were advanced in the 1950s and 1960s, including one objection by Hilary Putnam that turned on the idea of super-spartans. Some people have suggested that 21st-century dispositionalism about belief is subject to the same concerns.

Putnam asks us to "engage in a little science fiction":

Imagine a community of 'super-spartans' or 'super-stoics' -- a community in which the adults have the ability to successfully suppress all involuntary pain behavior. They may, on occasion, admit that they feel pain, but always in pleasant well-modulated voices -- even if they are undergoing the agonies of the damned. They do not wince, scream, flinch, sob, grit their teeth, clench their fists, exhibit beads of sweat, or otherwise act like people in pain or people suppressing their unconditioned responses associated with pain. However, they do feel pain, and they dislike it (just as we do) ("Brains and Behavior", 1965, p. 9).

Here is some archival footage I have discovered:

A couple of pages later, Putnam expands the thought experiment:

[L]et us undertake the task of trying to imagine a world in which there are not even pain reports. I will call this world the 'X-world'. In the X-world we have to deal with 'super-super-spartans'. These have been super-spartans for so long, that they have begun to suppress even talk of pain. Of course, each individual X-worlder may have his private way of thinking about pain.... He may think to himself: 'This pain is intolerable. If it goes on one minute longer I shall scream. Oh No! I mustn't do that! That would disgrace my whole family...' But X-worlders do not even admit to having pains (p. 11).

Putnam concludes:

"If this last fantasy is not, in some disguised way, self-contradictory, then logical behaviourism is simply a mistake.... From the statement 'X has a pain' by itself no behavioral statement follows -- not even a behavioural statement with a 'normally' or 'probably' in it. (p. 11)

Putnam's basic idea is pretty simple: If you're a good enough actor, you can behave as though you lack mental state X even if you have mental state X, and therefore any analysis of mental state X that posits a necessary connection between mentality and behavior is doomed.

Now I don't think this objection should have particularly worried any logical behaviorists (if any existed), much less actual philosophers sometimes falsely called behaviorists such as Ryle, and still less 21st-century dispositionalists like me. Its influence, I suspect, has more to do with how it conveniently disposes of what was, even in 1965, only a straw man.

We can see the flaw in the argument by considering parallel cases of other types of properties for which a dispositional analysis is highly plausible, and noting how it seems to apply equally well to them. Consider solubility in water. To say of an object that it is soluble in water is to say that it is apt to dissolve when immersed in water. Being water-soluble is a dispositional property, if anything is.

Imagine now a planet on which there is only one small patch of water. The inhabitants of that planet -- call it PureWater -- guard that patch jealously with the aim of keeping it pure. Toward this end, they have invented technologies so that normally soluble objects like sugar cubes will not dissolve when immersed in the water. Some of these technologies are moderately low-tech membranes which automatically enclose objects as soon as they are immersed; others are higher-tech nano-processes, implemented by beams of radiation, that ensure that stray molecules departing from a soluble object are immediately knocked back to their original location. If Putnam's super-spartans objection is correct, then by parity of reasoning the hypothetical possibility of the planet PureWater would show that no dispositional analysis of solubility could be correct, even here on Earth. But that's the wrong conclusion.

The problem with Putnam's argument is that, as any good dispositionalist will admit, dispositions only manifest ceteris paribus -- that is, under normal conditions, absent countervailing forces. (This has been especially clear since Nancy Cartwright's influential 1983 book on the centrality of ceteris paribus conditions to scientific generalizations, but Ryle knew it too.) Putnam quickly mentions "a behavioural statement with a 'normally' or 'probably' in it", but he does not give the matter sufficient attention. Super-super-spartans' intense desire not to reveal pain is a countervailing force, a defeater of the normality condition, like the technological efforts of the scientists of PureWater. To use hypothetical super-super-spartans against a dispositional approach to pain is like saying that water-solubility isn't a dispositional property because there's a possible planet where soluble objects reliably fail to dissolve when immersed in water.

Most generalizations admit of exceptions. Nerds wear glasses. Dogs have four legs. Extraverts like parties. Dropped objects accelerate at 9.8 m/sec^2. Predators eat prey. Dispositional generalizations are no different. This does not hinder their use in defining mental states, even if we imagine exceptional cases where the property is present but something dependably interferes with its manifesting in the standard way.

Of course, if some of the relevant dispositions are dispositions to have certain types of related conscious experiences (e.g., inner speech) and to transition to related mental states (e.g., in jumping to related conclusions), as both Ryle and I think, then the super-spartan objection is even less apt, because super-super-spartans do, by hypothesis, have those dispositions. They manifest such internal dispositions when appropriate, and if they fail to manifest their pain in outward behavior that's because manifestation is prevented by an opposing force.

(PS: Just to be clear, I don't myself accept a dispositional account of pain, only of belief and other attitudes.)

Thursday, July 13, 2017

THE TURING MACHINES OF BABEL

[first published in Apex Magazine, July 2017]

In most respects, the universe (which some call the Library) is everywhere the same, and we at the summit are like the rest of you below.  Like you, we dwell in a string of hexagonal library chambers connected by hallways that run infinitely east and west.  Like you, we revere the indecipherable books that fill each chamber wall, ceiling to floor.  Like you, we wander the connecting hallways, gathering fruits and lettuces from the north wall, then cast our rinds and waste down the consuming vine holes.  Also like you, we sometimes turn our backs to the vines and gaze south through the indestructible glass toward sun and void, considering the nature of the world.  Our finite lives, guided by our finite imaginations, repeat infinitely east, west, and down.
But unlike you, we at the summit can watch the rabbits.
The rabbits!  Without knowing the rabbits, how could one hope to understand the world?

#

The rabbit had entered my family's chamber casually, on a crooked, sniffing path.  We stood back, stopping mid-sentence to stare, as it hopped to a bookcase.  My brother ran to inform the nearest chambers, then swiftly returned.  Word spread, and soon most of the several hundred people who lived within a hundred chambers of us had come to witness the visitation -- Master Gardener Ferdinand in his long green gown, Divine Chanter Guinart with his quirky smile.  Why hadn't our neighbors above warned us that a rabbit was coming?  Had they wished to watch the rabbit, and lift it, and stroke its fur, in selfish solitude?
The rabbit grabbed the lowest bookshelf with its pink fingers and pulled itself up one shelf at a time to the fifth or sixth level; then it scooted sideways, sniffing along the chosen shelf, fingers gripping the shelf-rim, hind feet down upon the shelf below.  Finding the book it sought, it hooked one finger under the book's spine and let it fall.
The rabbit jumped lightly down, then nudged the book across the floor with its nose until it reached the reading chair in the middle of the room.  It was of course taboo for anyone to touch the reading chair or the small round reading table, except under the guidance of a chanter.  Chanter Guinart pressed his palms together and began a quiet song -- the same incomprehensible chant he had taught us all as children, a phonetic interpretation of the symbols in our sacred books.
The rabbit lifted the book with its fingers to the seat of the chair, then paused to release some waste gas that smelled of fruit and lettuce.  It hopped up onto the chair, lifted the book from chair to reading table, and hopped onto the table.  Its off-white fur brightened as it crossed into the eternal sunbeam that angled through the small southern window.  Beneath the chant, I heard the barefoot sound of people clustering behind me, their breath and quick whispers.
The rabbit centered the book in the sunbeam.  It opened the book and ran its nose sequentially along the pages.  When it reached maybe the 80th page, it erased one letter with the pink side of its tongue, and then with the black side of its tongue it wrote a new letter in its place.
Its task evidently completed, the rabbit nosed the book off the table, letting it fall roughly to the floor.  The rabbit leaped down to chair then floor, then smoothed and licked and patiently cleaned the book with tongue and fingers and fur.  Neighbors continued to gather, clogging room and doorways and both halls.  When the book-grooming was complete, the rabbit raised the book one shelf at a time with nose and fingers, returning it to its proper spot.  It leaped down again and hopped toward the east door.  People stepped aside to give it a clear path.  The rabbit exited our chamber and began to eat lettuces in the hall.
With firm voice, my father broke the general hush: "Children, you may gently pet the rabbit.  One child at a time."  He looked at me, but I no longer considered myself a child.  I waited for the neighbor children to have their fill of touching.  We lived about a hundred thousand levels from the summit, but even so impossibly near the top of our infinite world, one might reach old age only ever having seen a couple of dozen visitations.  By the time the last child left, the rabbit had long since finished eating.
The rabbit hopped toward where I sat, about twenty paces down the hall, near the spiral glass stairs.  I intercepted it, lifting it up and gazing into its eyes.
It gazed silently back, revealing no secrets.

[continued here]

[author interview]

-----------------------------------------

Related:

What Is the Likelihood That Your Mind Is Constituted by a Rabbit Reading and Writing on Long Strips of Turing Tape? (Jul 5, 2017)

Nietzsche's Eternal Recurrence, Scrambled Sideways (Oct 31, 2012)

Wednesday, July 05, 2017

What's the Likelihood That Your Mind Is Constituted by a Rabbit Reading and Writing on Long Strips of Turing Tape?

Your first guess is probably "not very likely."

But consider this argument:

(1) A computationalist-functionalist philosophy of mind is correct. That is, mentality is just a matter of transitioning between computationally definable states in response to environmental inputs, in a way that hypothetically could be implemented by a computer.

(2) As Alan Turing famously showed, it's possible to implement any finitely computable function on a strip of tape containing alphanumeric characters, given a read-write head that implements simple rules for writing and erasing characters and moving itself back and forth along the tape. (For a concrete sense of what such a device looks like, see the sketch just after this argument.)

(3) Given 1 and 2, one way to implement a mind is by means of a rabbit reading and writing characters on a long strip of tape that is properly responsive, in an organized way, to its environment. (The rabbit will need to adhere to simple rules and may need to live a very long time, so it won't be exactly a normal rabbit. Environmental influence could be implemented by alteration of the characters on segments of the tape.)

(4) The universe is infinite.

(5) Given 3 and 4, the cardinality of "normally" implemented minds is the same as the cardinality of minds implemented by rabbits reading and writing on Turing tape. (Given that such Turing-rabbit minds are finitely probable, we can create a one-to-one mapping or bijection between Turing-rabbit minds and normally implemented minds, for example by starting at an arbitrary point in space and then pairing the closest normal mind with the closest Turing-rabbit mind, then pairing the second-closest of each, then pairing the third-closest....)

(6) Given 5, you cannot justifiably assume that most minds in the universe are "normal" minds rather than Turing-rabbit implemented minds. (This might seem unintuitive, but comparing infinities often yields such unintuitive results. ETA: One way out of this would be to look at the ratios in limits of sequences. But then we need to figure out a non-question-begging way to construct those sequences. See the helpful comments by Eric Steinhart on my public FB feed.)

(7) Given 6, you cannot justifiably assume that you yourself are very likely to be a normal mind rather than a Turing-rabbit mind. (If 1-3 are true, Turing-rabbit minds can be perfectly similar to normally implemented minds.)
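To make premise 2 concrete, here's a minimal Turing-machine runner in Python. It's only an illustrative sketch (the runner, the sparse-tape representation, and the sample program are just one way of doing it): a transition table, a tape of characters, and a head that writes, erases, and moves left or right according to simple rules. The example program increments a binary number, but the same runner will execute any finite transition table.

```python
# Minimal Turing machine: a tape of characters plus a read-write head
# following a finite table of simple rules. Illustrative sketch only.

def run(program, tape, state="start", blank="_", max_steps=10_000):
    """program maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right). Stops when state == "halt"
    (or after max_steps, returning whatever is then on the tape)."""
    cells = dict(enumerate(tape))  # sparse tape, extendable in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Sample program: increment a binary number (head starts at its leftmost digit).
# Scan right to the end of the number, then carry 1s leftward.
INCREMENT = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "halt"),
    ("carry", "_"): ("1", -1, "halt"),
}

print(run(INCREMENT, "1011"))  # 11 + 1 = 12, i.e. "1100"
```

A Turing-rabbit, in the sense of premise 3, would simply be playing the role of the read-write head: looking up the rule for its current state and symbol, erasing one character, writing another, and hopping left or right along the tape.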

I explore this possibility in "THE TURING MACHINES OF BABEL", a story in this month's issue of Apex Magazine. I'll link to the story once it's available online, but also consider supporting Apex by purchasing the issue now.

The conclusion is of course "crazy" in my technical sense of the term: It's highly contrary to common sense and we aren't epistemically compelled to believe it.

Among the ways out: You could reject the computational-functional theory of mind, or you could reject the infinitude of the universe (though these are both fairly common positions in philosophy and cosmology these days). Or you could reject my hypothesized rabbit implementation (maybe slowness is a problem even with perfect computational similarity). Or you could hold a view which allows a low ratio of Turing rabbits to normal minds despite the infinitude of both. Or you could insist that we (?) normally implemented minds have some epistemic access to our normality even if Turing-rabbit minds are perfectly similar and no less abundant. But none of those moves is entirely cost-free, philosophically.

Notice that this argument, though skeptical in a way, does not include any prima facie highly unlikely claims among its premises (such as that aliens envatted your brain last night or that there is a demon bent upon deceiving you). The premises are contentious, and there are various ways to resist my combination of them to draw the conclusion, but I hope that each element and move, considered individually, is broadly plausible on a fairly standard 21st-century academic worldview.

The basic idea is this: If minds can be implemented in strange ways, and if the universe is infinite, then there will be infinitely many strangely implemented minds alongside infinitely many "normally" implemented minds; and given standard rules for comparing infinities, it seems likely that these infinities will be of the same cardinality. In an infinite universe that contains infinitely many strangely implemented minds, it's unclear how you could know you are not among the strange ones.

Monday, June 26, 2017

Icelandic Thoughts

Here's where I'm sitting this minute: next to a creek on a steep flowery hill, overlooking the town of Neskaupstathur, Iceland, and its fjord, with snowy peaks and waterfalls in the distance.

[The creek and flowers are visible on the middle left, the snow as barely visible white flecks on the mountains across the bay, the buildings of the town as white smears by the water. As usual, an amateur photo hardly captures the immersive scene.]

I try to write at least one substantive post a week, even while traveling, but I'm finding it hard here -- partly because of the demands of travel, but also partly because my thoughts aren't very bloggish. My mind does often wander to philosophy, psychology, and speculative fiction while hiking (I'm considering a fairy story), but the thoughts seem softer and larger than my usual blogging style. The thoughts that come to me tend to be vague, drifting, uncertain thoughts about value and a meaningful life. I could imagine not needing to do academic philosophy again, if a different environment, like this one, brought different thoughts and values out of me.

Sitting by this creek in Iceland (and expecting internet connectivity!) -- is that a terrible, wasteful indulgence in a world with so much poverty and need? Or is it a fine thing that I can reasonably let the world give me?

Tuesday, June 20, 2017

The Dauphin's Metaphysics, read by Tatiana Grey at PodCastle

My alternative-history story about love and low-tech body switching through hypnosis has just been released in audio at PodCastle. Terrific reading by Tatiana Grey!

PodCastle 475: The Dauphin's Metaphysics

This has been my best-received story so far, recommended by Locus Online, translated into Chinese and Hungarian for leading SF magazines in those languages, and assigned as required reading in at least two philosophy classes in the US.

The setting is Beijing circa 1700, post-European invasion and collapse, resulting in a mashup of European and Chinese institutions. Dauphin Jisun Fei takes a metaphysics class with the Academy's star woman professor and conceives a plan for radical life extension.

Story originally published in Unlikely Story, fall 2015.

Thursday, June 15, 2017

On Not Distinguishing Too Finely Among One's Motivations

I'm working through Daniel Batson's latest book, What's Wrong with Morality?

Batson distinguishes between four different types of motives for seemingly moral behavior, each with a different type of ultimate goal. Batson's taxonomy is helpful -- but I want to push back against distinguishing as finely as he does among people's motives for doing good.

Suppose I offer a visiting speaker a ride to the airport. That seems like a nice thing to do. According to Batson, I might have one (or more) of the following types of motivation:

(1.) I might be egoistically motivated -- acting in my own perceived self-interest. Maybe the speaker is the editor of a prestigious journal and I think I'll have a better shot publishing and advancing my career if the speaker thinks well of me.

(2.) I might be altruistically motivated -- aiming primarily to benefit the speaker herself. I just want her to have a good visit, a good experience at UC Riverside, and giving her a ride is a way of advancing that goal I have.

(3.) I might be collectivistically motivated -- aiming primarily to benefit a group. I want UC Riverside's Philosophy Department to flourish, and giving the speaker a ride is a way of advancing that thing I care about.

(4.) I might be motivated by principle -- acting according to a moral standard, principle, or ideal. Maybe I think driving the speaker to the airport will maximize global utility, or that it is ethically required given my social role and past promises.

Batson characterizes his view of motivation as "Galilean" -- focused on the underlying forces that drive behavior (p. 25-26). The idea seems to be that when I make that offer to the visiting speaker, that action must have been induced by some particular motivational force inside me that is egoistic, altruistic, collectivist, or principled, or some specific combination of those. On this view, we don't understand why I am offering the ride until we know which of these interior forces is the one that caused me to offer the ride. Principled morality is rare, Batson argues, because it requires being caused to act by the fourth type of motivation, and people are more normally driven by the first three.

I'm nervous about appeals to internal causes of this sort. My best guess is that these sorts of simple, familiar folk (or quasi-folk) categories don't map neatly onto the real causal processes generating our behavior, which are likely to be much more complicated, and also misaligned with categories that come naturally to us. (Compare connectionist structures and deep learning.)

Rather than try to articulate an alternative positive account, which would be too much to add to this post, let me just suggest the following. It's plausible that our motivations are often a tangled mess, and when they are a tangled mess, attempting to distinguish finely among them is usually a mistake.

For example, there are probably hypothetical conditions under which I would decline to drive the speaker because it conflicted with my self-interest, and there are probably other hypothetical conditions under which I would set aside my self-interest and choose to drive the speaker anyway. I doubt these hypothetical conditions line up neatly, so that I decline to drive the speaker if and only if it would require sacrificing X amount or more of self-interest. Some situations might just channel me into driving her, even at substantial personal cost, while others might more easily invite the temptation to wiggle out.

The same is likely true for the other motivations. Hypothetically, if the situation were different so that it was less in the collective interest of the department, or less in the speaker's interest, or less compelled by my favorite moral principles, I might drive or not drive the speaker depending partly on each of these but also partly on other factors of situation and internal psychology, habits, scripts, potential embarrassment -- probably in no tidy pattern.

Furthermore, egoistic, altruistic, collectivist, and principled aims come in many varieties, difficult to disentangle. I might be egoistically invested in the collective flourishing of the department as a way of enhancing my own stature in the profession. I might be drawn to different, conflicting moral principles. I might altruistically desire both that the speaker get to her flight on time and that she enjoy the company of the cleverest conversationalist in the department (me!). I might enjoy showing off the sights of the L.A. basin through the windows of my car, with a feeling of civic pride. Etc.

Among all of these possible motivations -- indefinitely many possible motivations, perhaps, if we decide to slice finely among them -- does it make sense to try to determine which one or few are the real motivations that are genuinely causally responsible for my choosing to drive the speaker?

Now if my actual and hypothetical choices were all neatly aligned with my perceived self-interest, then of course self-interest would be my real motive. Similarly, if my pattern of actual and hypothetical choices were all neatly aligned with one particular moral principle, then we could say I was mainly moved by that principle. But if my patterns of choice are not so neatly explained, if my choices arise from a tangle of factors far more complex than Batson's four, then each of Batson's factors is only a simplified label for a pattern that I don't very closely match, rather than a deep Galilean cause of my choice.

The four factors might, then, not compete with each other as starkly as Batson seems to suppose. Each of them might, to a first approximation, capture my motivation reasonably well, in those fortunate cases where self-interest, other-interest, collective interest, and moral principle all tend to align. I have lots of reasons for driving the speaker! This might be so even if, in hypothetical cases, I diverge from the predicted patterns, probably in different and complex ways. My motivations might be described, with approximately equal accuracy, as egoistic, altruistic, collectivist, and principled, when these four factors tend to align across the relevant range of situations -- not because each type of motivation contributes equal causal juice to my behavior but rather because each attribution captures well enough the pattern of choices I would make in the types of cases we care about.

Wednesday, June 07, 2017

Academic Pyramids, Academic Tubes

Greetings from Cambridge! Traveling around Europe and the UK, I am struck by the extent to which different countries have relatively pyramid-like vs relatively tube-like academic systems. This has moved me to think, also, about the extent to which US academia has recently been becoming more pyramidal.

Please forgive my ugly sketch of a pyramid and a tube:

The German system is quite pyramidal: There is a small group of professors at the top, and many stages between undergraduate and professor, at any one of which you might suddenly find yourself ejected from the system: undergraduate, then masters, then PhD, then one or more postdocs and/or assistantships before moving up or out; and at each stage one needs to actively seek a position and typically move locations if successful.

In contrast, the US system, as it stood about twenty years ago, was more tubular: fewer transition stages requiring application and moving, with much sharper cutdowns between each stage. To a first approximation, undergraduates applied to PhD programs, very few got in, and then if they completed there was one more transition from completing the PhD to gaining a tenure-track job (and typically, though of course not always, tenure after 6-7 years on the tenure track).

Philosophy in the US is becoming more pyramidal, I believe, with more people pursuing terminal Master's degrees before applying to PhD programs, and with the increasing number of adjunct positions and postdoctoral positions for newly-minted PhDs. Instead of approximately three phases (undergrad, grad/PhD, tenure-track/tenured professor), we are moving closer to five-phase system (undergrad, MA, PhD, adjunct/post-doc, tenure-track/tenured).

This more pyramidal system has some important advantages. One advantage is that it provides more opportunities for people from nonelite backgrounds to advance through the system. It has always been difficult for students from nonelite undergraduate universities to gain acceptance to elite PhD programs (and it still is); similarly for students who struggled a bit in their undergraduate careers before finding philosophy. With the increasing willingness of PhD programs to accept students with Master's degrees, a broader range of students can earn a shot at academia: They can compete to get into a Master's program (typically easier to do for people with nonelite backgrounds than being admitted to a comparably-ranked PhD program) and then possibly shine there, gaining admittance to a range of PhD programs that would otherwise have been closed to them. A similar pattern sometimes occurs with postdocs.

The other advantage of the pyramid is exposure to a variety of institutions, advisors, and academic subcultures -- valuable both for the variety of perspectives it provides and for the opportunity to meet more people in the academic community. A Master's program or a postdoctoral fellowship can be a rewarding experience.

But I am also struck by the downside of pyramidal structures. In Europe, I met many excellent philosophers in their 30s or 40s, post-PhD, unsure whether they would make the next jump up the pyramid or not, unable to settle down securely into their careers. This used to be relatively uncommon in the US, though it has become more common. It is hard on marriages and families; and it's hard to face the prospects of a major career change in mid-life after devoting a dozen or more years to academia.

The sciences in the US have tended to be more pyramidal than philosophy, with one or more postdocs often expected before the tenure-track job. This is partly, I suspect, just due to the money available in science. There are lots of post-docs to be had, and it's easier to compete for professor positions with that extra postdoctoral experience. One possibly unintended consequence of the increased flow of money into philosophical research projects, through the Templeton Foundation and government research funding organizations, is to increase the number of postdocs, and thus the pyramidality of the discipline.

Of course, the rise of inexpensive adjunct labor is a big part of this -- bigger, probably, than the rise of terminal Master's programs as a gateway to the PhD and the rise of the philosophy post-doc -- but all of these contribute in different ways to making our discipline more pyramidal than it was a few decades ago.

Thursday, June 01, 2017

The Social-Role Defense of Robot Rights

Daniel Estrada's Made of Robots has launched a Hangout on Air series in philosophy of technology. The first episode is terrific!

Robot rights cheap yo.

Cheap: Estrada's argument for robot rights doesn't require that robots have any conscious experiences, any feelings, any reinforcement learning, or (maybe) any cognitive processing at all. Most other defenses of the moral status of robots assume, implicitly or explicitly, that robots who are proper targets of moral concern will exist only in the future, once they have cognitive features similar to humans or at least similar to non-human vertebrate animals.

In contrast, Estrada argues that robots already deserve rights -- actual robots that currently exist, even simple robots.

His core argument is this:

1. Some robots are already "social participants" deeply incorporated into our social order.

2. Such deeply incorporated social participants deserve social respect and substantial protections -- "rights" -- regardless of whether they are capable of interior mental states like joy and suffering.

Let's start with some comparison cases. Estrada mentions corpses and teddy bears. We normally treat corpses with a certain type of respect, even though we think they themselves aren't capable of states like joy and suffering. And there's something that seems at least a little creepy about abusing a teddy bear, even though it can't feel pain.

You could explain these reactions without thinking that corpses and teddy bears deserve rights. Maybe it's the person who existed in the past, whose corpse is now here, who has rights not to be mishandled after death. Or maybe the corpse's relatives and friends have the rights. Maybe what's creepy about abusing a teddy bear is what it says about the abuser, or maybe abusing a teddy harms the child whose bear it is.

All that is plausible, but another way of thinking emphasizes the social roles that corpses and teddy bears play and the importance to our social fabric (arguably) of our treating them in certain ways and not in other ways. Other comparisons might be: flags, classrooms, websites, parks, and historic buildings. Destroying or abusing such things is not morally neutral. Arguably, mistreating flags, classrooms, websites, parks, or historic buildings is a harm to society -- a harm that does not reduce to the harm of one or a few specific property owners who bear the relevant rights.

Arguably, the destruction of hitchBOT was like that. HitchBOT was a cute ride-hitching robot, who made it across the length of Canada but who was destroyed by pranksters in Philadelphia when its creators sent it to repeat the feat in the U.S. Its destruction not only harmed its creators and owners, but also the social networks of hitchBOT enthusiasts who were following it and cheering it on.

It might seem overblown to say that a flag or a historic building deserves rights, even if it's true that flags and historic buildings in some sense deserve respect. If this is all there is to "robot rights", then we have a very thin notion of rights. Estrada isn't entirely explicit about it, but I think he wants more than that.

Here's the thing that makes the robot case different: Unlike flags, buildings, teddy bears, and the rest, robots can act. I don't mean anything too fancy here by "act". Maybe all I mean or need to mean is that it's reasonable to take the "intentional stance" toward them. It's reasonable to treat them as though they had beliefs, desires, intentions, goals -- and that adds a new richer dimension, maybe different in kind, to their role as nodes in our social network.

Maybe that new dimension is enough to warrant using the term "rights". Or maybe not. I'm inclined to think that whatever rights existing (non-conscious, not cognitively sophisticated) robots deserve remain derivative on us -- like the "rights" of flags and historic buildings. Unlike human beings and apes, such robots have no intrinsic moral status, independent of their role in our social practices. To conclude otherwise would require more argument or a different argument than Estrada gives.

Robot rights cheap! That's good. I like cheap. Discount knock-off rights! If you want luxury rights, though, you'll have to look somewhere else (for now).

Update: I changed "have rights" to "deserve rights" in a few places above.

Thursday, May 25, 2017

Lynching, the Milgram Experiments, and the Question of Whether "Human Nature Is Good"

At The Deviant Philosopher, Wayne Riggs, Amy Olberding, Kelly Epley, and Seth Robertson are collecting suggestions for teaching units, exercises, and primers that incorporate philosophical approaches and philosophers that are not currently well-represented in the formal institutional structures of the discipline. The idea is to help philosophers who want suggestions for diversifying their curriculum. It looks like a useful resource!

I contributed the following to their site, and I hope that others who are interested in diversifying the philosophical curriculum will also contribute something to their project.

Lynching, the Milgram Experiments, and the Question of Whether "Human Nature Is Good"

Primary Texts

  • Allen, James, Hilton Als, John Lewis, and Leon F. Litwack (2000). Without sanctuary: Lynching photography in America. Santa Fe: Twin Palms. Pp. 8-16, 173-176, 178-180, 184-185, 187-190, 194-196, 198, 201 (text only), and plates #20, 25, 31, 37-38, 54, 57, 62-65, 74, and 97.
  • Wells-Barnett, Ida B. (1892/2002). On lynchings. Ed. P.H. Collins. Amherst, NY: Humanity. Pp. 42-46.
  • Mengzi (3rd c. BCE/1970). Trans. B.W. Van Norden. Indianapolis: Hackett. 1A7, 1B5, 1B11, 2A2 (p. 35-41 only), 2A6, 2B9, 3A5, 4B12, 6A1 through 6A15, 6B1, 7A7, 7A15, 7A21, 7B24, 7B31.
  • Rousseau, Jean-Jacques (1755/1995). Discourse on the origin of inequality. Trans. F Philip. Ed. P. Coleman. Oxford: Oxford. Pp. 45-48.
  • Xunzi (3rd c. BCE/2014). Xunzi: The complete text. Trans. E. Hutton. Princeton, NJ: Princeton. Pp. 1-8, 248-257.
  • Hobbes, Thomas (1651/1996). Leviathan. Ed. R. Tuck. Cambridge: Cambridge. Pp. 86-90.
  • Doris, John M. (2002). Lack of character. Cambridge: Cambridge. Pp. 28-61.
  • The Milgram video on Obedience to Authority.
Secondary Texts for Instructor
  • Dray, Philip (2002). At the hands of persons unknown. New York: Modern Library.
  • Ivanhoe, Philip J. (2000). Confucian moral self cultivation, 2nd ed. Indianapolis: Hackett. 
  • Schwitzgebel, Eric (2007). Human nature and moral education in Mencius, Xunzi, Hobbes, and Rousseau. History of Philosophy Quarterly, 24, 147-168.
Suggested Courses
  • Introduction to Ethics
  • Ethics
  • Introduction to Philosophy
  • Evil
  • Philosophy of Psychology
  • Political Philosophy
Overview

This is a two-week unit. Day one is on the history of lynching in the United States, featuring lynching photography and Ida B. Wells. Day two is Mengzi on human nature (with Rousseau as secondary reading). Day three is Xunzi on human nature (with Hobbes as secondary reading). Days four and five are the Milgram video and John Doris on situationism.

The central question concerns the psychology of lynching perpetrators and Milgram participants. On a “human nature is good” view, we all have some natural sympathies or an innate moral compass that would be revolted by our participation in such activities, if we were not somehow swept along by bad influences (Mengzi, Rousseau). On a “human nature is bad” view, our natural inclinations are mostly self-serving and morality is an artificial human construction; so if one’s culture says “this is the thing to do” there is no inner source of resistance unless you have already been properly trained (Xunzi, Hobbes). Situationism (which is not inconsistent with either of these alternatives) suggests that most people can commit great evil or good depending on what seem to be fairly moderate situational pressures (Doris, Milgram).

Students should be alerted in advance about the possibly upsetting photographs, and they must be encouraged to look closely at the faces of the perpetrators rather than being too focused on the bodies of the victims (which may be edited out if desired for classroom presentation). You might even consider giving the students possible alternative readings if they find the lynching material too difficult (such as an uplifting chapter from Colby & Damon 1992).

On Day One, a point of emphasis should be that most of the victims were not even accused of capital crimes, and focus can be both on the history of lynching in general and on the emotional reactions of the perpetrators as revealed by their behavior described in the texts and by their faces in the photos.

On Day Two, the main emphasis should be on Mengzi’s view that human nature is good. King Xuan and the ox (1A7), the child at the well (2A6), and the beggar refusing food insultingly given (6A10) are the most vivid examples. The metaphor of cultivating sprouts is well worth extended attention (as discussed in the Ivanhoe and Schwitzgebel readings for the instructor). If the lynchers had paused to reflect in the right way, would they have found in themselves a natural revulsion against what they were doing, as Mengzi would predict? Rousseau’s view is similar (especially as developed in Emile) but puts more emphasis on the capacity of philosophical thinking to produce rationalizations of bad behavior.

On Day Three, the main emphasis should be on Xunzi’s view that human nature is bad. His metaphor of straightening a board is fruitfully contrasted with Mengzi’s of cultivating sprouts. For example, in straightening a board, the shape (the moral structure) is imposed by force from outside. In cultivating a sprout, the shape grows naturally from within, given a supportive, nutritive, non-damaging environment. Students can be invited to consider cartoon versions of “conservative” moral education (“here are the rules, like it or not, follow them or you’ll be punished!”) versus “liberal” moral education (“don’t you feel bad that you hurt Ana’s feelings?”).

On Day Four, you might just show the Milgram video.

On Day Five, the focus should be on articulating situationism vs dispositionism (or whatever you want to call the view that broad, stable, enduring character traits explain most of our moral behavior). I recommend highlighting the elements of truth in both views, and then showing how there are both situationist and dispositionist elements in both Mengzi and Xunzi (e.g., Mengzi says that young men are mostly cruel in times of famine, but he also recommends cultivating stable dispositions). Students can be encouraged to discuss how well or poorly the three different types of approach explain the lynchings and the Milgram results.

If desired, Day Six and beyond can cover material on the Holocaust. Hannah Arendt’s Eichmann in Jerusalem and Daniel Goldhagen’s Hitler’s Willing Executioners make a good contrast (with Mengzian elements in Arendt and Xunzian elements in Goldhagen). (If you do use Goldhagen, be sure you are aware of the legitimate criticisms of some aspects of his view by Browning and others.)

Discussion Questions
  • What emotions are the lynchers feeling in the photographs?
  • If the lynchers had stopped to reflect on their actions, would they have been able to realize that what they were doing was morally wrong?
  • Mengzi lived in a time of great chaos and evil. Although he thought human nature was good, he never denied that people actually commit great evil. What resources are available in his view to explain actions like those of the lynch mobs, or other types of evil actions?
  • Is morality an artificial cultural invention? Or do we all have natural moral tendencies that only need to be cultivated in a nurturing environment?
  • In elementary school moral education, is it better to focus on enforcing rules that might not initially make sense to the children, or is it better to try to appeal to their sympathies and concerns for other people?
  • How effectively do you think people can predict what they themselves would do in a situation like the Milgram experiment or a lynch mob?
  • Are there people who are morally steadfast enough to resist even strong situational pressures? If so, how do they become like that?
Activities (optional)

On the first day, an in-class assignment might be for students to spend 5-7 minutes writing down their opinion on whether human nature is good or evil (or in-between, or alternatively that the question doesn’t even make sense as formulated). They can then trade their written notes with a neighbor or two and compare answers. On the last day, they can review what they wrote on the first day and discuss whether their opinions have changed.
[Greetings from Graz, Austria, by the way!]

Friday, May 19, 2017

Pre-Excuse

I'm heading off to Europe tomorrow for a series of talks and workshops. Nijmegen, Vienna, Graz, Lille, Leuven, Antwerp, Oxford, Cambridge -- whee! Then back to Riverside for a week and off to Iceland with the family to celebrate my son's high school graduation. Whee again! I return to sanity July 5.

I've sketched out a few ideas for blog posts, but nothing polished.

If I descend into incoherence, I have my pre-excuse ready! Jetlag and hotel insomnia.


Thursday, May 18, 2017

Hint, Confirm, Remind

You can't say anything only once -- not when you're writing, not if you want the reader to remember. People won't read the words exactly as you intend them, or they will breeze over them; and often your words will admit of more interpretations than you realize, which you rule out by clarifying, angling in, repeating, filling out with examples, adding qualifiers, showing how what you say is different from some other thing it might be mistaken for.

I have long known this about academic writing. Some undergraduates struggle to fill their 1500-word papers because they think that every idea gets one sentence. How do you have eighty ideas?! It becomes much easier to fill the pages -- indeed the challenge shifts from filling the pages to staying concise -- once you recognize that every idea in an academic paper deserves a full academic-sized paragraph. Throw in an intro and conclusion and you've got, what, five ideas in a 1500-word paper? Background, a main point, one elaboration or application, one objection, a response -- done.

It took a while for me to learn that this is also true in writing fiction. You can't just say something once. My first stories were too dense. (They are now either trunked or substantially expanded.) I guess I implicitly figured that you say something, maybe in a clever oblique way, the reader gets it, and you're done with that thing. Who wants boring repetition and didacticism in fiction?

Without being didactically tiresome, there are lots of ways to slow things down so that the reader can relish your idea, your plot turn, your character's emotion or reaction, rather than having the thing over and done in a sentence. You can break it into phases; you can explicitly set it up, then deliver; you can repeat in different words (especially if the phrasings are lovely); you can show different aspects of the scene, relevant sensory detail, inner monologue, other characters' reactions, a symbolic event in the environment.

But one of my favorite techniques is hint, confirm, remind. You can do this in a compact way (as in the example I'm about to give), but writers more commonly spread HCRs throughout the story. Some early detail hints or foreshadows -- gives the reader a basis for guessing. Then later, when you hit it directly, the earlier hint is remembered (or if not, no biggie, not all readers are super careful), and the alert reader will enjoy seeing how the pieces come together. Still later, you remind the reader -- more quickly, like a final little hammer tap (and also so that the least alert readers finally get it).

Neil Gaiman is a master of the art. As I was preparing some thoughts for a fiction-writing workshop for philosophers I'm co-leading next month, I noticed this passage about "imposter syndrome", recently going around. Here's Gaiman:

Some years ago, I was lucky enough to be invited to a gathering of great and good people: artists and scientists, writers and discoverers of things. And I felt that at any moment they would realise that I didn’t qualify to be there, among these people who had really done things.

On my second or third night there, I was standing at the back of the hall, while a musical entertainment happened, and I started talking to a very nice, polite, elderly gentleman about several things, including our shared first name. And then he pointed to the hall of people, and said words to the effect of, "I just look at all these people, and I think, what the heck am I doing here? They’ve made amazing things. I just went where I was sent."

And I said, "Yes. But you were the first man on the moon. I think that counts for something."

And I felt a bit better. Because if Neil Armstrong felt like an imposter, maybe everyone did.

Hint: an elderly gentleman, same first name as Gaiman, famous enough to be backstage among well known artists and scientists. Went where he was sent.

Confirm: "You were the first man on the moon".

Remind: "... if Neil Armstrong..."

The hints set up the puzzle. It's unfolding fast before you, if you're reading at a normal pace. You could slow way down and treat it as a riddle, but few of us would do that.

The confirm gives you the answer. Now it all fits together. Bonus points to Gaiman for making it natural dialogue rather than flat-footed exposition.

The remind here is too soon after the confirm to really be a reminder, as it would be if it appeared a couple of pages later in a longer piece of writing. But the basic structure is the same: The remind hammer-taps the thing that should already be obvious, to make sure the reader really has it -- but quickly, with a light touch.

If you want the reader to remember, you can't just say it only once.


Thursday, May 11, 2017

The Sucky and the Awesome

Here are some things that "suck":

  • bad sports teams;
  • bad popular music groups;
  • getting a flat tire, which you try to change in the rain because you're late to catch a plane for that vacation trip you've been planning all year, but the replacement tire is also flat, and you get covered in mud, miss the plane, miss the vacation, and catch a cold;
  • me, at playing Sonic the Hedgehog.
It's tempting to say that all bad things "suck". There probably is a legitimate usage of the term on which you can say of anything bad that it sucks; and yet I'm inclined to think that this broad usage is an extension from a narrower range of cases that are more central to the term's meaning.

Here are some bad things that it doesn't seem quite as natural to describe as sucking:

  • a broken leg (though it might suck to break your leg and be laid up at home in pain);
  • lying about important things (though it might suck to have a boyfriend/girlfriend who regularly lies);
  • inferring not-Q from (i) P implies Q and (ii) not-P (though you might suck at logic problems);
  • the Holocaust.

The most paradigmatic examples of suckiness combine aesthetic failure with failure of skill or functioning. The sports team or the rock band, instead of showing awesome skill and thereby creating an awesome audience experience of musical or athletic splendor, can be counted on to drop the ball, hit the wrong note, make a jaw-droppingly stupid pass, choose a trite chord and tacky lyric. Things that happen to you can suck in a similar way to the way it sucks to be stuck at a truly horrible concert: Instead of having the awesome experience you might have hoped for, you have a lousy experience (getting splashed while trying to fix your tire, then missing your plane). There's a sense of waste, lost opportunity, distaste, displeasure, and things going badly. You're forced to experience one stupid, rotten thing after the next.

Something sucks if (and only if) it should deliver good, worthwhile experiences or results, but it doesn't, instead wasting people's time, effort, and resources in an unpleasant and aesthetically distasteful way.

The opposite of sucking is being awesome. Notice the etymological idea of "awe" in the "awesome": Something is awesome if it does or should produce awe and wonder at its greatness -- its great beauty, its great skill, the way everything fits elegantly together. The most truly sucky of sucky things instead produces wonder at its badness. Wow, how could something be that pointless and awful! It's amazing!

That "sucking" focuses our attention on the aesthetic and experiential is what makes it sound not quite right to say that the Holocaust sucked. In a sense, of course, the Holocaust did suck. But the phrasing trivializes it -- as though what is most worth comment is not the moral horror and the millions of deaths but rather the unpleasant experiences it produced.

Similarly for other non-sucky bad things. What's central to their badness isn't aesthetic or experiential. To find nearby things that more paradigmatically suck, you have to shift to the experiential or to a lack of (awesome) skill or functioning.

All of this is very important to understand as a philosopher, of course, because... because...

Well, look. We wouldn't be using the word "sucks" so much if it wasn't important to us whether or not things suck, right? Why is it so important? What does it say about us, that we think so much in terms of what sucks and what is awesome?

Here's a Google Ngram of "that sucks, this sucks, that's awesome". Notice the sharp rise that starts in the mid-1980s and appears to be continuing through the end of the available data.
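If you'd like to re-create the chart, here's a minimal Python sketch. Fair warning: it leans on the unofficial JSON endpoint behind the Ngram Viewer, which is undocumented, so the parameter names, corpus label, and response format below are assumptions that may have gone stale.

```python
# Sketch: query the (unofficial, undocumented) Ngram Viewer JSON endpoint and
# plot the three phrases. Parameters and response format are assumptions.
import requests
import matplotlib.pyplot as plt

year_start, year_end = 1950, 2008

resp = requests.get(
    "https://books.google.com/ngrams/json",
    params={
        "content": "that sucks,this sucks,that's awesome",
        "year_start": year_start,
        "year_end": year_end,
        "corpus": "en-2009",  # assumed corpus label
        "smoothing": 3,
    },
    timeout=30,
)
resp.raise_for_status()

# Assumed response: a list of {"ngram": ..., "timeseries": [...]} records.
for series in resp.json():
    years = list(range(year_start, year_start + len(series["timeseries"])))
    plt.plot(years, series["timeseries"], label=series["ngram"])

plt.ylabel("relative frequency in Google Books")
plt.legend()
plt.show()
```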


We seem to be more inclined than ever to divide the world into the sucky and the awesome.

To see the world through the lens of sucking and awesomeness is to evaluate the world as one would evaluate a music video: in terms of its ability to entertain, and generate positive experiences, and wow with its beauty, magnificence, and amazing displays of skill.

It's to think like Beavis and Butthead, or like the characters in the Lego Movie.

That sounds like a superficial perspective on the world, but there's also something glorious about it. It's glorious that we have come so far -- that our lives are so secure that we expect them to be full of positive aesthetic experiences and maestro performances, so that we can dismissively say "that sucks!" when those high expectations aren't met.

--------------------------------------

For a quite different (but still awesome!) analysis of the sucky and the awesome, check out Nick Riggle's essay "How Being Awesome Became the Great Imperative of Our Time".

Many thanks to my Facebook friends and followers for the awesome comments and examples on my public post about this last week.

Wednesday, May 03, 2017

On Trump's Restraint and Good Judgment (I Hope)

Yesterday afternoon, I worked up the nerve to say the following to a room full of (mostly) white retirees in my politically middle-of-the-road home town of Riverside, California.

(I said this after giving a slightly trimmed version of my Jan 29 L.A. Times op-ed What Happens to Democracy If the Experts Can't Be Both Factual and Balanced.)

Our democracy requires substantial restraints on the power of the chief executive. The president cannot simply do whatever he wants. That's dictatorship.

Dictatorship has arrived when other branches of government -- the legislature and the judiciary -- are unable to thwart the president. This can happen either because the other branches are populated with stooges or because the other branches reliably fail in their attempts to resist the president.

President Trump appears to have expressed admiration for undemocratic chief executives who seize power away from judiciaries and legislatures.

Here's something that could occur. President Trump might instruct the security apparatus of the United States -- the military, the border patrol, police departments -- to do something, for example to imprison or deport groups of people he describes as a threat. And then a judge or a group of judges might decide that Trump's instructions should not be implemented. And Trump might persist rather than deferring. He might insist that the judge or judges who aim to block him are misinterpreting or misusing the law. He might demand that his orders be implemented despite the judicial outcome.

Here's one reason to think that won't occur: In January, Trump issued an executive order banning travel from seven majority-Muslim countries. When judges decided to block the order, Trump backed down. He insulted the judges and derided the decision, saying it left the nation less safe. But he did not demand that the security apparatus of the United States ignore the decision.

So that's good.

Probably Trump will continue to defer to the judiciary in that way. He has not been as aggressive about seizing power as he could have been, if he were set upon maximizing executive power.

    But if, improbably, Trump in the future decides to continue with an order that a judge is attempting to halt -- if, for some reason, Trump decides to insist that the executive branch disregard what he sees as an unwise and unjust judicial decision -- then quite suddenly our democracy would be compromised.

    Democracy depends on the improbable capacity of a few people who sit in courtrooms and study the law to convince large groups of people with guns to do things that those people with guns might not want to do, including things that the people with guns regard as contrary to the best interest of their country and the safety of their communities. It's quite amazing. A few people in black robes -- perhaps themselves with divided opinions -- versus the righteous desires of an army.

    If Trump says do this, and a judge in Hawaii says no, stop, and then Trump says army of mine, ignore that judge, what will the people with the guns do?

    It won’t happen. I don’t think it will happen.

    We as a country have chosen to wager our democracy on Trump's restraint and good judgment.

    [image source]

    Tuesday, May 02, 2017

    Is My Usage of "Crazy" Ableist?

    In 2014, I published a paper titled "The Crazyist Metaphysics of Mind". Since the beginning, I have been somewhat ambivalent about my use of the word "crazy".

    Some of my friends have expressed the concern that my use of "crazy" is ableist. I do agree that the use of "crazy" can be ableist -- for example, when it is used to insult or dismiss someone with a perceived psychological disability.

    I have a new book contract with MIT Press. The working title of the book is "How to Be a Crazy Philosopher". Some of my friends have urged me to reconsider the title.

    I disagree that the usage is ableist, but I am open to being convinced.

    I define a position as "crazy" just in case (1) it is highly contrary to common sense, and (2) we are not epistemically compelled to believe it. "Crazyism" about some domain is the view that something that meets conditions (1) and (2) must be true in that domain. I defend crazyism about the metaphysics of mind, and in some other areas. In these areas, something highly contrary to common sense must be true, but we are not in a good epistemic position to know which of the "crazy" possibilities is the true one. For example, panpsychism might be true, or the literal group consciousness of the United States, or the transcendental ideality of space, or....

    I believe that this usage is not ableist in part because (a) I am using the term with a positive valence, (b) I am not labeling individual people, and (c) the term is often used with a positive valence in our culture when it is not used to label people (e.g., "that's some crazy jazz!", "we had a crazy good time in Vegas"). I'm inclined to think that usages like those are typically morally permissible and not objectionably ableist.

    I welcome discussion, either in comments on this post or by email, if you have thoughts about this.

    Update: On my public post on Facebook, Daniel Estrada writes:

    I think the critical thing is to explicitly acknowledge and appreciate how the term "crazy" has been used to stigmatize and mystify issues around mental health. I don't think it's wrong to use any term, as long as you appreciate its history, and how your use contributes to that history. I think the overlap on "mystification" in your use is the extra prickly thorn in this nest. Contributing an essay (maybe just the preface?) where you address these complications explicitly seems like basic due diligence.

    I like that idea. If I keep the title and the usage, perhaps we can premise further discussion on the assumption that I do something like what Daniel has suggested.

    My Next Book...

    I've signed a contract with MIT Press for my next book. Working title: How to Be a Crazy Philosopher.

    The book will collect, revise, and to some extent integrate selected blog posts, op-eds, and longform journalism pieces, plus some new material. It will not be thematically unified around "crazyism" although of course it will include some of my material on that theme.

    Readers, if any of my posts have struck you as especially memorable and worth including, I'd be interested to hear your opinion, either in the comments to this post or by email.

    -----------------------------------

    Some friends have expressed concerns about my use of "crazy" in the working title, since they view the usage as ableist. I am ambivalent about my use of the word, though I have been on the hook for it since at least 2014, when I published "The Crazyist Metaphysics of Mind". I will now create a separate post for discussion of that issue.

    Thursday, April 27, 2017

    The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot

    Here are four things I care intensely about: being a good father, being a good philosopher, being a good teacher, and being a morally good person. It would be lovely if there were never any tradeoffs among these four aims.

    Explicitly acknowledging such tradeoffs is unpleasant -- sufficiently unpleasant that it's tempting to try to rationalize them away. It's distinctly uncomfortable to me, for example, to acknowledge that I would probably be better as a father if I traveled less for work. (I am writing this post from a hotel room in England.) Similarly uncomfortable is the thought that the money I'll be spending on a family trip to Iceland this summer could probably save a few people from death due to poverty-related causes, if given to the right charity.

    Today I'll share two of my favorite techniques for rationalizing the unpleasantness away. Maybe you'll find these techniques useful too!

    The Happy Coincidence Defense. Consider travel for work. I don't have to travel around the world, giving talks and meeting people. It's not part of my job description. No one will fire me if I don't do it, and some of my colleagues do it considerably less than I do. On the face of it, I seem to be prioritizing my research career at the cost of being a somewhat less good father, teacher, and global moral citizen (given the luxurious use of resources and the pollution of air travel).

    The Happy Coincidence Defense says, no, in fact I am not sacrificing these other goals at all! Although I am away from my children, I am a better father for it. I am a role model of career success for them, and I can tell them stories about my travels. I have enriched my life, and then I can mingle that richness into theirs. I am a more globally aware, wiser father! Similarly, although I might cancel a class or two and de-prioritize my background reading and lecture preparation, since research travel improves me as a philosopher, it improves my teaching in the long run. And my philosophical work, isn't that an important contribution to society? Maybe it's important enough to morally justify the expense, pollution, and waste: I do more good for the world traveling around discussing philosophy than I could do leading a more modest lifestyle at home, donating more money to charities, and working within my own community.

    After enough reflection of this sort, it can come to seem that I am not making any tradeoffs at all among these four things I care intensely about. Instead, I am maximizing them all! This trip to England is the best thing I can do, all things considered, as a philosopher and as a father and as a teacher and as a citizen of the moral community. Yay!

    Now that might be true. If so, that would be a happy coincidence. Sometimes there really are such happy coincidences. But the pattern of reasoning is, I think you'll agree, suspicious. Life is full of tradeoffs among important things. One cannot, realistically, always avoid hard choices. Happy Coincidence reasoning has the odor of rationalization. It seems likely that I am illegitimately convincing myself that something I want to be true really is true.

    The-Most-I-Can-Do Sweet Spot. Sometimes people try so hard at something that they end up doing worse as a result. For example, trying too hard to be a good father might make you into a father who is overbearing, who hovers too much, who doesn't give his children sufficient distance and independence. Teaching sometimes goes better when you don't overprepare. And sometimes, maybe, moral idealists push themselves so hard in pursuit of their ideals that they would have been better off pursuing a more moderate, sustainable course. For example, someone moved by the arguments for vegetarianism who immediately attempts the very strictest veganism might be more likely to revert to cheeseburger eating after a few months than someone who sets their sights a bit lower.

    The-Most-I-Can-Do Sweet Spot reasoning harnesses these ideas for convenient self-defense: Whatever I'm doing right now is the most I can realistically, sustainably do! Were I to try any harder to be a good father, I would end up being a worse father. Were I to spend any more time reading and writing philosophy than I actually do, I would only exhaust myself. If I gave any more to charity, or sacrificed any more for the well-being of others in my community, then I would... I would... I don't know, collapse from charity-fatigue? Or seethe so much with resentment at how more awesomely moral I am than everyone else that I'd be grumpy and end up doing some terrible thing?

    As with Happy Coincidence reasoning, The-Most-I-Can-Do Sweet Spot reasoning can sometimes be right. Sometimes you really are doing the most you can do about everything you care intensely about. But it would be kind of amazing if this were reliably the case. It wouldn't be that hard for me to be a somewhat better father, or to give somewhat more to my students -- with or without trading off other things. If I reliably think that wherever I happen to be in such matters, that's the Sweet Spot, I am probably rationalizing.

    Having cute names for these patterns of rationalization helps me better spot them as they are happening, I think -- both in myself and sometimes, I admit, somewhat uncharitably, also in others.

    Rather than think of something clever to say as the kicker for this post, I think I'll give my family a call.

    Friday, April 21, 2017

    Common Sense, Science Fiction, and Weird, Uncharitable History of Philosophy

    Philosophers have three broad methods for settling disputes: appeal to "common sense" or culturally common presuppositions, appeal to scientific evidence, and appeal to theoretical virtues like simplicity, coherence, fruitfulness, and pragmatic value. Some of the most interesting disputes are disputes in which all three of these broad methods are problematic and seemingly indecisive.

    One of my aims as a philosopher is to intervene on common sense. "Common sense" is inherently conservative. Common sense used to tell us that the Earth didn't move, that humans didn't descend from ape-like ancestors, that certain races were superior to others, that the world was created by a god or gods of one sort or another. Common sense is a product of biological and cultural evolution, plus the cognitive and social development of people in a limited range of environments. Common sense only has to get things right enough, for practical purposes, to help us manage the range of environments to which we are accustomed. Common sense is under no obligation to get it right about the early universe, the microstructure of matter, the history of the species, future technologies, or the consciousness of weird hypothetical systems we have never encountered.

    The conservatism and limited vision of common sense lead us to dismiss as "crazy" some philosophical and scientific views that might in fact be true. I've argued that this is especially so regarding theories of consciousness, about which something crazy must be true. For example: literal group consciousness, panpsychism, and/or the failure of pain to supervene locally. Although I don't believe that existing arguments decisively favor any of those possibilities, I do think that we ought to restrain our impulse to dismiss such views out of hand. Fit with common sense is one important factor in evaluating philosophical claims, especially when direct scientific evidence and considerations of general theoretical virtue are indecisive, but it is only one factor. We ought to be ready to accept that in some philosophical domains, our commonsense intuitions cannot be entirely preserved.

    Toward this end, I want to broaden our intuitive sense of the possible. The two best techniques I know are science fiction and cross-cultural philosophy.

    The philosophical value of science fiction consists not only in the potential of science fictional speculations to describe possible futures that we might actually encounter. Historically, science fiction has not been a great predictor of the future. The primary philosophical value of science fiction might rather consist in its ability to flex our minds and disrupt commonsense conservatism. After reading far-out stories about weird utopias, uploading into simulated realities, bizarrely constructed intelligent aliens, body switching, Matrioshka Brains, and alternative universes, philosophical speculations about panpsychism and group consciousness no longer seem quite so intolerably weird. At least that's my (empirically falsifiable) conjecture.

    Similarly, brain-flexing is an important part of the value of reading the history of philosophy -- especially work from traditions other than those with which you are already familiar. Here it's especially important not to be too "charitable" (i.e. assimilative). Relish the weirdness -- "weird" from your perspective! -- of radical Buddhist metaphysics, of medieval Chinese neo-Confucianism, of neo-Platonism in late antiquity, of 19th century Hegelianism and neo-Hegelianism.

    If something that seems crazy must be true about the metaphysics of consciousness, or about the nature of objects and causes, or about the nature of moral value -- as extended philosophical discussions of these topics suggest probably is the case -- then to evaluate the possibilities without excess conservatism, we need to get used to bending our minds out of their usual ruts.

    This is my new favorite excuse for reading Ted Chiang, cyberpunk, and Zhuangzi.

    [image source]

    Friday, April 14, 2017

    We Who Write Blogs Recommend... Blogs!

    Here's The 20% Statistician, Daniel Lakens, on why blogs have better science than Science.

    Lakens observes that blogs (usually) have open data, sources, and materials; open peer review; no eminence filter; easy error correction; and open access.

    I would add that blogs are designed to fit human cognitive capacities. To reach a broad audience, they are written to be broadly comprehensible -- and as it turns out, that's a good thing for science (and philosophy), since it reduces the tendency to hide behind jargon, technical obscurities, and dubious shared subdisciplinary assumptions. The length of a typical substantive blog post (500-1500 words) is also, I think, a good size for human cognition: long enough to have some meat and detail, but short enough that the reader can keep the entire argument in view. These features make blog posts much easier to critique, enabling better evaluation by specialists and non-specialists alike.

    Someone will soon point out, for public benefit, the one-sidedness of Lakens' and my arguments here.

    [HT Wesley Buckwalter]

    Sunday, April 09, 2017

    Does It Matter If the Passover Story Is Literally True?

    My opinion piece in today's LA Times.

    You probably already know the Passover story: How Moses asked Pharaoh to let his enslaved people leave Egypt, and how Moses’ god punished Pharaoh — bringing about the death of the Egyptians’ firstborn sons even as he passed over Jewish households. You might even know the ancillary tale of the Passover orange. How much truth is there in these stories? At synagogues this time of year, myth collides with fact, tradition with changing values. Negotiating this collision is the puzzle of modern religion.

    Passover is a holiday of debate, reflection, and conversation. Last Passover, as my family and I and the rest of the congregation waited for the feast at our Reform Jewish temple, our rabbi prompted us: “Does it matter if the story of Passover isn’t literally true?”

    Most people seemed to shake their heads. No, it doesn’t matter.

    I was imagining the Egyptians’ sons. I am an outsider to the temple. My wife and teenage son are Jewish, but I am not. My 10-year-old daughter, adopted from China at age 1, describes herself as “half Jewish.”

    I nodded my head. Yes, it does matter if the Passover story is literally true.

    “Okay, Eric, why does it matter?” Rabbi Suzanne Singer handed me the microphone.

    I hadn’t planned to speak. “It matters,” I said, “because if the story is literally true, then a god who works miracles really exists. It matters if there is such a god or not. I don’t think I would like the moral character of that god, who kills innocent Egyptians. I’m glad there is no such god.”

    “It is odd,” I added, “that we have this holiday that celebrates the death of children, so contrary to our values now.”

    The microphone went around, others in the temple responding to me. Values change, they said. Ancient war sadly and necessarily involved the death of children. We’re really celebrating the struggle for freedom for everyone....

    Rabbi Singer asked if I had more to say in response. My son leaned toward me. “Dad, you don’t have anything more to say.” I took his cue and shut my mouth.

    Then the Seder plates arrived with the oranges on them.

    Seder plates have six labeled spots: two bitter herbs, charoset (fruit and nuts), parsley, a lamb bone, a boiled egg, each with symbolic value. There is no labeled spot for an orange.

    The first time I saw an orange on a Seder plate, I was told this story about it: A woman was studying to be a rabbi. An orthodox rabbi told her that a woman belongs on the bimah (pulpit) like an orange belongs on the Seder plate. When she became a rabbi, she put an orange on the plate.

    A wonderful story — a modern, liberal story. More comfortable than the original Passover story for a liberal Reform Judaism congregation like ours, proud of our woman rabbi. The orange is an act of defiance, a symbol of a new tradition that celebrates gender equality.

    Does it matter if it’s true?

    Here’s what actually happened. Dartmouth Jewish Studies professor Susannah Heschel was speaking to a Jewish group at Oberlin College in Ohio. The students had written a story in which a girl asks a rabbi if there is room for lesbians in Judaism, and the rabbi rises in anger, shouting, “There’s as much room for a lesbian in Judaism as there is for a crust of bread on the Seder plate!” Heschel, inspired by the students but reluctant to put anything as unkosher as leavened bread on the Seder plate, used a tangerine instead.

    The orange, then, is not a wild act of defiance, but already a compromise and modification. The shouting rabbi is not an actual person but an imagined, simplified foe.

    It matters that it’s not true. From the two stories of the orange, we learn the central lesson of Reform Judaism: that myths are cultural inventions built to suit the values of their day, idealizations and simplifications, changing as our values change — but also that only limited change is possible in a tradition-governed institution. An orange, but not a crust of bread.

    In a way, my daughter and I are also oranges: a new type of presence in a Jewish congregation, without a marked place, welcomed this year, unsure we belong, at risk of rolling off.

    In the car on the way home, my son scolded me: “How could you have said that, Dad? There are people in the congregation who take the Torah literally, very seriously! You should have seen how they were looking at you, with so much anger. If you’d said more, they would practically have been ready to lynch you.”

    Due to the seating arrangement, I had been facing away from most of the congregation. I hadn’t seen those faces. Were they really so outraged? Was my son telling me the truth on the way home that night? Or was he creating a simplified myth of me?

    In belonging to an old religion, we honor values that are no longer entirely ours. We celebrate events that no longer quite make sense. We can’t change the basic tale of Passover. But we can add liberal commentary to better recognize Egyptian suffering, and we can add a new celebration of equality.

    Although the new celebration, the orange, is an unstable thing atop an older structure that resists change, we can work to ensure that it remains. It will remain only if we can speak the story of it compellingly enough to give our new values too the power of myth.

    -------------------------------------

    Revised and condensed from my blog post "Orange on the Seder Plate" (Apr 27, 2016).

    Wednesday, April 05, 2017

    Only 4% of Editorial Board Members of Top-Ranked Anglophone Philosophy Journals Are from Non-Anglophone Countries

    If you're an academic aiming to reach a broad international audience, it is increasingly the case that you must publish in English. Philosophy is no exception. This trend gives native English speakers an academic advantage: They can more easily reach a broad international audience without having to write in a foreign language.

    A related question is the extent to which people who make their academic home in Anglophone countries control the English-language journals in which so much of our scholarly communication takes place. One could imagine the situation either way: Maybe the most influential academic journals in English are almost exclusively housed in Anglophone countries and have editorial boards almost exclusively composed of people in those same countries; or maybe English-language journals are a much more international affair, led by scholars from a diverse range of countries.

    To examine this question, I looked at the editorial boards of the top 15 ranked journals in Brian Leiter's 2013 poll of "top philosophy journals without regard to area". I noted the primary institution of every board member. (For methodological notes see the supplement at the end.)

    In all, 564 editorial board members were included in the analysis. Of these, 540 (96%) had their primary academic affiliation with an institution in an Anglophone country. Only 4% of editorial board members had their primary academic affiliation in a non-Anglophone country.

    The following Anglophone countries were represented:

    USA: 377 philosophers (67% of total)
    UK: 119 (21%)
    Australia: 26 (5%)
    Canada: 13 (2%)
    New Zealand: 5 (1%)

    The following non-Anglophone countries were represented:

    Germany: 6 (1%)
    Sweden: 5 (1%)
    Netherlands: 3 (1%)
    China (incl. Hong Kong): 2 (<1%)
    France: 2 (<1%)
    Belgium: 1 (<1%)
    Denmark: 1 (<1%)
    Finland: 1 (<1%)
    Israel: 1 (<1%)
    Singapore: 1 (<1%) [N.B.: English is one of four official languages]
    Spain: 1 (<1%)

    Worth noting: Synthese showed much more international participation than any of the other journals, with 13/31 (42%) of its editorial board from non-Anglophone countries.

    It seems to me that if English is to continue in its role as the de facto lingua franca of philosophy (ironic foreign-language use intended!), then the editorial boards of the most influential journals ought to reflect substantially more international participation than this.

    -------------------------------------------------

    Related Posts:

    How Often Do Mainstream Anglophone Philosophers Cite Non-Anglophone Sources? (Sep 8, 2016)

    SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (Aug 14, 2014)

    -------------------------------------------------

    Methodological Notes:

    The 15 journals were Philosophical Review, Journal of Philosophy, Nous, Mind, Philosophy & Phenomenological Research, Ethics, Philosophical Studies, Australasian Journal of Philosophy, Philosopher's Imprint, Analysis, Philosophical Quarterly, Philosophy & Public Affairs, Philosophy of Science, British Journal for the Philosophy of Science, and Synthese. Some of these journals are "in house" or have a regional focus in their editorial boards. I did not exclude them on those grounds. It is relevant to the situation that the two top-ranked journals on this list are edited by the faculty at Cornell and Columbia respectively.

    I excluded editorial assistants and managers without full-time permanent academic appointments (typically grad students, publishing staff, or secretarial staff). I included editorial board members, managers, consultants, and staff with full-time permanent academic appointments, including emeritus.

    I used the institutional affiliation listed at the journal's "editorial board" website when that was available (even in a few cases where I knew the information to be no longer current), otherwise I used personal knowledge or a web search. In each case, I tried to determine the individual's primary institutional affiliation or most recent primary affiliation for emeritus professors. In a few cases where two institutions were about equally primary, I used the first-listed institution either on the journal's page or on a biographical or academic source page that ranked highly in a Google search for the philosopher.

    I am sure I have made some mistakes! I've made the raw data available here. I welcome corrections. However, I will only make corrections in accord with the method above. For example, it is not part of my method to update inaccurate affiliations on the journals' websites. Trying to do so would be unsystematic, disproportionately influenced by blog readers and people in my social circle.

    A few mistakes are inevitable in projects of this sort and shouldn't have a large impact on the general findings.
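    For anyone who wants to double-check the tallies and percentages above against the raw data, here is a minimal sketch of the arithmetic in Python. It assumes a hypothetical spreadsheet exported as "editorial_boards.csv" with one row per board member and a "country" column; the actual raw-data file may be organized differently.

        # Hypothetical sketch, not the analysis script actually used for this post.
        # Assumes "editorial_boards.csv" has one row per board member and a "country" column.
        import csv
        from collections import Counter

        ANGLOPHONE = {"USA", "UK", "Australia", "Canada", "New Zealand"}

        with open("editorial_boards.csv", newline="") as f:
            countries = [row["country"] for row in csv.DictReader(f)]

        total = len(countries)
        counts = Counter(countries)
        anglo = sum(n for country, n in counts.items() if country in ANGLOPHONE)

        print(f"Total board members: {total}")
        print(f"Anglophone: {anglo} ({100 * anglo / total:.0f}%)")
        print(f"Non-Anglophone: {total - anglo} ({100 * (total - anglo) / total:.0f}%)")
        for country, n in counts.most_common():
            print(f"  {country}: {n} ({100 * n / total:.0f}%)")

    On the counts reported above (564 members, 540 with Anglophone affiliations), this prints 96% and 4%, matching the figures in the post.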

    -------------------------------------------------

    [image source]