Tuesday, November 29, 2016

How Everything You Do Might Have Huge Cosmic Significance

Infinitude is a strange and wonderful thing. It transforms the ridiculously improbable into the inevitable.

Now hang on to your hat and glasses. Today's line of reasoning is going to make mere Boltzmann continuants seem boring and mundane.

First, let's suppose that the universe is infinite. This is widely viewed as plausible (see Brian Greene and Max Tegmark).

Second, let's suppose that the Copernican Principle holds: We are not in any special position in the universe. This principle is also widely accepted.

Third, let's assume cosmic diversity: We aren't stuck in an infinitely looping variant of a mere (proper) subset of the possibilities. Across infinite spacetime, there's enough variety to run through every finitely specifiable possibility infinitely often.

These assumptions are somewhat orthodox. To get my argument going, we also need a few assumptions that are less orthodox, but I hope not wildly implausible.

Fourth, let's assume that complexity scales up infinitely. In other words, as you zoom out on the infinite cosmos, you don't find that things eventually look simpler as the scale of measurement gets bigger.

Fifth, let's assume that local actions on Earth have chaotic effects of an arbitrarily large magnitude. You know the Butterfly Effect from chaos theory -- the idea that a small perturbation in a complex, "chaotic" system can make a large-scale difference in the later evolution of the system. A butterfly flapping its wings in China could cause the weather in the U.S. weeks later to be different than it would have been if the butterfly hadn't flapped its wings. Small perturbations amplify. This fifth assumption is that there are cosmic-scale butterfly effects: far-distant, arbitrarily large future events that arise with chaotic sensitivity to events on Earth. Maybe new Big Bangs are triggered, or maybe (as envisioned by Boltzmann) given infinite time, arbitrarily large systems will emerge by chance from high-entropy "heat death" states, and however these Big Bangs or Boltzmannian eruptions arise, they are chaotically sensitive to initial conditions -- including the downstream effects of light reflected from Earth's surface.

Okay, that's a big assumption to swallow. But I don't think it's absurd. Let's just see where it takes us.

Sixth, let's assume that, given the right kind of complexity, evolutionary processes will transpire that favor intelligence. We would not expect such evolutionary processes at most spatiotemporal scales. However, given that complexity scales up infinitely (our fourth assumption), we should expect that at some finite proportion of spatiotemporal scales there are complex systems structured in a way that enables the evolution of intelligence.

From all this it seems to follow that what happens here on Earth -- including the specific choices you make, chaotically amplified as you flap your wings -- can have effects on a cosmic scale that influence the cognition of very large minds.

(Let me be clear that I mean very large minds. I don't mean galaxy-sized minds or visible-universe-sized minds. Galaxy-sized and visible-universe-sized structures in our region don't seem to be of the right sort to support the evolution of intelligence at those scales. I mean way, way up. We have infinitude to play with, after all. And presumably way, way slow if the speed of light is a constraint. Also, I am assuming that time and causation make sense at arbitrarily large scales, but maybe that can be weakened if necessary to something like contingency.)

Now, at such scales, anything little old you personally does would very likely be experienced as chance. Suppose, for example, that a cosmic mind utilizes the inflation of Big Bangs. Even if your butterfly effects cause a future Big Bang to happen this way rather than that way, probably a mind at that scale wouldn't have evolved to notice tiny-scale causes like you.

Far-fetched. Cool, perhaps, depending on your taste in cool. Maybe not quite cosmic significance, though, if your decisions only feed a pseudo-random mega-process whose outcome has no meaningful relationship to the content of your decisions.

But we do have infinitude to play with, so we can add one more twist.

Here it is: If the odds of influencing the behavior of an arbitrarily large intelligent system are finite, and if we're letting ourselves scale up arbitrarily high, then (granting all the rest of the argument) your decisions will affect the behavior of an infinite number of huge, intelligent systems. Among them there will be some -- a tiny but finite proportion! -- such that the following counterfactual is true: If you hadn't made that upbeat, life-affirming choice you in fact just made, that huge, intelligent system would have decided that life wasn't worth living. But fortunately, partly as a result of that thing you just did, that giant intelligence -- let's call it Emily -- will discover happiness and learn to celebrate its existence. Emily might not know about you. Emily might think it's random or find some other aspect of the causal chain to point toward. But still, if you hadn't done that thing, Emily's life would have been much worse.

So, whew! I hope it won't seem presumptuous of me to thank you on Emily's behalf.

[image source]

Sunday, November 27, 2016

The Odds of Getting Three Consecutive Wars in the Card Game of War

What better way to spend the Sunday after Thanksgiving than playing card games with your family and then arguing about the odds?

As pictured, my daughter and I just got three consecutive "wars" in the card game of war. (I lost with a 3 at the end!)

What are the odds of that?

Well, the odds of getting just one war are 3/51, right? Here's why. It doesn't matter whether my or my daughter's card is turned first. That card can be anything. The second card needs to match it. With the first card out of the deck, 51 cards remain. Three of them match the first-turned card. So 3/51 = .058824 = about a 5.9% chance.

Then you each play three face down "soldier" cards. Those could be any cards, and we don't know anything about them, so they can be ignored for purposes of calculation. What's relevant are the next upturned cards, the "generals". Here there are two possibilities. First possibility: The first general is the same value as the original war cards. Since there are 50 unplayed cards and two that match the original two war cards, the odds of that are 2/50 = .040000 = 4.0%. The other possibility is that the value of the first general differs from that of the war cards: 48/50 = .960000 = 96.0%.

(As I write this, my son is sleeping late and my wife and daughter are playing with Musical.ly -- other excellent ways to spend a lazy Sunday!)

In the first case, the odds of the second general matching are only one in 49 (.020408, about 2.0%), since three of the four cards of that value have already been played and there are 49 cards left in the deck (disregarding the soldiers). In the second case, the odds are three in 49 (.061224, about 6.1%).

So the odds of two consecutive wars are: .058824 * .04 * .020408 (first war, followed by matching generals, i.e. all four up cards the same) + .058824 * .96 * .061224 (first war, followed by a different pair of matching generals) = .000048 + .003457 = .003505. In other words, there's about a 0.35% chance, or about a 1 in 285 chance, of two consecutive wars.
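If you'd rather not take the hand arithmetic on faith, here's a quick sketch in Python that redoes the same case analysis with exact fractions (ignoring the soldier cards, just as above):

```python
from fractions import Fraction

# Two consecutive wars, following the case analysis above.
first_war = Fraction(3, 51)                # second up-card matches the first

# Case 1: generals match the original war cards' value, then each other.
same_value = Fraction(2, 50) * Fraction(1, 49)
# Case 2: generals are a fresh value and match each other.
fresh_value = Fraction(48, 50) * Fraction(3, 49)

two_wars = first_war * (same_value + fresh_value)
print(two_wars, float(two_wars))           # 73/20825, about .003505 (1 in 285)
```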

If the second war had generals that matched the original war cards, then there's only one way for the third war to happen. Player one draws any new general. The odds of player two's new general matching are 3/47 (.063830).

If the second war had generals that did not match the original war cards, then there are two possibilities.

First possibility: The first new general is the same value as one of the original war cards or previous generals. There's a 4 in 48 (.083333) chance of that happening (two remaining cards of each of those two values). Finally, there's a 1/47 (.021277) chance that the last general matches this one (last remaining card of that value).

Second possibility: The first new general is a different value from either the original war cards or the previous generals. The odds of that are 44/48 (.916667), followed by a 3/47 (.063830) chance of a match.

Okay, now we can total up the possibilities. There are three relevantly different ways to get three consecutive wars.

A: First war, followed by second war with same values, followed by third war with different values: .058824 (first war) * .040000 (first general matches war cards) * .020408 (second general matches first general) * .063830 (odds of third war with fresh card values) = .000003 (.0003% or about 1 in 330,000).

B: First war, followed by second war with different values, followed by third war with same values as one of the previous wars: .058824 (first war) * .960000 (first general doesn't match war cards) * .061224 (second general matches first general) * .083333 (first new general matches either war cards or previous generals) * .021277 (second new general matches first new general) = .000006 (.0006% or about 1 in 160,000).

C: First war, followed by second and third wars, each with different values: .058824 (first war) * .960000 (first general doesn't match war cards) * .061224 (second general matches first general) * .916667 (first new general doesn't match either war cards or previous generals) * .063830 (second new general matches first new general) = .000202 (.02% or about 1 in 5000).

Summing up these three paths: .000003 + .000006 + .000202 = .000211. In other words, the chance of three wars in a row is 0.0211% or 1 in 4739.
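Since it's easy to get a card-counting argument like this subtly wrong, here's a small Monte Carlo sketch in Python as a sanity check on that final figure. By symmetry we can deal off the top of a single shuffled deck, skipping the six soldier cards between each pair of up-cards, just as the calculation above does:

```python
import random

# Simulate the start of a game and test for three consecutive wars:
# ties at each of three up-card pairs, with six soldiers skipped
# between pairs (the soldiers are irrelevant by symmetry).
def three_wars_from_start(rng):
    deck = [value for value in range(13) for _ in range(4)]  # 13 values x 4 suits
    draw = rng.sample(deck, 18)    # 3 up-card pairs + 2 rounds of 6 soldiers
    return all(draw[i] == draw[i + 1] for i in (0, 8, 16))

rng = random.Random(2016)
trials = 1_000_000
hits = sum(three_wars_from_start(rng) for _ in range(trials))
print(hits / trials)               # should hover near .000211, i.e. about 1 in 4739
```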

Now for some leftover turkey.

-----------------------------------------------

As it happens we were playing the variant game Modern War -- which is much less tedious than the traditional card game of war! But since it was only the first campaign the odds are the same. (In later campaigns the odds of war increase, because smaller cards fall disproportionately out of the deck.)

Wednesday, November 23, 2016

The Moral Compass and the Liberal Ideal in Moral Education

Here are two very different approaches to moral education:

The outward-in approach. Inform the child what the rules are. Do not expect the child to like the rules or regard them as wise. Instead, enforce compliance through punishment and reward. Secondarily, explain the rules, with the hope that eventually the child will come to appreciate their wisdom, internalize them, and be willing to abide by them without threat of punishment.

The inward-out approach. When the child does something wrong, help the child see for herself what makes it wrong. Invite the child to reflect on what constitutes a good system of rules and what are good and bad ways to treat people, and collaborate in developing guidelines and ideals that make sense to the child. Trust that even young children can come to see the wisdom of moral guidelines and ideals. Punish only as a fallback when more collaborative approaches fail.

Though there need be no neat mapping, I conjecture that preference for the outward-in approach correlates with what we ordinarily regard as political conservatism and preference for the inward-out approach with what we ordinarily regard as political liberalism. The crucial difference between the two approaches is this: The outward-in approach trusts children's judgment less. On the outward-in approach, children should be taught to defer to established rules, even if those rules don't make sense to them. This resembles Burkean political conservatism among adults, which prioritizes respect for the functioning of our historically established traditions and institutions, mistrusting our current judgments about how those institutions might be improved or replaced.

In contrast, the liberal ideal in moral education depends on the thought that most or all people -- including most or all children -- have something like an inner moral compass, which can be relied on as at least a partial, imperfect guide toward what's morally good. If you take four-year-old Pooja aside after she has punched Lauren (names randomly chosen) and patiently ask her to explain herself and to think about the ethics of punching, you will get something sensible in reply. For the liberal ideal to work, it must be true that Pooja can be brought to understand the importance of treating others kindly and fairly. It must be true that after reflection, she will usually find that she wants to be kind and fair to others, even without outer reward.

This is a lot to expect from children. And yet I do think that most children, when approached patiently, can find their moral compass. In my experience watching parents and educators, it strikes me that when they are at their best -- not overloaded with stress or too many students -- they can successfully use the inward-out approach. Empirical psychology also suggests that the (imperfect, undeveloped) seeds of morality are present early in development and shared among primates.

It is I think foundational to the liberal conception of the human condition -- "liberal" in rejecting the top-down imposition of values and celebrating instead people's discovery of their own values -- that when they are given a chance to reflect, in conditions of peace, with broad access to relevant information, people will tend to find themselves revolted by evil and attracted to good. Hatred and evil wither under thoughtful critical examination. So we liberals must believe. Despite complexities, bumps, regressions, and contrary forces, reflection and broad exposure to facts and arguments will bend us toward freedom, egalitarianism, and respect.

If this is so, here's something you can always do: Invite people to think alongside you. Share the knowledge you have. If there is light and insight in your thinking, people will slowly walk toward it.

Related essay: Human Nature and Moral Education in Mencius, Xunzi, Hobbes, and Rousseau (History of Philosophy Quarterly, 2007)

[image source]

Tuesday, November 15, 2016

Three Ways to Be Not Quite Free of Racism

Suppose that you can say, with a feeling of sincerity, "All races and colors of people deserve equal respect". Suppose also that when you think about American Blacks or South Asians or Middle Eastern Muslims you don't detect any feelings of antipathy, or at least any feelings of antipathy that you believe arise merely from consideration of their race. This is good! You are not an all-out racist in the 19th-century sense of that term.

Still, you might not be entirely free of racial prejudice, if we took a close look at your choices, emotions, passing thoughts, and swift intuitive judgments about people.

Imagine then the following ideal: Being free of all unjustified racial prejudice. We can imagine similar ideals for classism, ableism, sexism, ethnicity, subculture, physical appearance, etc.

It would be a rare person who met all of these ideals. Yet not all falling short is the same. The recent election has made vivid for me three importantly distinct ways in which one can fall short. I use racism as my example, but other failures of egalitarianism can be analyzed similarly.

Racism is an attitude. Attitudes can be thought of as postures of the mind. To have an attitude is to be disposed to act and react in attitude-typical ways. (The nature of attitudes is a central part of my philosophical research. For a fuller account of my view, see here.) Among the dispositions constitutive of all-out racism are: making racist claims, purposely avoiding people of that race, uttering racist epithets in inner speech, feeling negative emotions when interacting with that race, leaping quickly to negative conclusions about individual members of that race, preferring social policies that privilege your preferred race, etc.

An all-out racist would have most or all of these dispositions (barring "excusing conditions"). Someone completely free of racism would have none of these dispositions. Likely, the majority of people in our culture inhabit the middle.

But "the middle" isn't all the same. Here are three very different ways of occupying it.

(1.) Implicit racism. Some of the relevant dispositions are explicitly or overtly racist -- for example, asserting that people of the target race are inherently inferior. Other dispositions are only implicitly or covertly racist, for example, being prone without realizing it to evaluate job applications more negatively if the applicant is of the target race, or being likely to experience negative emotion upon being assigned a cooperative task with a person of the target race. Recent psychological research suggests that many people in our culture, even if they reject explicitly racist statements, are disposed to have some implicitly racist reactions, at least occasionally or in some situations. We can thus construct a portrait of the "implicit racist": Someone who sincerely disavows all racial prejudice, but who nonetheless has a wide-ranging and persistent tendency toward implicitly racist reactions and evaluations. Probably no one is a perfect exemplar of this portrait, with all and only implicitly racist reactions, but it is probably common for people to match it to a certain extent. To that extent, whatever it is, that person is not quite free of implicit racism.

Implicit racism has received so much attention in the recent psychological and philosophical literature that one might think that it is the only way to be not quite free of racism while disavowing racism in the 19th-century sense of the term. Not so!

(2.) Situational racism. Dispositions manifest only under certain conditions. Priscilla (name randomly chosen) is disposed sincerely to say, if asked, that people of all races deserve equal respect. Of course, she doesn't actually spend the entire day saying this. She is disposed to say it only under certain conditions -- conditions, perhaps, that assume the continued social disapproval of racism. It might also be the case that under other conditions she would say the opposite. A person might be disposed sincerely to reject racist statements in some contexts and sincerely to endorse them in other contexts. This is not the implicit/explicit division. I am assuming both sides are explicit. Nor am I imagining a change in opinion over time. I am imagining a person like this: If situation X arose she would be explicitly racist, while if situation Y arose she would be explicitly anti-racist, maybe even passionately, self-sacrificingly so. This is not as incoherent as it might seem. Or if it is incoherent, it is a common human type of incoherence. The history of racism suggests that perfectly nice, non-racist-seeming people can change on a dime with a change in situation, and then change back when the situation shifts again. For some people, all it might take is the election of a racist politician. For others, it might take a more toxically immersive racist environment, or a personal economic crisis, or a demanding authority, or a recent personal clash with someone of the target race.

(3.) Racism of indifference. Part of what prompted this post was an interview I heard with someone who denied being racist on the grounds that he didn't care what happened to Black people. This deprioritization of concern is in principle separable from both implicit racism and situational racism. For example: I don't think much about Iceland. My concerns, voting habits, thoughts, and interests instead mostly involve what I think will be good for me, my family, my community, my country, or the world in general. But I'm probably not much biased against Iceland. I have mostly positive associations with it (beautiful landscapes, high literacy, geothermal power). Assuming (contra Mozi) that we have much greater obligations to family and compatriots than to people in far-off lands, my habit of not highly prioritizing the welfare of people in Iceland probably doesn't deserve to be labeled pejoratively with an "-ism". But a similar disregard or deprioritization of people in your own community or country, on grounds of their race, does deserve a pejorative label, independent of any implicit or explicit hostility.

These three ways of being not quite free of racism are conceptually separable. Empirically, though, things are likely to be messy and cross-cutting. Probably the majority of people don't map neatly onto these categories, but have a complex set of mixed-up dispositions. Furthermore, this mixed-up set probably often includes both racist dispositions and, right alongside, dispositions to admire, love, and even make special sacrifices for people who are racialized in culturally disvalued ways.

It's probably difficult to know the extent to which you yourself fail, in one or more of these three ways, to be entirely free of racism (sexism, ableism, etc.). Implicitly racist dispositions are by their nature elusive. So also is knowledge of how you would react to substantial changes in circumstance. So also are the real grounds of our choices. One of the great lessons of the past several decades of social and cognitive psychology is that we know far less than we think we know about what drives our preferences and about the situational influences on our behavior.

I am particularly struck by the potentially huge reach of the bigotry of indifference. Action is always a package deal. There are always pros and cons, which need to be weighed. You can't act toward one goal without simultaneously deprioritizing many other possible goals. Since it's difficult to know the basis of your prioritization of one thing over another, it is possible that the bigotry of indifference permeates a surprising number of your personal and political choices. Though you don't realize it, it might be the case that you would have felt more call to action had the welfare of a different group of people been at stake.

[image source Prabhu B Doss, creative commons]

Wednesday, November 09, 2016

Thought for the Day

What you believe is not what you say you believe. It is how you act.

What you desire is not what you say you desire. It is what you choose.

Who you are is how you live.

You know this about other people, but it is very difficult to know this about yourself.

--------------------------------------

Acting Contrary to Our Professed Beliefs (Pacific Philosophical Quarterly, 2010).

Knowing Your Own Beliefs (Canadian Journal of Philosophy, 2011).

A Dispositional Approach to the Attitudes (New Essays on Belief, 2013).

The Pragmatic Metaphysics of Belief (in draft)

Friday, November 04, 2016

Use of "Genius", "Strict", and "Sexy" in Teaching Evaluations, by Discipline and Gender of Professor

Interesting tool here, where you can search for terms in professors' teaching reviews, by discipline and gender.

The gender associations of "genius" with male professors are already fairly well known. Here's how they show in this database:


On the other hand, terms like "mean", "strict", and "unfair" tend to occur more commonly in reviews of female professors. Here's "strict":

How about "sexy"? You might imagine that going either way: Maybe female professors are more frequently rated by their looks. On the other hand, maybe it's "sexier" to be a professor if you're a man. Here how it turns out:

Update, 10:45.

I can't resist adding one more. "Favorite":

Wednesday, November 02, 2016

Introspecting an Attitude by Introspecting Its Conscious Face

In some of my published work, I have argued that:

(1.) Attitudes, such as belief and desire, are best understood as clusters of dispositions. For example, to believe that there is beer in the fridge is nothing more or less than to be disposed (all else being equal or normal) to go to the fridge if one wants a beer, to feel surprised if one were to open the fridge and find no beer, to conclude that the fridge isn't empty if that question becomes relevant, etc., etc. (See my essays here and here.)

And

(2.) Only conscious experiences are introspectible. I characterize introspection as "the dedication of central cognitive resources, or attention, to the task of arriving at a judgment about one's current, or very recently past, conscious experience, using or attempting to use some capacities that are unique to the first-person case... with the aim or intention that one's judgment reflect some relatively direct sensitivity to the target state" (2012, pp. 42-43).

Now it also seems correct that (3.) dispositions, or clusters of dispositions, are not the same as conscious experiences. One can be disposed to have a certain conscious experience (e.g., disposed to experience a feeling of surprise if one were to see no beer), but dispositions and their manifestations are not metaphysically identical. Oscar can be disposed to experience surprise if he were to see an empty fridge, even if he never actually sees an empty fridge and so never actually experiences surprise.

From these three claims it follows that we cannot introspect attitudes such as belief and desire.

But it seems we can introspect them! Right now, I'm craving a sip of coffee. It seems like I am currently experiencing that desire in a directly introspectible way. Or suppose I'm thinking to myself, in inner speech, "X would be such a horrible president!" It seems like I can introspectively detect that belief, in all its passionate intensity, as it occurs in my mind right now.

I don't want to deny this, exactly. Instead, let me define relatively strict versus permissive conceptions of the targets of introspection.

To warm up, consider a visual analogy: seeing an orange. There the orange is, on the table. You see it. But do you really see the whole orange? Speaking strictly, it might be better to say that you see the orange rind, or the part of the orange rind that is facing you, rather than the whole orange. Arguably, you infer or assume that it's not just an empty rind, that it has a backside, that it has a juicy interior -- and usually that's a safe enough assumption. It's reasonable to just say that you see the orange. In a relatively permissive sense, you see the whole orange; in a relatively strict sense you see only the facing part of the orange rind.

Another example: From my office window I see the fire burning downtown. Of course, I only see the smoke. Even if I were to see the flames, in the strictest sense perhaps the visible light emitted from flames is only a contingent manifestation of the combustion process that truly constitutes a fire. (Consider invisible methanol fires.) More permissively, I see the fire when I see the smoke. More strictly, I need to see the flames or maybe even (impossibly?) the combustion process itself.

Now consider psychological cases: In a relatively permissive sense, you see Sandra's anger. In a stricter sense, you see her scowling face. In a relatively permissive sense, you hear the shyness and social awkwardness in Shivani's voice. In a stricter sense you hear only her words and prosody.

To be clear: I do not mean to imply that a stricter understanding of the targets of perception is more accurate or better than a more permissive understanding. (Indeed, excessive strictness can collapse into absurdity: "No, officer, I didn't see the stop sign. Really, all I saw were patterns of light streaming through my vitreous humour!")

As anger can manifest in a scowl and as fire can manifest in smoke and visible flames, so also can attitudes manifest in conscious experience. The desire for coffee can manifest in a conscious experience that I would describe as an urge to take a sip; my attitude about X's candidacy can manifest in a momentary experience of inner speech. In such cases, we can say that the attitudes present a conscious face. If the conscious experience is distinctive enough to serve as an excellent sign of the real presence of the relevant dispositional structure constituting that attitude, then we can say that the attitude is (occurrently) conscious.

It is important to my view that the conscious face of an attitude is not tantamount to the attitude itself, even if they normally co-occur. If you have the conscious experience but not the underlying suite of relevant dispositions, you do not actually have the attitude. (Let's bracket the question of whether such cases are realistically psychologically possible.) Similarly, a scowl is not anger, smoke is not a fire, a rind is not an orange.

Speaking relatively permissively, then, one can introspect an attitude by introspecting its conscious face, much as I can see a whole orange by seeing the facing part of its rind and I can see a fire by seeing its smoke. I rely upon the fact that the conscious experience wouldn't be there unless the whole dispositional structure were there. If that reliance is justified and the attitude is really there, distinctively manifesting in that conscious experience, then I have successfully introspected it. The exact metaphysical relationship between the strictly conceived target and the permissively conceived target is different among the various cases -- part-whole for the orange, cause-effect for the fire, and disposition-manifestation for the attitude -- but the general strategy is the same.

[image source]

Thursday, October 27, 2016

Dispositionalism vs Representationalism about Belief

The Monday before last, Ned Block and Eric Mandelbaum brought me into their philosophy grad seminar at New York University to talk about belief. Our views are pretty far apart, and I got pushback during class (and before class, and after class!) from a variety of directions. But the issue that stuck with me most was the big-picture issue of dispositionalism vs representationalism about belief.

I'm a dispositionalist. By this I mean that to believe some particular proposition, such as that your daughter is at school, is nothing more or less than to be disposed toward certain patterns of behavior, conscious experience, and cognition, under a range of hypothetical conditions -- for example, to be disposed to go to your daughter's school if you decide you want to meet her, to be disposed to feel surprise should you head home for lunch and find her waiting there, and to be disposed, if the question arises, to infer that her favorite backpack is also probably at the school (since she usually takes it with her). All of these dispositions hold only "ceteris paribus" or "all else being equal" and one needn't have all of them to count as believing. (For more details about my version of dispositionalism in particular, see here.) Crucial to the dispositionalist approach (but not unique to it) is the idea that the implementational details don't matter -- or rather, they matter only derivatively. It doesn't matter if you've got a connectionist net in your head, or representations in the language of thought, or a billion little homunculi whispering in thieves' cant, or an immaterial soul. As long as you have the right clusters of behavioral, experiential, and cognitive dispositions, robustly, across a suitably broad range of hypothetical circumstances, you believe.

On a representationalist view, implementation does matter. On a suitably modest view of what a "representation" is (I like Dretske's account), the human mind uses representations. For example, it's very plausible that neural activity in primary visual cortex is representational, if representations are states of a system that function to track or convey information about something else. (In primary visual cortex, patterns of excitation in groups of neurons function to indicate geometrical features in various parts of the visual field.) The representationalist about belief commits to a general picture of the mind as a manipulator of representations, and then characterizes believing as a matter of having the right sort of representation (e.g., one with the content "my daughter is at school") stored or activated in the right type of functional role in the mind (for example, stored in memory and poised (if all goes well) to be activated in cognitive processing when you are asked, "where is your daughter now?").

I interpreted some of the pushback from Block, Mandelbaum, and their students as follows: "Look, the best cognitive science employs a representational model of the mind. So representations are real. Even you don't deny that. So if you want a truly scientific model of the mind instead of some vague dispositionalism that looks only at the effects or manifestations of real cognitive states, you should be a representationalist."

How is a dispositionalist to reply to this concern? I have three broad responses.

The Implementational Response. The most concessive response (short of saying, "oops, you're right!") is to deny that there is any serious conflict between the two positions by allowing that the way one gets to have the dispositional profile constitutive of belief might be by manipulating representations in just the manner that the representationalist supposes. The views can be happily married! You don't get to have the dispositional profile of a believer unless you already have the right sort of representational architecture underneath; and once you have the right sort of representational architecture underneath, you thereby acquire the relevant dispositional profile. The views only diverge in marginal or hypothetical cases where representational architecture and dispositional profile come apart -- but maybe those cases don't matter too much.

However, I think that answer is too concessive, for a couple of reasons.

The Messiness Response. Here's a too-simple hypothetical representationalist architecture for belief. To believe that P (e.g., that my daughter is at school today) is just to have a representation with the content P ("my daughter is at school today") stored somewhere in the mind, ready to be activated when it becomes relevant whether P is the case (e.g., I'm asked "where is your daughter now?"). One problem with this view is the problem of specifying the exact content. I believe that my daughter is at school today. I also believe that my daughter is at JFK Elementary today. I also believe that my daughter is at JFK Elementary now. I also believe that Kate is at JFK Elementary now. I also believe that Kate is in Ms. Salinas' class today. This list could obviously be expanded considerably. Do I literally have all of these representations stored separately? Or is there only one representation stored, from which the others are swiftly derivable? If so, which one? How could we know? This puzzle invites us to reject the simplistic picture that believing P is a matter of having a stored representation with exactly the content P. But once we make this move, we open ourselves up to a certain kind of implementational messiness -- which is plausible anyway. As we have seen in the two best-developed areas of cognitive science -- the cognitive science of memory and the cognitive science of vision -- the underlying architectural stories tend to be highly complex and tend not to map neatly onto our folk psychological categories. Furthermore, viewed from an appropriately broad temporal perspective, scientific fashions come and go: We have this many memory systems, no we have this many; early visual processing is not much influenced by later processing, wait yes it is influenced, wait no it's not after all. Dynamical systems, connectionist networks, patterns of looping activation can all be understood in terms of language-like representations, or no they can't, or maybe map-like representations or sensorimotor representations are better. Given the messiness and uncertainty of cognitive science, it is premature to commit to a thoroughly representationalist picture. Maybe someday we'll have all this figured out well enough so that we can say "this architectural structure, this one, is what you have if you believe that your daughter is at school, we found it!" That would be exciting! That day, I abandon dispositionalism. Until then, I prefer to think of belief dispositionally rather than relying upon any particular architectural story, even as general an architectural story as representationalism.

The What-We-Care-About Response. Why, as philosophers, do we want an account of belief? Presumably, it's because we care about predicting and explaining our behavior and our patterns of experience. So let's suppose as much divergence as it's reasonable to suppose between patterns of experience and behavior and patterns of internal architecture. Maybe we discover an alien species that has outward behavior and inner experiences virtually identical to our own but implemented very differently in the underlying architecture. Or maybe we can imagine a human being whose actions and experiences, not only in her actual circumstances but also in a wide range of hypothetical circumstances, are just like that of someone who believes that P, but who lacks the usual underlying architecture. On an architecture-driven account, it seems that we have to deny that these aliens or this person believes what they seem to believe; on a dispositional account, we get to say that they do believe what they seem to believe. The latter seems preferable: If what we care about in an account of belief is patterns of behavior and experience, then it makes sense to build an account of belief that prioritizes those patterns of behavior and experience as the primary thing, and treats purely architectural considerations as secondary.

----------------------------------------------

Some related posts and papers:

A Phenomenal, Dispositional Account of Belief (Nous 2002).

Belief (Stanford Encyclopedia of Philosophy, 2006 revised 2015).

Mad Belief? (blog post, Nov. 5, 2008).

A Dispositional Approach to Attitudes: Thinking Outside of the Belief Box (in Nottelmann, ed., New Essays on Belief, 2013).

Against Intellectualism About Belief (blog post, July 31, 2015)

The Pragmatic Metaphysics of Belief (essay in draft, October 2016).

Friday, October 21, 2016

Storytelling in Philosophy Class

One of my regular TAs, Chris McVey, uses a lot of storytelling in his teaching. About once a week, he'll spend ten minutes sharing a personal story from his life, relevant to the class material. He'll talk about a family crisis or about his time in the U.S. Navy, connecting it back to the readings from the class.

At last weekend's meeting of the Minorities And Philosophy group at Princeton, I was thinking about what teaching techniques philosophers might use to appeal to a broader diversity of students, and "storytime with Chris" came to mind. The more I think about it, the more I find to like about it.

Here are some thoughts.

* Students are hungry for stories, and rightly so. Philosophy class is usually abstract and impersonal, or, when not abstract, focused on toy examples or remote issues of public policy. A good story, especially one that is personally meaningful to the teacher, leaps out and captures attention. People in general love stories and are especially ready for them after long dry abstractions and policy discussions. So why not harness that? But furthermore, storytelling gives shape and flesh to the stick figures of philosophical abstraction. Most abstract principles only get their full meaning when we see how they play out in real cases. Kant might say "act on that maxim that you can will to be a universal law" or Mengzi might say "human nature is good" -- but what do such claims really amount to? Students rightly feel at sea unless they are pulled away from toy examples and into the complexity of real life. Although it's tempting to think that the real philosophical force is in the abstract principles and that storytelling is just needless frill and packaging, I think that the reverse might be closer to the truth: The heart of philosophy is in how we engage our minds when given real, messy examples, and the abstractions we derive from cases always partly miss the point.

* Personal stories vividly display the relevance of philosophy. Many -- maybe most -- students are understandably turned off by philosophy because it seems so remote from anything of practical value. What's the point, they wonder, in discussing Locke's view of primary and secondary qualities, or semi-comical far-fetched problems about runaway trolleys, or under what conditions you "know" something is a barn in Fake Barn Country? It takes a certain kind of beautiful, nerdy, impractical mind to love these questions for their own sake. Too much focus on such issues can mislead students into thinking that philosophy is irrelevant to their lives. However (I hope you'll agree), nothing is more relevant to our lives than philosophy. Every choice we make expresses our values. Every controversial opinion we form depends upon our general worldview and our implicit or explicit sense of what people or institutions or methods deserve our trust. Most students will understandably fail to see the connection between academic philosophy and the philosophy they personally live through their choices and opinions unless we vividly show how these are connected. Through storytelling, you model your struggle with Kant's hard line against lying, or with how far to trust purported scientific experts, or with your fading faith in an immaterial soul -- and students can see that philosophy is not just a Glass Bead Game.

* Personal stories shift the locus of academic capital. We might think of "academic capital" as the resources students bring to class which help them succeed. In philosophy class, important capital includes skill at reading and evaluating abstract arguments and, in class discussion, skill at working up passable pro and con arguments on the spot. Academic capital of this sort also includes knowledge of the philosophical tradition, comfort in a classroom environment, confidence that one knows how this game is played. These are terrific skills to have of course; and some students have more of them than others, or at least believe they do. Those students tend to dominate class discussion. If you tell a personally meaningful story, however, you can make a different set of skills and experiences suddenly important. Students who might have had similar stories from their own lives now have something unique to contribute. Students who are good at storytelling, students who have the social and emotional intelligence to evaluate what might have really happened in your family fight, students with cultural knowledge of the kind of situation you describe -- they now have some of the capital. And they might be a very different group from the ones who are so good at the argumentative pro-and-con. In my experience, good philosophical storytelling engages and draws out discussion from a larger and more diverse group of students than does abstract argument and toy example.

If philosophers were more serious about engaged, personal storytelling in class, we would I think have a different and broader range of students who loved our courses and appreciated the importance and interest of our discipline.

[image source]

Tuesday, October 11, 2016

My Daughter's Rented Eyes

(inspired by a conversation with Cory Doctorow about how a kid's high-tech rented eyes might be turned toward favored products in the cereal aisle)

At two million dollars outright, of course I couldn't afford to buy eyes for my four-year-old daughter Eva. So, like everyone else whose kids had been blinded by the GuGuBoo Toy company's defective dolls (may its executives rot in bankruptcy Hell), I rented the eyes. What else could I possibly do?

Unlike some parents, I actually read the Eye & Ear Company's contract. So I knew part of what we were in for. If we didn't make the monthly payments, her eyes would shut off. We agreed to binding arbitration. We agreed to a debt-priority clause, to financial liability for eye extraction, to automatic updates. We agreed that from time to time the saccade patterns of her eyes would be subtly adjusted so that her gaze would linger over advertisements from companies that partnered with Eye & Ear Co. We agreed that in the supermarket, Eva's eyes would be gently maneuvered toward the Froot Loops and the M&Ms.

When the updates came in, we always had the legal right to refuse them. We could, hypothetically, turn off Eva's eyes, then have them surgically removed and returned to Eye & Ear Co. Each new rental contract was thus technically voluntary.

When Eva was seven, the new updater threatened shutoff unless we transferred $1000 into a debit account. Her updated eyes contained new software to detect any copyrighted text or images she might see. Instead of buying copyrighted works in the usual way, we agreed to have a small fee deducted from our account for each work Eva viewed. Instead of paying $4.99 for the digital copy of a Dr Seuss book, Eye & Ear would deduct $0.50 each time she read the book. Video games might be free with ads, or $0.05 per play, or $0.10, or even $1.00. Since our finances were tight, we set up parental controls: Eva's eyes required parental permission for any charge over $0.99 or any cumulative charges over $5.00 in a day -- and of course they also blocked any "adult" material. Until we granted approval, blocked or unpaid material was blurred and indecipherable, even if she was just peeking over someone's shoulder at a book or walking past a television in a dentist's lobby.

When Eva was ten, the updater overlaid advertisements in her visual field. It helped keep the rental costs down. (We could have bought out of the ads for an extra $6,000 a year.) The ads never interfered much with Eva's vision -- they just kind of scrolled across the top of her visual field sometimes, Eva told us, or printed themselves onto clouds and the sides of buildings.

By the time Eva was thirteen, I'd finally risen to a managerial position at work, and we could afford the new luxury eyes for her. By adjusting the settings, Eva could see infrared at night. She could zoom in on distant objects. She could bug out her eyes and point them in different directions like some sort of weird animal, to take in a broader field of view. She could also take snapshots and later retrieve them with a subvocalization -- which gave her a great advantage at school over her normal-eyed and cheaper-eyed peers. Installed software could text-search through stored snapshots, solve mathematical equations, and pull relevant information from the internet. When teachers tried to ban such enhancements from the classroom, Eye & Ear fought back, arguing that the technology had become so integral to the children's lives that it couldn't be removed without disabling them. Eye & Ear refused to develop the technology to turn off the enhancement features, and no teacher could realistically prevent a kid from blinking and subvocalizing.

By the time Eva was seventeen it looked like she and the two other kids at her high school with luxury eye rentals would more or less have their choice among elite universities. I refused to believe the rumors about parents intentionally blinding their children so that they too could rent eyes.

When Eva turned twenty, all the updates -- not just the cheap ones -- required that you accept the "acceleration" technology. Companies contracted with Eye & Ear to privilege their messages and materials for faster visual processing. Pepsi paid a hundred million dollars a year so that users' eyes would prioritize resolving Pepsi cans and Pepsi symbols in the visual scene. Coca Cola cans and symbols were "deprioritized" and stayed blurry unless you focused on them for a few seconds. Loading stored images worked similarly. A remembered scene with a Pepsi bottle in it would load almost instantly. One with a Coke bottle would take longer and might start out fuzzy or fragmented.

Eye & Ear started to make glasses for the rest of us, which imitated some of the functions of the implants. Of course they were incredibly useful. Who wouldn't want to take snapshots, see in the dark, zoom into the distance, get internet search and tagging? We all rented whatever versions we could afford, signed the annual terms and conditions, received the updates. We wore them pretty much all day, even in the shower. The glasses beeped alarmingly whenever you took them off, unless you went through a complex shutdown sequence.

When the "Johnson for President" campaign bought an acceleration, the issue went all the way to the Supreme Court. Johnson's campaign had paid Eye & Ear to prioritize the perception of his face and deprioritize the perception of his opponent's face, prioritize the visual resolution and recall of his ads, deprioritize the resolution and recall of his opponent's ads. Eva was now a high-powered lawyer in a New York firm, on the fast track toward partner. She worked for the Johnson campaign, though I wouldn't have thought it was like her. Johnson was so authoritarian, shrill, and right-wing -- or at least it seemed so to me, when I took my glasses off.

Johnson favored immigration restrictions, and his opponent claimed (but never proved) that Eye & Ear implemented an algorithm that highlighted people's differences in skin tone -- making the lights a little lighter, the darks a little darker, the East Asians a bit yellow. Johnson won narrowly, before his opponent's suit about the acceleration had made it through the appeals process. It didn't hit the high court until a month after Johnson's inauguration. Eva helped prepare Johnson's defense. Eight of the nine justices were over eighty years old. They lived stretched lives with enhanced longevity and of course all the best implants. They heard the case through the very best ears.

---------------------------------

Related post:

Possible Cognitive and Cultural Effects of Video Lifelogging (Apr 21, 2016)

---------------------------------

Image source:

Photo: HAL 9000 resurrected by Ram Yoga, doubled.

Tuesday, October 04, 2016

French, German, Greek, Latin, but Not Arabic, Chinese, or Sanskrit?

[cross-posted at the Blog of the APA]

When I was a graduate student at Berkeley in the 1990s, philosophy PhD students were required to pass exams in two of the following four languages: French, German, Greek, or Latin. I already knew German. I argued that Spanish should count (I had read Unamuno in the original as an undergrad), but my petition was denied since I didn’t plan to do further work in Spanish. I argued that a psychological methods course would be more useful than a second foreign language, given that my dissertation was in philosophy of psychology, but that was not treated as a serious suggestion. I'd learned some classical Chinese, but I thought it would be pointless to attempt 600 characters in two hours as required (much more daunting than 600 words in a European language). So I crammed French for a few weeks and passed the exam.

I have recently become interested in mainstream Anglophone philosophers’ tendency to privilege certain languages and traditions in the history of philosophy. If we think globally, considering large, robust traditions of written work treating recognizably philosophical topics with argumentative sophistication and scholarly detail, it seems clear that at least Arabic, classical Chinese, and Sanskrit merit inclusion alongside French, German, Greek, and Latin as languages of major philosophical importance.

The exclusion of Arabic, Chinese, and Sanskrit from Berkeley’s standard language requirements could not, I think, have been mere ignorance. Rather, the focus on French, German, Greek, and Latin appeared to express a value judgment: that these four languages are more central to philosophy as it ought to be studied.

The language requirements of philosophy PhD programs have loosened over the years, but French, German, Latin, and Greek still form the core language requirements in departments that have language requirements. Students therefore continue to receive the message that these languages are the most important ones for philosophers to know.

I examined the language requirements of a sample of PhD programs in the United States. Because of their sociological importance in the discipline, I started with the top twelve ranked programs in the Philosophical Gourmet Report. I then expanded the sample by considering a group of strong PhD programs that are not as sociologically central to the discipline – the programs ranked 40-50 in the U.S.

Among the top twelve programs (corrections welcome):

* Four appeared to have no foreign language requirement (Michigan, NYU, Rutgers, Stanford).

* Seven (Berkeley, Columbia, Harvard, Pitt, UCLA, USC, Yale) had some version of a language requirement, requiring one of French, German, Greek or Latin -- always exactly that list. Some programs explicitly allowed another language and/or another relevant research skill by petition or consultation.

* Only Princeton had a language requirement that did not appear to privilege French, German, Greek, and Latin. Princeton only requires a language “relevant to the student’s proposed course of study” (or alternatively “a unit of advanced work in another department” or “completion of an additional unit of work in any area of philosophy”).

You might think that, practically speaking, Arabic or classical Chinese would be a fine language to choose. Students can always petition; maybe such petitions are almost always granted. This response, however, ignores the fact that something is communicated by other languages’ non-inclusion on the privileged list. For a tendentious comparison – maybe too tendentious! – consider an admissions form that said “we admit men, but also women by petition”. One thing is treated as a norm and the other as an exception.

Interestingly, the PhD programs ranked less highly by the Philosophical Gourmet had more relaxed language requirements overall. In the 40-50 group, only two of the eleven mentioned a language requirement or list of languages. Still, the privileged languages were from the same set: “French, German, or other” at Saint Louis University, and optional certification in French, German, Greek, or Latin at Rochester.

I do not believe that we should be sending students the message that French, German, Greek, and Latin are more important than other languages in which there is a body of interesting philosophical work. It is too Eurocentric a vision of the history of philosophy. Let’s change this.

------------------------------------

Related Op-Eds:

What’s missing in college philosophy classes? Chinese philosophers (Schwitzgebel, Los Angeles Times, Sep 11, 2015)

If philosophy won’t diversify, let’s call it what it really is (Garfield and Van Norden, New York Times, May 11, 2016)

And on the opposite side:

Not all things wise and good are philosophy (Tampio, Aeon, Sep 13, 2016)

The image is, of course, from the Epic Rap Battle, Eastern vs Western Philosophers!

Wednesday, September 28, 2016

New Essay in Draft: The Pragmatic Metaphysics of Belief

Available here.

As always, comments and criticisms welcome, either by email to my address or in the comments section on this post.

Abstract:

Suppose someone intellectually assents to a proposition but fails to act and react generally as though that proposition is true. Does she believe the proposition? Intellectualist approaches will say she does believe it. They align belief with sincere, reflective judgment, downplaying the importance of habitual, spontaneous reaction and unreflective assumption. Broad-based approaches, which do not privilege the intellectual and reflective over the spontaneous and habitual in matters of belief, will refrain from ascribing belief or treat it as an intermediate case. Both views are viable, so it is open to us to choose which view to prefer on pragmatic grounds. I argue that since “belief” is a term of central importance in philosophy of mind, philosophy of action, and epistemology, we should use it to label the most important phenomenon in the vicinity that can plausibly answer to it. The most important phenomenon in the vicinity is not our patterns of intellectual endorsement but rather our overall lived patterns of action and reaction. Too intellectualist a view risks hiding the importance of lived behavior, especially when that behavior does not match our ideals and self-conception, inviting us into noxiously comfortable views of ourselves.

The Pragmatic Metaphysics of Belief (in draft)

(I'll be giving a version of this paper as a talk at USC on Friday, by the way.)

Related Posts:

On Being Blameworthy for Unwelcome Thoughts, Reactions, and Biases (Mar 19, 2015)

Against Intellectualism about Belief (Jul 31, 2015)

Pragmatic Metaphysics (Feb 11, 2016)

Monday, September 26, 2016

Cory Doctorow Speaking at UC Riverside: "1998 Called, and It Wants Its Stupid Internet Laws Back"

Come one, come all! (Well, for certain smallish values of "all".)

Cory Doctorow

"1998 Called, and It Wants Its Stupid Internet Laws Back"

Wednesday, September 28, 2016
3:10-5:00
INTS 1113

The topic will be digital rights management and companies' increasing tendency not to give us full control over the devices that matter to us, so that the devices can "legitimately" (?) thwart us when we give them orders contrary to the manufacturers' interests.

The Jerk Quiz: New York City Edition

Now that my Jerk Quiz has been picked up by The Sun and The Daily Mail, I've finally hit the big time! I'm definitely listing these as "reprints" on my c.v.

Philosopher James DiGiovanna suggested to me that the existing Jerk Quiz might not be valid in New York City, so I suggested he draw up a NYC version. Here's the result!

New York City Jerk Test

by James DiGiovanna

1. You have a fifteen-minute break from work, a desperate need for a cigarette, and a seven-minute-each-way walk to the bank on a very crowded sidewalk. Do you:
(a) Calmly walk the 14-minute round-trip handling the cigarette cravings by reminding yourself that you only have a scant 7 more hours of work, a 49-minute commute on the crowded and probably non-functional F train, and then a brief walk through throngs of NYU students before you can reach your undersized apartment for a pleasant 4 minutes of smoking.
(b) Curse the existence of each probably mindless drone who stands between you and your goal.
(c) Find a narrow space just off the main thoroughfare and enjoy 5 quick drags meant to burn your entire cigarette down to the filter in under 30 seconds.
(d) Light up a cigarette as you walk, unconsciously assuming that others can dodge the flaming end and/or enjoy the smoking effluvia as they see fit, if indeed they have minds that can see anything at all.

2. You are waiting at the bodega to buy one measly cup of coffee, one of the few pleasures allowed to you in a world where the last tree is dying somewhere in what was probably a forest before Reagan was elected. However, there is a long line, including someone directly in front of you who is preparing to write a check in spite of the fact that this is the 21st century. You accidentally step on this person’s toe, causing him or her to move to the side yelping in pain. Do you:
(a) Apologize profusely.
(b) Offer the standard, “pardon me!” while wondering why check-writers were allowed to reproduce and create check-writing offspring at this late point in history.
(c) Say nothing, holding your precious place in line against the unhygienic swarm of lower lifeforms.
(d) Consider this foe vanquished and proceed to take his or her place as you march relentlessly towards the cashier.

3. You are in hell (midtown near Times Square) where an Eastern Hemisphere tourist unknowingly drops a wallet, and an elderly woman wanders out in front of a runaway hot dog stand, risking severe cholesterol and death. Do you:
(a) Shout to the Foreign Person while rushing to rescue the elderly woman.
(b) Ignore the neocolonialist tourist and his or her justifiable loss of money earned by exploiting the third world and attempt to save the woman because, my God, that could be you and/or your non-gender-specific life partner someday.
(c) Continue on your way because you have things to do.
(d) Yell so that others will see that there is a woman about to be hotdog-carted, assuming this will distract the crowd from the dropped wallet, making it easier for you to take it and run.

4. You have been waiting for the A train for 300 New York Minutes (i.e., five minutes in flyover-state time). Finally, it arrives, far too crowded to accept even a single additional passenger. Do you:
(a) Step out of the way so others can exit, and allow those on the platform in front of you to enter the train, and then, if and only if there is ample room to enter without compressing other persons, do you board the train.
(b) Wait calmly, because when this happens, 9 times out of 10 an empty train is 1 minute behind.
(c) Mindlessly join the throngs of demi-humans desperately hoping to push their way into the car.
(d) Slide along the outside of the car to the spot just adjacent to the door, then slip into the narrow space made when a person who clearly intends to get back in the car steps off to let a disembarking passenger pass.

5. It is a typical winter day in New York, meaning at the end of each sidewalk is a semi-frozen slush puddle of indeterminate depth. Perhaps it is barely deep enough to wet your boots, perhaps it drains directly into a C.H.U.D. settlement. You see a family, the father carrying a map and wearing a fanny pack, the mother holding a guide that says “Fodors New York för Nordmen,” the blindingly white children staring for the first time at buildings that are not part of a system of social welfare and frost. They absently march towards the end of the sidewalk, eyes raised towards New York’s imposing architecture, about to step into what could be their final ice bath. Do you:
(a) Yell at them to stop while you check the depth of the puddle for them.
(b) Block their passage and point to a shallower point of egress.
(c) Watch in amusement as they test the puddle depth for you.
(d) Push them into the puddle and use their frozen bodies as a bridge to freedom.

-----------------------------------------

(I interpret James's quiz as a commentary on how difficult it is, even for characterological non-jerks, to avoid jerk-like behaviors or thoughts in that kind of urban context.)

For more on Jerks see:

A Theory of Jerks

How to Tell If You're a Jerk

Friday, September 23, 2016

Call for Abstracts: Workshop in Graz on Dissonance and Implicit Bias

I'll be presenting at the following workshop. There's a call for abstracts. Submit something and let's chat!

4th Fragmentation Workshop: Dissonance and Implicit Bias

Graz, 25-26 May 2017

The 4th Fragmentation Workshop: Dissonance and Implicit Bias is organized by the research project The Fragmented Mind and will take place at the University of Graz, Austria, on May 25-26, 2017. We welcome submissions of anonymized abstracts of 500–1000 words, for 45-minute presentations on any of the workshop topics (see below), submitted by December 15, 2016 to fragmentationprojectgraz@gmail.com.

Keynote speakers:
• Eric Schwitzgebel (UC Riverside)
• Jules Holroyd (Sheffield)

The psychological underpinnings of assertion-behavior dissonance and implicit bias are highly disputed. Under some interpretations, these phenomena are special cases of belief fragmentation: a single agent has various separate systems of belief, which need not make for a consistent and deductively closed overall system. Under other interpretations, dissonance does not represent a state of fragmentation, nor does implicit bias involve the presence of conflicting beliefs.

The objective of this workshop is to explore the adequacy and limitations of the notion of fragmentation (as advanced, for example, by Davidson, Lewis, Stalnaker, and Rayo), when applied to cases of dissonance and implicit bias.

(Non-exhaustive) list of topics:
• What are the psychological underpinnings of assertion-behavior dissonance?
• What are the psychological underpinnings of implicit bias?
• Are dissonance and implicit bias overlapping phenomena?
• Does fragmentation help explain cases of assertion-behavior dissonance?
• Does fragmentation help explain implicit bias?

Submission format:
Submissions of anonymous abstracts of 500-1000 words (excluding bibliography), prepared for anonymous peer-review, should be sent to fragmentationproject@gmail.com by December 15, 2016. Abstracts should be submitted in pdf format, in English.

Authors will be notified of decisions by January 31, 2017. Please indicate the title of your paper in your email.

Some support for travel and accommodation might be available.

Organizers: Cristina Borgoni, Dirk Kindermann, Andrea Onofri

Contact: fragmentationprojectgraz@gmail.com

Thursday, September 22, 2016

The Jerk Quiz

Take this simple quiz to figure out if you're a jerk!

(George Musser and the folks at Nautilus thought it would be fun to have a quiz alongside my essay "How To Tell If You're a Jerk", but we didn't quite pull it off before release of the issue.)

The Jerk Quiz

1. You're waiting in a line at the pharmacy. What are you thinking?
(a) Did I forget anything on my shopping list?
(b) Should I get ibuprofen or acetaminophen? I never can keep them straight.
(c) Oh no, I'm so sorry, I didn’t mean to bump you.
(d) These people are so damned incompetent! Why do I have to waste my time with these fools?

2. At the staff meeting, Peter says that your proposal probably won't work. You think:
(a) Hm, good point but I bet I could fix that.
(b) Oh, Loretta is smiling at Peter again. I guess she agrees with him and not me, darn it. But I still think my proposal is probably better than his.
(c) Shoot, Peter's right. I should have thought of that!
(d) Peter the big flaming ass. He's playing for the raise. And all the other idiots here are just eating it up!

3. You see a thirty-year-old guy walking down the street with steampunk goggles, pink hair, dirty sneakers, and badly applied red lipstick. You think:
(a) Different strokes for different folks!
(b) Hey, is that a new donut shop on the corner?
(c) I wish I were that brave. I bet he knows how to have fun.
(d) Get a job already. And at least learn how to apply the frickin lipstick.

4. At a stop sign, a pedestrian is crossing slowly in front of your car. You think:
(a) Wow, this tune on my radio has a fun little beat!
(b) My boss will have my hide if I'm late again. Why did I hit snooze three times?
(c) She looks like she's seen a few hard knocks. I bet she has a story or two to tell.
(d) Can't this bozo walk any faster? What a lazy slob!

5. The server at the restaurant forgets that you ordered the hamburger with chili. There's the burger on the table before you, with no chili. You think:
(a) Whatever. I'll get the chili next time. Fewer calories anyway.
(b) Shoot, no chili. I really love chili on a burger! Argh, let's get this fixed. I'm hungry!
(c) Wow, how crowded this place is. She looks totally slammed. I'll try to catch her to fix the order next time she swings by.
(d) You know, there's a reason that people like her are stuck in loser jobs like this. If I was running this place I'd fire her so fast you'd hear the sonic boom two miles down the street.

How many times did you answer (d)?

0: Sorry, I don't believe you.

1-2: Yeah, fair enough. Same with the rest of us.

3-4: Ouch. Is this really how you see things most of the time? I hope you're just being too hard on yourself.

5: Yes, you are being too hard on yourself. Either that, or please step forward for the true-blue jerk gold medal!

(As my scoring system suggests, this quiz is for entertainment and illustration purposes only. I don't take it seriously as a diagnostic measure!)

Wednesday, September 21, 2016

How to Tell If You're a Jerk

[excerpted from my new essay in Nautilus]

Here’s something you probably didn’t do this morning: Look in the mirror and ask, am I a jerk?

It seems like a reasonable question. There are, presumably, genuine jerks in the world. And many of those jerks, presumably, have a pretty high moral opinion of themselves, or at least a moderate opinion of themselves. They don’t think of themselves as jerks, because jerk self-knowledge is hard to come by.

Psychologist Simine Vazire at the University of California, Davis argues that we tend to have good self-knowledge of our own traits when those traits are both evaluatively neutral (in the sense that it’s not especially good or bad to have those traits), and straightforwardly observable.

For example, people tend to know whether they are talkative. It’s more or less okay to be talkative and more or less okay to be quiet, and in any case your degree of talkativeness is pretty much out there for everyone to see. Self-ratings of talkativeness tend to correlate fairly well with peer ratings and objective measures. Creativity, on the other hand, is a much more evaluatively loaded trait—who doesn’t want to think of themselves as creative?—and much less straightforward to assess. In keeping with Vazire’s model, we find poor correlations among self-ratings, peer ratings, and psychologists’ attempts at objective measures of creativity.

The question “am I really, truly a self-important jerk?” is highly evaluatively loaded, so you will be highly motivated to reach a favored answer: “No, of course not!” Being a jerk is also not straightforwardly observable, so you will have plenty of room to reinterpret evidence to suit: “Sure, maybe I was a little grumpy with that cashier, but she deserved it for forgetting to put my double shot in a tall cup.”

Academically intelligent people, by the way, aren’t immune to motivated reasoning. On the contrary, recent research by Dan M. Kahan of Yale University suggests that reflective and educated people might be especially skilled at rationalizing their preexisting beliefs—for example, interpreting complicated evidence about gun control in a manner that fits their political preferences.

I suspect there is a zero correlation between people’s self-opinion about their degree of jerkitude and their true overall degree of jerkitude. Some recalcitrant jerks might recognize that they are so, but others might think themselves quite dandy. Some genuine sweethearts might fully recognize how sweet they are, while others might have far too low an opinion of their own moral character.

There’s another obstacle to jerk self-knowledge, too: We don’t have a good understanding of the essence of jerkitude—not yet, at least. There is no official scientific designation that matches the full range of ordinary application of the term “jerk” to the guy who rudely cuts you off in line, the teacher who casually humiliates the students, and the co-worker who turns every staff meeting into a battle.

The scientifically recognized personality categories closest to “jerk” are the “dark triad” of narcissism, Machiavellianism, and psychopathic personality. Narcissists regard themselves as more important than the people around them, which jerks also implicitly or explicitly do. And yet narcissism is not quite jerkitude, since it also involves a desire to be the center of attention, a desire that jerks don’t always have. Machiavellian personalities tend to treat people as tools they can exploit for their own ends, which jerks also do. And yet this too is not quite jerkitude, since Machiavellianism involves self-conscious cynicism, while jerks can often be ignorant of their self-serving tendencies. People with psychopathic personalities are selfish and callous, as is the jerk, but they also incline toward impulsive risk-taking, while jerks can be calculating and risk-averse.

Another related concept is the concept of the asshole, as explored recently by the philosopher Aaron James of the University of California, Irvine. On James’s theory, assholes are people who allow themselves to enjoy special advantages over others out of an entrenched sense of entitlement. Although this is closely related to jerkitude, again it’s not quite the same thing. One can be a jerk through arrogant and insulting behavior even if one helps oneself to no special advantages.

Given the many roadblocks standing in the way, what is a potential jerk interested in self-evaluation to do?

Find out what to do by continuing here.

-------------------------------

Coming soon: The Jerk Quiz!

Related essay: A Theory of Jerks (Aeon Magazine, Jun 4, 2014).

[image source: Paul Sableman, creative commons]

Tuesday, September 13, 2016

Momentary Man

Momentary Man has all the moral virtues. He is a man of exceptional character! He is courageous, kind, fair, open-minded, creative, honest, generous, wise, sympathetic, a good listener. He is gently self-deprecating, witty, a pleasure to be around. He has an egalitarian spirit, free of racist, sexist, classist, and ableist inclinations; he is ready to see and appreciate people for who they are in all their wondrous individuality.

He exists for exactly two seconds.

He was created by a magical act of God, or as a briefly existing future artificial intelligence, or through freak quantum accident. He thinks to himself, "Wow, it's great to be alive!" and then, as suddenly as he came into existence, he is swallowed by the void.

What is it to be courageous, after all? Arguably, it's not a matter of actually doing courageous things all the time but rather a matter of being disposed or ready to do courageous things, if the situation calls for it. If danger presented itself, the courageous person would be undaunted, take the risk, face down her fears. To be courageous is not to always be acting courageously; rather it is to be prepared to act courageously if necessary. Of course we all have sufficiently complex lives that courageous action is sometimes required, and then our courage (or lack thereof) manifests itself. But the trait of being courageous, or not, was there or not there in the background of our personalities all along.

Arguably, kindness too, and open-mindedness, and all the rest, are dispositional traits. Virtues concern how you would tend in general to act in the relevant range of circumstances. If so, then Momentary Man can have all the traits I've ascribed to him, even if no situation ever arises in his life that draws out the associated actions.

Two questions:

First, does this merely dispositional approach to virtue seem right? Or do virtuous personality traits require actual manifestation in concrete action to be present in someone? Part of me wants to insist that some concrete action is required for the genuine presence of virtue. One cannot be an extravert on a desert island, no matter how much one would be the life of the party if only there were a party. Momentary Man has no virtues. But then "dispositional" approaches to personality (of the sort I favor) require clarification or modification.

A different part of me wants to say no, Momentary Man does have all these virtues; it's just a shame he cannot exist longer to manifest them.

Second, suppose that Momentary Man does indeed have all these virtues. Is the universe better for Momentary Man's having briefly existed? Is there some intrinsic value in the presence of even unexercised virtue? Or would the world have been just as good without him, or with a vicious version of him (cruel, obnoxious, greedy) who had the same two seconds of conscious experience before blinking out?

Here my inclination is to think the world is richer and better for Momentary Man's having existed. There's something wonderful about his configuration, his potentiality, even if none of his virtues are ever exercised. And if he is brief, well, so are we all.

[Cropped image from image source]

Friday, September 09, 2016

Whirlwind Tour of New York City in October

Since I'm on vacation -- um, I mean sabbatical -- this term, I'm planning to relax by going to New York in October.

  • Oct 13, Columbia University, Society for Comparative Philosophy: "Death and Self in the Incomprehensible Zhuangzi"
  • Oct 14-15, New York University, Conference on the Ethics of Artificial Intelligence: "The Rights of Artificial Intelligences" (with Mara Garza)
  • Oct 16, Princeton University, Minorities And Philosophy mini-conference: "Encouraging Diversity in the Philosophy Classroom"
  • Oct 17-18, New York University and City University of New York guest visits to classes, one on belief and one on science fiction and philosophy

Maybe I can see some of you at one or more of these events. The Comparative Philosophy and Ethics of AI events are open to the public; probably also the Princeton MAP conference (though check); presumably the classes are not open. The Ethics of AI conference promises to be fairly large, with advance registration if you pay (free registration if you're willing to risk not getting a seat).

Yes, talking about Chinese philosophy, robot rights, implicit bias in the classroom, science fiction, and the nature of mental states is what I want to do on my vacation.

(Where I'm not going for my vacation.)

Also this fall:

  • Sep 16, Florida State: "The Moral Behavior of Ethics Professors"
  • Sep 30, USC: "The Pragmatic Metaphysics of Belief"
  • Oct 25, UCLA, Marschak Colloquium: "The Rights of Artificial Intelligences" (with Mara Garza)
  • Nov 2, Occidental College: "Death and Self in the Incomprehensible Zhuangzi"

[image source]

Thursday, September 08, 2016

How Often Do Mainstream Anglophone Philosophers Cite Non-Anglophone Sources?

Spoiler Alert: Not much!

I estimate that 97% of citations in the most prestigious English-language philosophy journals are to works originally written in English. In other words, the entire history of philosophy not written in English (Plato, Confucius, Ibn Rushd, Descartes, Wang Yangming, Kant, Frege, Wittgenstein, Foucault, etc., on into the 21st century) is referenced in only 3% of the citations in leading Anglophone philosophy journals.

Let me walk you through the process by which I came to these numbers, then give you some breakdowns.

I examined the latest available issue of twelve highly regarded Anglophone philosophy journals (the top 12 from Brian Leiter's 2013 poll results). [Note 1] From each issue, I analyzed only the main research articles in that issue (not reviews, discussion notes, comments, symposia, etc.). This generated a target list of 93 articles -- hopefully enough to constitute a fair representation of citation practices.

I then downloaded the reference section of each of those 93 articles, or, for articles with footnotes instead of a reference section, hand-coded the footnotes. I included only actual references to specific works. For example, the word "Kantian" would not qualify as a reference to Kant unless a specific work of Kant's was cited. For each cited work I noted its original publication year and original publication language. [Note 2]

This generated a list of 3566 total citations to examine.

Of the 3566 citations included in my analysis, only 90 (3%) were citations of works not originally written in English. Sixty-eight of the 93 analyzed articles (73%) cited no works that had not originally been written in English. Eleven (12%) cited exactly one non-English work, either in its original language or in English translation. Fourteen (15%) cited at least two works originally published in a language other than English. The only source languages other than English were ancient Greek, Latin, German, French, and Italian. No African, Arabic, Chinese, Indian, or Spanish-language works were cited in this sample.

Sometime after World War Two, English became the common language of most scholarship intended for an international audience, even when the writer's native language is not English. English-language articles citing only recent sources might therefore be expected to cite almost exclusively English-language sources. With this idea in mind, I divided the data into four time periods: ancient through 1849, 1850-1945, 1946-1999, and 2000-present.

The breakdown:

  • Ancient through 1849: 51/63 (81%) non-English
  • 1850-1945: 30/91 (33%) non-English
  • 1946-1999: 8/1236 (1%) non-English
  • 2000-present: 1/2166 (0%) non-English

Obviously, there's a huge skew toward more recent work -- but even in the 1850-1945 category, two-thirds of the citations in this sample are to works originally written in English.
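
For the programmatically inclined, here is a minimal sketch, in Python, of the tallying arithmetic behind these percentages. The data structure and the two sample records are hypothetical stand-ins for my hand-coded spreadsheet, not the real thing:

```python
# Toy version of the bookkeeping behind the breakdown above. Each citation
# is coded as (original publication year, original language); the two
# sample records below stand in for the 3566 hand-coded entries.

citations = [
    (1781, "German"),   # e.g., a citation of Kant's first Critique
    (2012, "English"),  # e.g., a recent journal article
]

# The four time periods used in the breakdown.
periods = [
    ("Ancient through 1849", None, 1849),
    ("1850-1945", 1850, 1945),
    ("1946-1999", 1946, 1999),
    ("2000-present", 2000, None),
]

for label, start, end in periods:
    langs = [
        lang for year, lang in citations
        if (start is None or year >= start) and (end is None or year <= end)
    ]
    non_english = sum(1 for lang in langs if lang != "English")
    if langs:  # skip periods with no citations in the toy sample
        pct = round(100 * non_english / len(langs))
        print(f"{label}: {non_english}/{len(langs)} ({pct}%) non-English")
```

Run on the full set of hand-coded records instead of the two toy entries, the same loop would reproduce the four lines of the breakdown above.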

In my own writing, I also cite mostly English-language works. It's the tradition I operate in, and although I have some reading practice in French, Spanish, German, and classical Chinese, untranslated works are always a struggle. I don't intend to be too judgmental or blaming. But it does seem likely that the Anglophone philosophical tradition would benefit from more engagement with works not originally written in English.

----------------------------------

Note 1: The journals were: Philosophical Review, Journal of Philosophy, Noûs, Mind, Philosophy & Phenomenological Research, Ethics, Philosophical Studies, Australasian Journal of Philosophy, Philosophers' Imprint, Analysis, Philosophical Quarterly, and Philosophy & Public Affairs. This list has surface plausibility as a list of the best-regarded journals in mainstream Anglophone philosophy. Philosophers' Imprint publishes rarely and sporadically, so I just used all of 2016 up to Sep 7.

Note 2: In some cases only the date of a recent edition was listed. In these cases I estimated publication year based on my knowledge of the history of philosophy. In some cases, only the English-language title was given -- and again I estimated the original language based on my knowledge of the history of philosophy. It is possible that I misclassified a few works in this way. However, for the estimate to rise to 3.5%, I would have to have misclassified 35 non-English works as English (3.5% of 3566 is about 125 citations, 35 more than the 90 I counted), which I believe is unlikely. (By the way, for these purposes, Web of Science is full of relevant mistakes. This more labor-intensive approach yields much cleaner results.)

----------------------------------

Related Posts:

SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (Aug 14, 2014)

The Ghettoization of Nietzsche (Aug 23, 2012)

----------------------------------

[image source]

Tuesday, August 30, 2016

Direction and Misdirection in the First Sentence of a Story

Story writers love first sentences. Probably more time goes into crafting first sentences than any other sentence, even the last. Already in the first few words the author is conveying tone, style, and mood, and usually also making a start on character, setting, and theme. That's a lot to do! The reader is already absorbing all of these things. Of course one must start on the right path.

What about plot? You might think plot is the one major story element the first sentence doesn't need to establish. Plot is necessarily spread across the story -- a matter of how things change away from what is established in the first sentence. Except in unusual cases, you might think, the outcome of the story isn't already there to be seen in the first sentence.

Aliette de Bodard, Ann Leckie, Cati Porter, Rachel Swirsky, and I decided to try an experiment: Guess the plots of five new stories based on the first sentence alone. We chose the stories from July’s issue of Lightspeed Magazine. Although it seems impossible to fully guess the plot from the first sentence (else where would the suspense be?), to the extent the first sentence already sets up the plot, our guesses might not be entirely off target.

Here are the five stories and their first sentences:

1. "Magnifica Angelica Superable" by Rochita Loenen-Ruiz:

A woman from the street came in laughing from the cold.

2. "The One Who Isn't" by Ted Kosmatka:

It starts with light.

3. "Some Pebbles in the Palm" by Kenneth Schneyer:

Once upon a time, there was a man who was born, who lived, and who died.

4. "5x5" by Jilly Dreadful:

Sugarloaf Fine Sciences Summer Camp
Bunk Note: Cabin Lamarr
07.12

Dear Scully,
I should've been suspicious of the girl in the lab coat offering me psychic ice cream.

5. "The Child Support of Cromdor the Condemned" by Spencer Ellsworth:

Cromdor the Caldernian, thrice-condemned, (I've forgotten the rest, but believe you me, there is thrice more) had nearly finished his tale when the traveler slipped in.

The details of our guesses are here, here, here, here, and here.

At the end of the exercise, I was struck by three main things about these first sentences.

First: As expected, all the first sentences do set up a tone, style, and mood (1 is spunky, 2 is serious and straight, 3 is metafictional and preachy, 4 and 5 are lighthearted and funny). Character, setting, and theme are also off to a clear start in most. In 1, 4, and 5, Angelica, the girl in the lab coat, and Cromdor are starting to take shape. In 2, 4, and 5, we begin to see the lit but undefined space, the summer camp, the epic fantasy world. Sentences 1, 2, and 3 open the themes of responding to adversity, of beginnings, of life cycles. Most have hooks: The "psychic ice cream" of 4 is a great tease. In 5, the author has so efficiently sketched setting and character that already by the end of the first sentence, I'm wondering how the traveler will disrupt things. In 3, I'm intrigued by the strange abstractness.

Second: Somewhat to my surprise, we actually weren't too bad at guessing plot. That doesn't mean we were good, but usually at least one of the five of us had already guessed something of the arc of the story. Story 2 would be a creation story with metaphysical themes and a dark ending. Story 4 would focus on the deepening relationship of the narrator and Scully. Story 5 would be about an old warrior's past family catching up with him.

Third: Most of these stories also have a bit of misdirection in the first sentence. This is clearest in Story 4: We all thought the "psychic ice cream" would be important to the plot -- but it wasn't. It launches us, and it works great for hook, tone, character, and setting, but the plot doesn't turn on it. In Story 5, we thought the three condemnations of Cromdor would be important, but again, though it helps efficiently set character, setting, and tone, it doesn't matter to plot in the way we had guessed. In Story 1, where Angelica came in from doesn't matter as much as we'd thought.

So I wonder about this misdirection. Is it a feature or a bug?

I want to say feature. These are good stories, in a top magazine. I liked them all, and they all stood up to close rereading. Why would the author misdirect us? Maybe it prevents us from too fully guessing what's coming, keeping the surprise, keeping us on our toes. Maybe it also makes the worlds richer, suggesting elements unmentioned or unexplored, pointing outside the frame of a story that might otherwise be too tidy.

So now I'm curious -- do famous, classic SF stories also tend to have this oblique entry or partial misdirection in the first sentence? I've arbitrarily chosen four personal favorites that are widely celebrated. Before this exercise, and up to this point in writing this post, I had no memory of exactly how these four stories began. As of now, the outcome is as much a mystery to me as to you.

So... here goes:

"Flowers for Algernon" by Daniel Keyes:

Progris riport 1 -- martch 5 1965

Mr. Strauss says I shud rite down what I think and evrey thing that happins to me from now on.

"The Ones Who Walk Away from Omelas" by Ursula K. Le Guin:

With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.

"The Paper Menagerie" by Ken Liu:

One of my earliest memories starts with me sobbing.

"Bloodchild" by Octavia Butler:

My last night of childhood began with a visit home.

Hm! I'd say almost no misdirection in these first sentences. "Flowers for Algernon" is of course the tragic story of a psychological experiment in which a man with low IQ is given an intelligence enhancement. "Omelas" is the story of people refusing to live in a beautiful city built on a terrible crime. "Paper Menagerie" is the sad story of a boy alienated from his mother's culture, failing to appreciate her magic. "Bloodchild" is about a parasitic alien species using human boys as hosts. The first sentences of these famous stories are tightly focused on theme, setting, and entry into the plot, taking us right there without misdirection.

The sample is too small. I am going to have to read more, with this issue in mind. Of course, there is more than one way for a story to work. Maybe I've chosen stories with a driving philosophical point, rather than with looser arcs, just because that is my taste....

ETA: As Ann Leckie suggests, another possibility is that the misdirection is less obvious to me in the four classic stories because I already knew how they would go and wasn't on the hook for a first guess.

[image source]