Thursday, November 16, 2017

A Moral Dunning-Kruger Effect?

In a famous series of experiments, Justin Kruger and David Dunning found that people who scored in the lowest quartile of skill in grammar, logic, and (yes, they tried to measure this) humor tended to substantially overestimate their abilities, rating themselves as a bit above average in these skills. In contrast, people in the top half of ability had more accurate estimates (even tending to underestimate a bit). The average participant in each quartile rated themselves as above average, and the correlation between self-rated skill and measured skill was small.

For example, here's Kruger and Dunning's chart for logic ability and logic scores:


(Kruger & Dunning 1999, p. 1129).

Kruger and Dunning's explanation is that poor skill at (say) logical reasoning not only impairs one's performance at logical reasoning tasks but also impairs one's ability to evaluate one's own performance at logical reasoning tasks. You need to know that affirming the consequent is a logical error in order to realize that you've just committed a logical error in affirming the consequent. Otherwise, you're likely to think, "P implies Q, Q, so P. Right! Hey, I'm doing great!"

Although popular presentations of the Kruger-Dunning effect tend to generalize it to all skill domains, it seems unlikely that it does generalize universally. In domains where evaluating one's success doesn't depend on the skill in question, and instead depends on simpler forms of observation and feedback, one might expect more realistic self-evaluations by novices. (I haven't noticed a clear, systematic discussion of cases where Dunning-Kruger doesn't apply, though Kahneman & Klein 2009 is related; tips welcome.) For example: footraces. I'd wager that people who are slow runners don't tend to think that they are above average in running speed. They might not have perfect expectations; they might show some self-serving optimistic bias (Taylor & Brown 1988), but we probably won't see the almost flat line characteristic of Kruger-Dunning. You don't have to be a fast runner to evaluate your running speed. You just need to notice that others tend to run faster than you. It's not like logic where skill at the task and skill at self-evaluation are closely related.

So... what about ethics? Ought we to expect a moral Dunning-Kruger Effect?

My guess is: yes. Evaluating one's own ethical or unethical behavior is a skill that itself depends on one's ethical abilities. The least ethical people are typically also the least capable of recognizing what counts as an ethical violation and how serious the violation is -- especially, perhaps, when thinking about their own behavior. I don't want to over-commit on this point. Certainly there are exceptions. But as a general trend, this strikes me as plausible.

Consider sexism. The most sexist people tend to be the people least capable of understanding what constitutes sexist behavior and what makes sexist behavior unethical. They will tend either to regard themselves as not sexist or to regard themselves only as "sexist" in a non-pejorative sense. ("Yeah, so what, I'm a 'sexist'. I think men and women are different. If you don't, you're a fool.") Similarly, the most habitual liars might not see anything bad in lying or just assume that everyone else who isn't just a clueless sucker also lies when convenient.

It probably doesn't make sense to think that overall morality can be accurately captured in a single unidimensional scale -- just like it probably doesn't make sense to think that there's one correct unidimensional scale for skill at baseball or for skill as a philosopher or for being a good parent. And yet, clearly some baseball players, philosophers, and parents are better than others. There are great, good, mediocre, and crummy versions of each. I think it's okay as a first approximation to think that there are more and less ethical people overall. And if so, we can at least imagine a rough scale.

With that important caveat, then, consider the following possible relationships between one's overall moral character and one's opinion about one's overall moral character:

Dunning-Kruger (more self-enhancement for lower moral character):

[Note: Sorry for the cruddy-looking images. They look fine in Excel. I should figure this out.]

Uniform self-enhancement (everyone tends to think they're a bit better than they are):

U-shaped curve (even more self-enhancement for the below average):

Inverse U (realistically low self-image for the worst, self-enhancement in the middle, and self-underestimation for the best):

I don't think we really know which of these models is closest to the truth.
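For concreteness, here is a toy sketch of the four candidate curves. This is my own illustration: the functional forms and numbers are invented purely to display the shapes, not drawn from any data.

```python
# Toy sketch of four hypothetical relationships between actual moral character
# and self-rated moral character (both on a 0-100 percentile scale).
# The functional forms below are made up for illustration only.
import numpy as np
import matplotlib.pyplot as plt

actual = np.linspace(0, 100, 101)
models = {
    # Nearly flat: the worst self-enhance the most; the best slightly underestimate.
    "Dunning-Kruger": 55 + 0.35 * actual,
    # Everyone rates themselves about 15 points better than they are (capped at 100).
    "Uniform self-enhancement": np.minimum(actual + 15, 100),
    # Even more self-enhancement at the low end; the self-rating curve dips, then rises.
    "U-shaped": 65 - 0.5 * actual + 0.0085 * actual**2,
    # Roughly accurate at the bottom, self-enhancing in the middle, modest at the top.
    "Inverse U": 1.6 * actual - 0.008 * actual**2,
}

plt.plot(actual, actual, "k--", label="perfect self-knowledge")
for name, self_rating in models.items():
    plt.plot(actual, self_rating, label=name)
plt.xlabel("Actual moral character (percentile)")
plt.ylabel("Self-rated moral character (percentile)")
plt.legend()
plt.show()
```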

Thursday, November 09, 2017

Is It Perfectly Fine to Aim to be Morally Average?

By perfectly fine I mean: not at all morally blameworthy.

By aiming I mean: being ready to calibrate ourselves up or down to hit the target. I would contrast aiming with settling, which does not necessarily involve calibrating down if one is above target. (For example, if you're aiming for a B, then you should work harder if you get a C on the first exam and ease up if you get an A on the first exam. If you're willing to settle for a B, then you won't necessarily ease up if you happen fortunately to be headed toward an A.)

I believe that most people aim to be morally mediocre, even if they don't explicitly conceptualize themselves that way. Most people look at their peers' moral behavior, then calibrate toward so-so, wanting neither to be among the morally best (with the self-sacrifice that seems to involve) nor among the morally worst. But maybe "mediocre" is too loaded a word, with its negative connotations? Maybe it's perfectly fine, not at all blameworthy, to aim for the moral middle?


Here's one reason you might think so:

The Fairness Argument.

Let's assume (of course it's disputable) that being among the morally best, relative to your peers, normally involves substantial self-sacrifice. It's morally better to donate large amounts to worthy charities than to donate small amounts. It's morally better to be generous rather than stingy with one's time in helping colleagues, neighbors, and distant relatives who might not be your favorite people. It's morally better to meet your deadlines than to inconvenience others by running late. It's morally better to have a small carbon footprint than a medium-size or large one. It's morally better not to lie, cheat, and fudge in all the small (and sometimes large) ways that people tend to do.

To be near the moral maximum in every respect would be practically impossible near-sainthood; but we non-saints could still presumably be somewhat better in many of these ways. We just choose not to be better, because we'd rather not make the sacrifices involved. (See The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot for my discussion of a couple of ways of insisting that you couldn't be morally better than you in fact are.)

Since (by stipulation) most of your peers aren't making the sacrifices necessary for peer-relative moral excellence, it's unfair for you to be blamed for also declining to do so. If the average person in your financial condition gives 3% of their income to charity, then it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, fib, and flake X amount of the time, it's only fair that you get to do the same. Fairness requires that we demand no more than average moral sacrifice from the average person. Thus, there's nothing wrong with aiming to be only a middling member of the moral community -- approximately as selfish, dishonest, and unreliable as everyone else.


Two Replies to the Fairness Argument.

(1.) Absolute standards. Some actions are morally bad, even if the majority of your peers are doing them. As an extreme example, consider a Nazi death camp guard in 1941, who is somewhat kinder to the inmates and less enthusiastic about killing than the average death camp guard, but who still participates in and benefits from the system. "Hey, at least I'm better than average!" is a poor excuse. More moderately, most people (I believe) regularly exhibit small to moderate degrees of sexism, racism, ableism, and preferential treatment of the conventionally beautiful. Even though most people do this, one remains criticizable for it -- that you're typical or average in your degree of bias is at most a mitigator of blame, not a full excuser from blame. So although some putative norms might become morally optional (or "supererogatory") if most of your peers fail to comply, others don't show that structure. With respect to some norms, aiming for mediocrity is not perfectly fine.

(2.) The seeming-absurdity of tradeoffs between norm types. Most of us see ourselves as having areas of moral strength and weakness. Maybe you're a warm-hearted fellow, but flakier than average about responding to important emails. Maybe you know you tend to be rude and grumpy to strangers, but you're an unusually active volunteer for good causes in your community. My psychological conjecture is that, in implicitly guiding our own behavior, we tend to treat these tradeoffs as exculpatory or licensing: You forgive yourself for the one in light of the other. You let your excellence in one area justify lowering your aims in another, so that averaging the two, you come out somewhere in the middle. (In these examples, I'm assuming that you didn't spend so much time and energy on the one that the other becomes unfeasible. It's not that you spent hours helping your colleague so that you simply couldn't get to your email.)

Although this is tempting reasoning when you're motivated to see yourself (or someone else) positively, a more neutral judge might tend to find it strange: "It's fine that I insulted that cashier, because this afternoon I'm volunteering for river clean-up." "I'm not criticizable for neglecting Cameron's urgent email because this morning I greeted Monica and Britney kindly, filling the office with good vibes." Although non-consciously or semi-consciously we tend to cut ourselves slack in one area when we think about our excellence in others, when the specifics of such tradeoffs are brought to light, they often don't stand scrutiny.


Conclusion.

It's not perfectly fine to aim merely for the moral middle. Your peers tend to be somewhat morally criticizable; and if you aim to be the same, you too are somewhat morally criticizable for doing so. The Fairness Argument doesn't work as a general rule (though it may work in some cases). If you're not aiming for moral excellence, you are somewhat morally blameworthy for your low moral aspirations.

[image source]

Thursday, November 02, 2017

Two Roles for Belief Attribution

Belief attribution, both in philosophy and in ordinary language, normally serves two different types of role.

One role is predicting, tracking, or reporting what a person would verbally endorse. When we attribute belief to someone we are doing something like indirect quotation, speaking for them, expressing what we think they would say. This view is nicely articulated in (the simple versions of) the origin-myths of belief talk in the thought experiments of Wilfrid Sellars and Howard Wettstein, according to which belief attribution mythologically evolves out of a practice of indirect quotation or imagining interior analogues of outward speech. The other role is predicting and explaining (primarily) non-linguistic behavior -- what a person will do, given their background desires (e.g., Dennett 1987; Fodor 1987; Andrews 2012).

We might call the first role testimonial, the second predictive-explanatory. In adult human beings, when all goes well, the two coincide. You attribute to me the belief that class starts at 2 pm. It is true both that I would say "Class starts at 2 pm" and that I would try to show up for class at 2 pm (assuming I want to attend class).

But sometimes the two roles come apart. For example, suppose that Ralph, a philosophy professor, sincerely endorses the statement "women are just as intelligent as men". He will argue passionately and convincingly for that claim, appealing to scientific evidence, and emphasizing how it fits the egalitarian and feminist worldview he generally endorses. And yet, in his day-to-day behavior Ralph tends not to assume that women are very intellectually capable. It takes substantially more evidence, for example, to convince him of the intelligence of an essay or comment by a woman than of one by a man. When he interacts with cashiers, salespeople, mechanics, and doctors, he tends to assume less intelligence if they are women than if they are men. And so forth. (For more detailed discussion of these types of cases, see here and here.) Or consider Kennedy, who sincerely says that she believes money doesn't matter much, above a certain basic income, but whose choices and emotional reactions seem to tell a different story. When the two roles diverge, should belief attribution track the testimonial or the predictive-explanatory? Both? Neither?

Self-attributions of belief are typically testimonial. If we ask Ralph whether he believes that women and men are equally intelligent, he would presumably answer with an unqualified yes. He can cite the evidence! If he were to say that he doesn't really believe that, or that he only "kind of" believes it, or that he's ambivalent, or that only part of him believes it, he risks giving his conversational partner the wrong idea. If he went into detail about his spontaneous reactions to people, he would probably be missing the point of the question.

On the other hand, consider Ralph's wife. Ralph comes home from a long day, and he finds himself enthusiastically talking to his wife about the brilliant new first-year graduate students in his seminar -- Michael, Nestor, James, Kyle. His wife asks, what about Valery and Svitlana? [names selected by this random procedure] Ah, Ralph says, they don't seem quite as promising, somehow. His wife challenges him: Do you really believe that women and men are equally intelligent? It sure doesn't seem that way, for all your fine, egalitarian talk! Or consider what Valery and Svitlana might say, gossiping behind Ralph's back. With some justice, they agree that he doesn't really believe that women and men are equally intelligent. Or consider Ralph many years later. Maybe after a long experience with brilliant women as colleagues and intellectual heroes, he has left his implicit prejudice behind. Looking back on his earlier attitudes, his earlier evaluations and spontaneous assumptions, he can say: Back then, I didn't deep-down believe that women were just as smart as men. Now I do believe that. Not all belief attribution is testimonial.

It is a simplifying assumption in our talk of "belief" that these two roles of belief attribution -- the testimonial and the predictive-explanatory -- converge upon a single thing, what one believes. When that simplifying assumption breaks down, something has to give, and not all of our attributional practices can be preserved without modification.

[This post is adapted from Section 6 of my paper in draft, "The Pragmatic Metaphysics of Belief"]

[HT: Janet Levin.]

[image source]

Tuesday, October 31, 2017

Rationally Speaking: Weird Ideas and Opaque Minds

What a pleasure and an honor to have been invited back to Julia Galef's awesome podcast, Rationally Speaking!

If you don't know Rationally Speaking, check it out. The podcast weaves together ideas and guests from psychology, philosophy, economics, and related fields; and Julia has a real knack for the friendly but probing question.

In this episode, Julia and I discuss the value of truth, daringness, and wonder as motives for studying philosophy; the hazards of interpreting other thinkers too charitably; and our lack of self-knowledge about the stream of conscious experience.

Thursday, October 26, 2017

In 25 Years, Your Employer Will Directly Control Your Moods

[Edit Oct. 28: After discussion with friends and commenters in social media, I now think that the thesis should be moderated in two ways. First, before direct mood control becomes common in the workplace, it probably first needs to become voluntarily common at home; and thus it will probably take more than 25 years. Second, it seems likely that in many (most?) cases the direct control will remain in the employee's hands, though there will likely be coercive pressure from the employer to use it as the employer expects. (Thanks to everyone for their comments!)]

Here's the argument:

(1.) In 25 years, employers will have the technological capacity to directly control their employees' moods.

(2.) Employers will not refrain from exercising that capacity.

(3.) Most working-age adults will be employees.

(4.) Therefore, in 25 years, most working-age adults will have employers who directly control their moods.

The argument is valid in the sense that the conclusion (4) follows if all of the premises are true.

Premise 1 seems plausible, given current technological trajectories. Control could be either pharmacological or via direct brain stimulation. Pharmacological control could, for example, be through pills that directly influence your mood, energy levels, ability to concentrate, feeling of submissiveness, or passion for the type of task at hand. Direct brain stimulation could be through a removable TMS helmet that magnetically stimulates and suppresses neural activity in different brain regions, or with some more invasive technology. McDonald's might ask its cashiers to tweak their dials toward perky friendliness. Data entry centers might ask their temp workers to tweak their dials toward undistractable focus. Brothels might ask their strippers to tweak their dials toward sexual arousal.

Contra Premise 1, society might collapse, of course, or technological growth could stall or proceed much more slowly. If it's just slower, then we can replace "25 years" with 50 or 100 and retain the rest of the argument. It seems unlikely that moods are too complex or finicky to be subject to fairly precise technological control, given how readily they can be influenced by low-tech means.

I don't know to what extent people in Silicon Valley, on Wall Street, and at elite universities already use high-tech drugs to enhance alertness, energy, and concentration at work. That might already be a step down this road. Indeed, coffee might partly be seen this way too, especially if you use it to give your all to work, and then collapse in exhaustion when the caffeine wears off and you arrive home. My thought is that in a few decades the interventions might be much more direct, effective, and precisely targeted.

Premise 2 also seems plausible, given the relative social power of employers vs. employees. As long as there's surplus labor and a scarcity of desirable jobs, then employers will have some choice about whom to hire. If Starbucks has a choice between Applicant A who is willing to turn up the perky-friendly dial and otherwise similar Applicant B who is not so willing, then they will presumably tend to prefer Applicant A. If the Silicon Valley startup wants an employee who will crank out intense 16-hour days one after the next, and the technology is available for people to do so by directly regulating their moods, energy levels, focus, and passion, then the people who take that direct control, for their employers' benefit, will tend to win the competition for positions. If Stanford wants to hire the medical researcher who is publishing article after article, they'll find the researcher who dialed up her appetite for work and dialed down everything else.

Employees might yield control directly to the employer: The TMS dials might be in the boss's office, or the cafeteria lunch might include the pharmacological cocktail of the day. Alternatively, employees might keep their own hands on the dial, but experience substantial pressure to manipulate it in the directions expected by the employer. If that pressure is high enough and effective enough, then it comes to much the same thing. (My guess is that lower-prestige occupations (the majority) would yield control directly to the employer, while higher-prestige occupations would retain the sheen of self-control alongside very effective pressure to use that "self-control" in certain ways.)

Contra Premise 2, (a.) collective bargaining might prevent employers from successfully demanding direct mood control; or (b.) governmental regulations might do so; or (c.) there might be a lack of surplus labor.

Rebuttal to (a): The historical trend recently, at least in the U.S., has been against unionization and collective bargaining, though I guess that could change.

Rebuttal to (b): Although government regulations could forbid certain drugs or brain technologies, if there's enough demand for those drugs or technologies, employees will find ways to use them (unless enforcement gets a lot of resources, as in professional sports). Government regulations could specifically forbid employers from requiring that their employees use certain technologies, while permitting such technologies for private use. (No TMS helmets on the job.) But enforcement might again be difficult; and private use vs. use as an employee is a permeable line for the increasing number of jobs that involve working outside of a set time and location. Also, it's easier to regulate a contractual demand than an informal de facto demand. Presumably many companies could say that of course they don't require their employees to use such technologies. It's up to the employee! But if the technology delivers as promised, the employees who "voluntarily choose" to have their moods directly regulated will be more productive and otherwise behave as the company desires, and thus be more attractive to retain and promote.

Rebuttal to (c): At present there's no general long-term trend toward a shortage of labor; and at least for jobs seen as highly desirable, there will always be more applicants than available positions.

Premise 3 also seems plausible, especially on a liberal definition of "employee". Most working-age adults (in Europe and North America) are currently employees of one form or another. That could change substantially with the "gig economy" and more independent contracting, but not necessarily in a way that takes the sting out of the main argument. Even if an Uber driver is technically not an employee, the pressures toward direct mood control for productivity ought to be similar. Likewise for computer programmers and others who do piecework as independent contractors. If anything, the pressures may be higher, with less security of income and fewer formal workplace regulations.

Thinking about Premises 1-3, I find myself drawn to the conclusion that my children's and grandchildren's employers are likely to have a huge amount of coercive control over their moods and passions.

-------------------------------------

Related:

"What Would (or Should) You Do with Administrator Access to Your Mind?" (guest post by Henry Shevlin, Aug 16, 2017).

"Crash Space" (a short story by R. Scott Bakker for Midwest Studies in Philosophy).

"My Daughter's Rented Eyes" (Oct 11, 2016).

[image source: lady-traveler, creative commons]

Thursday, October 19, 2017

Practical and Impractical Advice for Philosophers Writing Fiction

Hugh D. Reynolds has written up a fun, vivid summary of my talk at Oxford Brookes last spring, on fiction writing for philosophers!

-----------------------------------

Eric Schwitzgebel has a pleasingly liberal view of what constitutes philosophy. A philosopher is anyone wrestling with the “biggest picture framing issues” of... well, anything.

In a keynote session at the Fiction Writing for Philosophers Workshop that was held at Oxford Brookes University in June 2017, Schwitzgebel, Professor of Philosophy at the University of California, Riverside, shared his advice–which he stated would be both practical and impractical.

Schwitzgebel tells us of a leading coiffeur who styles himself as a “Philosopher of Hair”. We laugh – but there’s something in this – the vagary, the contingency in favoured forms of philosophical output. And it’s not just hairdressers that threaten to encroach upon the Philosophy Department’s turf. Given that the foundational issues in any branch of science or art are philosophical in nature, it follows that most people “doing” philosophy today aren’t professional philosophers.

There are a host of ways one could go about doing philosophy, but of late a consensus has emerged amongst those that write articles for academic journals: the only proper way to “do” philosophy is by writing articles for academic journals. Is it time to re-stock the tool shed? Philosophical nuts come in all shapes and sizes; yet contemporary attempts to crack them are somewhat monotone.

As Schwitzgebel wrote in a Los Angeles Times op-ed piece:

Too exclusive a focus on technical journal articles excludes non-academics from the dialogue — or maybe, better said, excludes us philosophers from non-academics’ more important dialogue.

[Hugh's account of my talk continues here.]

-----------------------------------

Thanks also to Helen De Cruz for setting up the talk and to Skye Cleary for finding a home for Hugh's account on the APA blog.

[image detail from APA Blog]

Tuesday, October 17, 2017

Should You Referee the Same Paper Twice, for Different Journals?

Uh-oh, it happened again. That paper I refereed for Journal X a few months ago -- it's back in my inbox. Journal X rejected it, and now Journal Y wants to know what I think. Would I be willing to referee it for Journal Y?

In the past, I've tended to say no if I had previously recommended rejection, yes if I had previously recommended acceptance.

If I'd previously recommended rejection, I've tended to reason thus: I could be mistaken in my negative view. It would be a disservice both to the field in general and to the author in particular if a single stubborn referee prevented an excellent paper from being published by rejecting it again and again from different journals. If the paper really doesn't merit publication, then another referee will presumably reach the same conclusion, and the paper will be rejected without my help.

If I'd previously recommended acceptance (or an encouraging R&R), I've tended to just permit myself to think that the other journal's decision was probably the wrong call, and it does no harm to the field or to the author for me to serve as referee again to help this promising paper find the home it deserves.

I've begun to wonder whether I should just generally refuse to referee the same paper more than once for different journals, even in positive cases. Maybe if everyone followed my policy, that would overall tend to harm the field by skewing the referee pool too much toward the positive side?

I could also imagine arguments -- though I'm not as tempted by them -- that it's fine to reject the same paper multiple times from different journals. After all, it's hard for journals to find expert referees, and if you're confident in your opinion, you might as well share it widely and save everyone's time.

I'd be curious to hear about others' practices, and their reasons for and against.

(Let's assume that anonymity isn't an issue, having been maintained throughout the process.)

[Cross-posted at Daily Nous]

Monday, October 16, 2017

New Essay in Draft: Kant Meets Cyberpunk

Abstract:

I defend a how-possibly argument for Kantian (or Kant*-ian) transcendental idealism, drawing on concepts from David Chalmers, Nick Bostrom, and the cyberpunk subgenre of science fiction. If we are artificial intelligences living in a virtual reality instantiated on a giant computer, then the fundamental structure of reality might be very different than we suppose. Indeed, since computation does not require spatial properties, spatiality might not be a feature of things as they are in themselves but instead only the way that things necessarily appear to us. It might seem unlikely that we are living in a virtual reality instantiated on a non-spatial computer. However, understanding this possibility can help us appreciate the merits of transcendental idealism in general, as well as transcendental idealism's underappreciated skeptical consequences.

Full essay here.

As always, I welcome comments, objections, and discussion either as comments on this post or by email to my UCR email address.

Thursday, October 12, 2017

Truth, Dare, and Wonder

According to Nomy Arpaly and Zach Barnett, some philosophers prefer Truth and others prefer Dare. I love the distinction. It helps us see an important dynamic in the field. But it's not exhaustive. I think there are also Wonder philosophers.

As I see the distinction, Truth philosophers sincerely aim to present the philosophical truth as they see it. They tend to prefer modest, moderate, and commonsensical positions. They tend to recognize the substantial truth in multiple different perspectives (at least once they've been around long enough to see the flaws in their youthful enthusiasms), and thus tend to prefer multidimensionality and nuance. Truth philosophers would rather be boring and right than interesting and wrong.

Dare philosophers reach instead for the bold and unusual. They want to explore the boundaries of what can be defended. They're happy for the sake of argument to champion unusual positions that they might not fully believe, if those positions are elegant, novel, fun, contrarian, or if they think the positions have more going for them than is generally recognized. Dare philosophers sometimes treat philosophy like a game in which the ideal achievement is the breathtakingly clever defense of a position that others would have thought to be patently absurd.

There's a familiar dynamic that arises from their interaction. The Dare philosopher ventures a bold thesis, cleverly defended. ("Possible worlds really exist!", "All matter is conscious!", "We're morally obliged to let humanity go extinct!") If the defense is clever enough, so that a substantial number of readers are tempted to think "Wait, could that really be true? What exactly is wrong with the argument?" then the Truth philosopher steps in. The Truth philosopher finds the holes and presuppositions in the argument, or at least tries to, and defends a more seemingly sensible view.

This Dare-and-Truth dynamic is central to the field and good for its development. Sometimes there's more truth in the Dare positions than one would have thought, and without the Dare philosophers out there pushing the limits, seeing what can be said in defense of the seemingly absurd, then as a field we wouldn't appreciate those positions as vividly as we might. Also, I think, there's something intrinsically valuable about exploring the boundaries of philosophical defensibility, even if the positions explored turn out to be flatly false. It's part of the magnificent glory of life on Earth that we have fiendishly clever panpsychists and modal realists in our midst.

Now consider Wonder.

Why study philosophy? I mean at a personal level. Personally, what do you find cool, interesting, or rewarding about philosophy? One answer is Truth: Through philosophy, you discover answers to some of the profoundest and most difficult questions that people can pose. Another answer is Dare: It's fun to match wits, push arguments, defend surprising theses, win the argumentative game (or at least play to a draw) despite starting from a seemingly indefensible position. Both of those motivations speak to me somewhat. But I think what really delights me more than anything else in philosophy is its capacity to upend what I think I know, its capacity to call into question what I previously took for granted, its capacity to cast me into doubt, confusion, and wonder.

Unlike the Dare philosopher, the Wonder philosopher is guided by a norm of sincerity and truth. It's not primarily about matching wits and finding clever arguments. Unlike the Truth philosopher, the Wonder philosopher has an affection for the strange and seemingly wrong -- and is willing to push wild theses to the extent they suspect that those theses, wonderfully, surprisingly, might be true.

But in the Dare-and-Truth dynamic of the field, the Wonder philosopher can struggle to find a place. Bold Dare articles and sensible Truth articles both have a natural home in the journals. But "whoa, I wonder if this weird thing might be true?" is a little harder to publish.

Probably no one is pure Truth, pure Dare, or pure Wonder. We're all a mix of the three, I suspect. Thus, one approach is to leave Wonder out of your research profile: Find the Truth, where you can, publish that, and leave Wonder for your classroom teaching and private reading. Defend the existence of moderate naturalistically-grounded moral truths in your published papers; read Zhuangzi on the side.

Still, there are a few publishing strategies for Wonder philosophers. Here are four:

(1.) Find a Dare-like position that you really do sincerely endorse on reflection, and defend that -- optionally with some explicit qualifications indicating that you are exploring it only as a possibility.

(2.) Explicitly argue that we should invest a small but non-trivial credence in some Dare-like position -- for example, because the Truth-type arguments against it aren't fully compelling.

(3.) Find a Truth-like view that generates Wonder if it's true. For example, defend some form of doubt about philosophical method or about the extent of our self-knowledge. Defend the position on sensible, widely acceptable grounds; and then sensibly argue that one possible consequence is that we don't know some of the things that we normally take for granted that we do know.

(4.) Write about historical philosophers with weird and wonderful views. This gives you a chance to explore the Wonderful without committing to it.

In retrospect, I think one unifying theme in my disparate work is that it fits under one of these four heads. Much of my recent metaphysics fits under (1) or (2) (e.g., here, here, here). My work on belief and introspection mostly fits under (3) (with some (1) in my bolder moments): We can't take for granted that we have the handsome beliefs (e.g., "the sexes are intellectually equal") that we think we do, or that we have the moral character or types of experience that we think we do. And my interest in Zhuangzi and some of the stranger corners of early introspective psychology fits under (4).

Friday, October 06, 2017

Do Philosophy Professors Tend to Come from Socially Elite Backgrounds?

To judge from the examples we use in our essays, we philosophers are a pretty classy bunch. Evidently, philosophers tend to frequent the theater, delight in expensive wines, enjoy novels by George Eliot, and regret owning insufficiently many boats. Ah, the life of the philosopher, full of deep thoughts about opera while sipping Château Latour and lingering over 19th-century novels on your yacht!

Maybe it's true that philosophers typically come from wealthy or educationally elite family backgrounds? Various studies suggest that lower-income students and first-generation college students in the U.S. and Britain are more likely to choose what are sometimes perceived as lower risk, more "practical" majors like engineering, the physical sciences, and education, than they are to choose arts and humanities majors.

To explore this question, I requested data from the National Science Foundation's Survey of Earned Doctorates. The SED collects demographic and other data from PhD recipients from virtually all accredited universities in the U.S., typically with response rates over 90%.

I requested data on two relevant SED questions:

  • What is the highest educational attainment of your mother and father?

and also, since starting at community college is generally regarded as a less elite educational path than going directly from high school to a four-year university,

  • Did you earn college credit from a community or two-year college?

Before you read on... any guesses about the results?


    Community college attendance.

Philosophy PhD recipients [red line below] were less likely than PhD recipients overall [black line] to have attended community college, but philosophers might actually be slightly more likely than PhD recipients in the other arts and humanities to have attended community college [blue line]:

    [click picture for clearer image]

    [The apparent jump from 2003 to 2004 is due to a format change in the question, from asking the respondent to list all colleges attended (2003 and earlier) to asking the yes or no question above (2004 and after).]

    Merging the 2004-2015 data for analysis, 17% of philosophy PhD recipients had attended community college, compared to 15% of other arts and humanities PhDs and 19% of PhDs overall. Pairwise comparisons: philosophy 696/4107 vs. arts & humanities overall (excl. phil.) 7051/45966 (z = 2.7, p = .006); vs. all PhD recipients (excl. phil.) 69958/372985 (z = -3.0, p = .003).
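If you'd like to check the arithmetic, the reported comparisons can be reproduced with an ordinary two-proportion z-test on the merged counts. (This is my own sketch; the text doesn't say which test was used, but this standard test gives the same z and p values.)

```python
# Two-proportion z-test on the merged 2004-2015 community college counts.
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """Difference in sample proportions divided by the pooled standard error,
    with a two-tailed p-value from the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_tailed = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_tailed

# Philosophy vs. other arts & humanities (excl. phil.): 696/4107 vs. 7051/45966
print(two_prop_z(696, 4107, 7051, 45966))    # about z = 2.7, p = .006
# Philosophy vs. all PhD recipients (excl. phil.): 696/4107 vs. 69958/372985
print(two_prop_z(696, 4107, 69958, 372985))  # about z = -3.0, p = .003
```

The same function applied to the parental-education counts reported below (for example, philosophers' fathers 1129/2509 vs. arts & humanities' fathers 11110/26064) reproduces those comparisons as well.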

    The NSF also sent me the breakdown by race, gender, and ethnicity. I found no substantial differences by gender. Non-Hispanic white philosophy PhD recipients may have been a bit less likely to have attended community college than the other groups (17% vs. 21%, z = -2.2, p = .03) -- actually a somewhat smaller effect size than I might have predicted. (Among PhD recipients as a whole, Asians were a bit less likely (14%) and Hispanics [any race] a bit more likely (25%) to have attended community college than whites (20%) and blacks (19%).)

    In sum, as measured by rates of community college attendance, philosophers' educational background is only a little more elite than that of PhD recipients overall and might be slightly less elite, on average, than that of PhD recipients in the other arts and humanities.


    Parental Education.

    The SED divides parental education levels into four categories: high school or less, some college, bachelor's degree, or advanced degree.

    Overall, recipients reported higher education levels for their fathers (35% higher degree, 25% high school or less [merging 2010-2015]) than for their mothers (25% and 31% respectively). Interestingly, women PhD recipients reported slightly higher levels of maternal education than did men, while women and men reported similar levels of paternal education, suggesting that a mother's education is a small specific predictor of her daughter's educational attainment. (Among women PhD recipients [in all fields, 2010-2015], 27% report their mothers having a higher degree and 29% report high school or less; for men the corresponding numbers are 24% and 33%.)

    Philosophers report higher levels of parental education than do other PhD recipients. In 2010-2015, 45% of philosophy PhD recipients reported having fathers with higher degrees and 33% reported having mothers with higher degrees, compared to 43% and 31% in the arts and humanities generally and 35% and 25% among all PhD recipients (philosophers' fathers 1129/2509 vs. arts & humanities' fathers (excl. phil.) 11110/26064, z = 2.3, p = .02; philosophers' mothers 817/2512 vs. a&h mothers 8078/26176, z = 1.7, p = .09). Similar trends for earlier decades suggest that the small difference between philosophy and the remaining arts and humanities is unlikely to be chance.

    [click picture for clearer image]

    Although philosophy has a higher percentage of men among recent PhDs (about 72%) than do most other disciplines outside of the physical sciences and engineering, this fact does not appear to explain the pattern. Limiting the data either to only men or only women, the same trends remain evident.

    Recent philosophy PhD recipients are also disproportionately non-Hispanic white (about 85%) compared to most other academic disciplines that do not focus on European culture. It is possible that this explains some of the tendency toward higher parental educational attainment among philosophy PhDs than among PhDs in other areas. For example, limiting the data to only non-Hispanic whites eliminates the difference in parental educational attainment between philosophy and the other arts and humanities: 46% both of recent philosophy PhDs and of arts and humanities PhDs report fathers with higher degrees and 34% of both groups report mothers with higher degrees. (Among all non-Hispanic white PhD recipients, it's 41% and 31% respectively.)

    Unsurprisingly, parental education is much higher in general among PhD recipients than in the U.S. population overall: Approximately 12% of people over the age of 25 in the US have higher degrees (roughly similar for all age groups, including the age groups that would be expected of the parents of recent PhD recipients).

In sum, the parents of PhD recipients in philosophy tend to have somewhat higher educational attainment than those of PhD recipients overall and slightly higher educational attainment than those of PhD recipients in the other arts and humanities. However, much of this difference may be explainable by the overrepresentation of non-Hispanic whites within philosophy, rather than by a field-specific factor.


    Conclusion.

    Although PhD recipients in general tend to come from more educationally privileged backgrounds than do people who do not earn PhDs, philosophy PhD recipients do not appear to come from especially elite academic backgrounds, compared to their peers in other departments, despite our field's penchant for highbrow examples.

    -----------------------------------------

    ETA: Raw data here.

    ETA2: On my public Facebook link to this post, Wesley Buckwalter has emphasized that not all philosophy PhDs become professors. Of course that is true, though it looks like a majority of philosophy PhDs do attain permanent academic posts within five years of completion (see here). If it were the case that people with community college credit or with lower levels of parental education were substantially less likely than others to become professors even after completing the PhD, then that would undermine the inference from these data about PhD recipients to conclusions about philosophy professors in general.

    Monday, September 25, 2017

    How to Build an Immaterial Computer

    I'm working on a paper, "Kant Meets Cyberpunk", in which I'll argue that if we are living in a simulation -- that is, if we are conscious AIs living in an artificial computational environment -- then there's no particularly good reason to think that the computer that is running our simulation is a material computer. It might, for example, be an immaterial Cartesian soul. (I do think it has to be a concrete, temporally existing object, capable of state transitions, rather than a purely abstract entity.)

Since we normally think of computers as material objects, it might seem odd to suppose that a computer could be composed of immaterial soul-stuff. However, the well-known philosopher and theorist of computation Hilary Putnam remarked that there's nothing in the theory of computation that requires that computers be made of material substances (1965/1975, pp. 435-436). To support this idea, I want to construct an example of an immaterial computer -- which might be fun or useful even independently of my project concerning Kant and the simulation argument.

    --------------------------

    Standard computational theory goes back to Alan Turing (1936). One of its most famous results is this: Any problem that can be solved purely algorithmically can in principle be solved by a very simple system. Turing imagined a strip of tape, of unlimited length in at least one direction, with a read-write head that can move back and forth along the tape, reading alphanumeric characters written on that tape and then erasing them and writing new characters according to simple if-then rules. In principle, one could construct a computer along these lines -- a "Turing machine" -- that, given enough time, has the same ability to solve computational problems as the most powerful supercomputer we can imagine.

    Now, can we build a Turing machine, or a Turing machine equivalent, out of something immaterial?

    For concreteness, let's consider a Cartesian soul [note 1]: It is capable of thought and conscious experience. It exists in time, and it has causal powers. However, it does not have spatial properties like extension or position. To give it full power, let's assume it has perfect memory. This need not be a human soul. Let's call it Angel.

    A proper Turing machine requires the following:

  • a finite, non-empty set of possible states of the machine, including a specified starting state and one or more specified halting states;
  • a finite, non-empty set of symbols, including a specified blank symbol;
  • the capacity to move a read/write head "right" and "left" along a tape inscribed with those symbols, reading a symbol inscribed at whatever position the head occupies; and
  • a finite transition function that specifies, given the machine's current state and the symbol currently beneath its read/write head, a new state to be entered and a replacement symbol to be written in that position, plus an instruction to then move the head either right or left.
A Cartesian soul ought to be capable of having multiple states. We might suppose that Angel has moods, such as bliss. Perhaps he can be in any one of several discrete states along an interval from sad to happy. Angel’s initial state might be the most extreme sadness and Angel might halt only at the most extreme happiness.

    Although we normally think of an alphabet of symbols as an alphabet of written symbols, symbols might also be imagined. Angel might imagine a number of discrete pitches from the A three octaves below middle C to the A three octaves above middle C. Middle C might be the blank symbol.

    Instead of physical tape, Angel thinks of integer numbers. Instead of having a read-write head that moves right and left in space, Angel thinks of adding or subtracting one from a running total. We can populate the "tape" with symbols using Angel's perfect memory: Angel associates 0 with one pitch, +1 with another pitch, +2 with another pitch, and so forth, for a finite number of specified associations. All unspecified associations are assumed to be middle C. Instead of a read-write head starting at a spatial location on a tape, Angel starts by thinking of 0, and recalling the pitch that 0 is associated with. Instead of the read-write head moving right to read the next spatially adjacent symbol on the tape, Angel adds one to his running total and thinks of the pitch that is associated with the updated running total. Instead of moving left, he subtracts one. Thus, Angel's "tape" is a set of memory associations like that in the figure below, where at some point specific associations run out and Middle C is assumed on to infinity.

The transition function can be understood as a set of rules of this form: If Angel is in such-and-such a state (e.g., 23% happy) and is "reading" such-and-such a note (e.g., B2), then Angel should "write" such-and-such a note (e.g., G4), enter such-and-such a new state (e.g., 52% happy), and either add or subtract one from his running count. We rely on Angel's memory to implement the writing and reading: To "write" G4 when his running count is +2 is to commit to memory the idea that next time the running count is +2 he will "read" – that is, actively recall – the symbol G4 (instead of the B2 he previously associated with +2).
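To make the construction concrete, here is a minimal sketch in code. This is my own illustration, not part of the original argument: the particular program, pitch names, and state labels are invented for the example. The "tape" is just a dictionary of remembered number-to-pitch associations, the "head" is a running total, and the transition function is a table of (state, pitch) rules, exactly as described above.

```python
# A Turing machine equivalent modeled on Angel: no spatial tape, no spatial head.
BLANK = "C4"  # middle C plays the role of the blank symbol

def run_angel(memory, rules, start_state, halt_states, max_steps=10_000):
    """Run the soul-computer. `memory` maps integers to imagined pitches;
    any integer without an association is assumed to be the blank symbol."""
    state, total = start_state, 0  # Angel starts in the initial state, thinking of 0
    for _ in range(max_steps):
        if state in halt_states:
            break
        pitch = memory.get(total, BLANK)            # "read": recall the pitch for this number
        state, new_pitch, step = rules[(state, pitch)]
        memory[total] = new_pitch                   # "write": commit a new association to memory
        total += step                               # "move": add or subtract one
    return state, memory

# Toy program: starting from 0 and moving "right" (adding one), replace every
# remembered B2 with G4; on reaching a blank, halt in the happiest state.
rules = {
    ("sad", "B2"): ("sad", "G4", +1),
    ("sad", BLANK): ("happy", BLANK, +1),
}
final_state, memory = run_angel({0: "B2", 1: "B2", 2: "B2"}, rules, "sad", {"happy"})
print(final_state, dict(sorted(memory.items())))
# -> happy {0: 'G4', 1: 'G4', 2: 'G4', 3: 'C4'}
```

Nothing in the sketch requires the memory, the running total, or the rules to be realized in anything spatial; an ordinary material computer just happens to be one way of implementing the same state transitions.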

    As far as I can tell, Angel is a perfectly fine Turing machine equivalent. If standard computational theory is correct, he could execute any computational task that any ordinary material computer could execute. And he has no properties incompatible with being an immaterial Cartesian soul as such souls are ordinarily conceived.

    --------------------------

[Note 1] I attribute moods and imaginings to this soul, which Descartes believes arise from the interaction of soul and body. On my understanding of Descartes, such things are possible in souls without bodies, but if necessary we could change to more purely intellectual examples. I am also bracketing Descartes' view that the soul is not a "machine", which appears to depend on commitment to a view of machines as necessarily material entities (Discourse, part 5).

--------------------------

    Related:

    Kant Meets Cyberpunk (blogpost version, Jan 19, 2012)

    The Turing Machines of Babel (short story in Apex Magazine, July 2017)

    Tuesday, September 19, 2017

    New Paper in Draft: The Insularity of Anglophone Philosophy: Quantitative Analyses

    by Eric Schwitzgebel, Linus Ta-Lun Huang, Andrew Higgins, and Ivan Gonzales-Cabrera

    Abstract:

    We present evidence that mainstream Anglophone philosophy is insular in the sense that participants in this academic tradition tend mostly to cite or interact with other participants in this academic tradition, while having little academic interaction with philosophers writing in other languages. Among our evidence: In a sample of articles from elite Anglophone philosophy journals, 97% of citations are citations of work originally written in English; 96% of members of editorial boards of elite Anglophone philosophy journals are housed in majority-Anglophone countries; and only one of the 100 most-cited recent authors in the Stanford Encyclopedia of Philosophy spent most of his career in non-Anglophone countries writing primarily in a language other than English. In contrast, philosophy articles published in elite Chinese-language and Spanish-language journals cite from a range of linguistic traditions, as do non-English-language articles in a convenience sample of established European-language journals. We also find evidence that work in English has more influence on work in other languages than vice versa and that when non-Anglophone philosophers cite recent work outside of their own linguistic tradition it tends to be work in English.

    Full version here.

    Comments and criticisms welcome, either by email to my academic address or as comments on this post. By the way, I'm traveling (currently in Paris, heading to Atlanta tomorrow), so replies and comments approvals might be a bit slower than usual.

    Thursday, September 14, 2017

    What would it take for humanity to survive? (And does it matter if we do?) (guest post by Henry Shevlin)

    guest post by Henry Shevlin

    The Doctor: You lot, you spend all your time thinking about dying, like you're gonna get killed by eggs, or beef, or global warming, or asteroids. But you never take time to imagine the impossible. Like maybe you survive. (Doctor Who, “The End of the World”)

It’s tempting to think that humanity is doomed: environmental catastrophe, nuclear war, and pandemics all seem capable of wiping us out, and that’s without imagining all of the exciting new technologies that might be lying in wait just over the horizon, ready to devour us. However, I’m an optimist. I think there’s an excellent chance humanity will see this century out. And if we eventually become a multi-planetary species, the odds start looking really quite good for us. Nonetheless, in thinking about the potential value in human survival (or the potential loss from human extinction), I think we could do more first to pin down whether (and why) we should care about our survival, and exactly what would be required for us to survive.

    For many hardnosed people, I imagine there’s an obvious answer to both questions: there is no special value in human survival, and in fact, the universe may be a better place for everyone (including perhaps us) if we were to all quietly go extinct. This is a position I’ve heard from ecologists and antinatalists, and while I won’t debate it here, I find it deeply unpersuasive. As far as we know, humanity is the only truly intelligent species in the universe – the only species that is capable of great works of art, philosophy, and technological development. And while we may not be the only conscious species on earth, we are likely the only species capable of the more rarefied forms of happiness and value. Further to that, even though there are surely other conscious species on earth worth caring about, our sun will finish them off in a few billion years, and they’re not getting off this planet without our help (in other words: no dogs on Mars unless we put them there).

However, even if you’re sympathetic to this line of response, it admittedly doesn’t show there’s any value in specifically human survival. Even if we grant that humans are an important source of utility worth protecting, surely there are intelligent aliens somewhere out there in the cosmos capable of enjoying pleasures just as fancy as those we experience. Insofar as we’re concerned with human survival at all, then, maybe it should just be in virtue of our more general high capacity for well-being?

    Again, I’m not particularly convinced by this. Leaving aside the fact that we may be alone in the universe, I can’t shake the deep intuition that there’s some special value in the thriving of humanity, even if only for us. To illustrate the point, imagine that one day a group of tiny aliens show up in orbit and politely ask if they can terraform earth to be more amenable to them, specifically replacing our atmosphere with one composed of sulphur dioxide. The downside of this will be that humanity and all of the life on Earth will die out. On the upside, however, the aliens’ tiny size means that Earth could sustain trillions of them. “You’re rational ethical beings,” they say. “Surely, you can appreciate that it’s a better use of resources to give us your planet? Think of all the utility we’d generate! And if you’re really worried, we can keep a few organisms from every species alive in one of our alien zoos.”

    Maybe I’m parochial and selfish, but the idea that we should go along with the aliens’ wishes seems absurd to me (well, maybe they can have Mars). One of my deepest moral intuitions is that there is some special good that we are rationally allowed – if not obliged – to pursue in ensuring the continuation and thriving of humanity.

    Let’s just say you agree with me. We now face a further question: what would it take for humanity to survive in this ethically relevant sense? It’s a surprisingly hard question to answer. One simple option would be that we survive as long as the species Homo sapiens is still kicking around. Without getting too deeply into the semantics of “humanity”, it seems like this misses the morally interesting dimensions of survival. For example, imagine that in the medium term future, beneficial gene-modding becomes ubiquitous, to the point where all our descendants would be reproductively cut off from breeding with the likes of us. While that would mean the end of Homo sapiens (at least by standard definitions of species), it wouldn’t, to my mind, mean the end of humanity in the broader and more ethically meaningful sense.

    A trickier scenario would involve the idea that one day we may cease to be biological organisms, having all uploaded ourselves to computers or robot bodies. Could humanity still exist in this scenario? My intuition is that we might well survive this. Imagine a civilization of robots who counted biological humans among their ancestors, and went around quoting Shakespeare to each other, discussing the causes of the Napoleonic Wars, and debating whether the great television epic Game of Thrones was a satisfactory adaptation of the books. In that scenario, I feel that humanity in the broader sense could well be thriving, even if we no longer have biological bodies.

    This leads me to a final possibility: maybe what’s ethically relevant in our survival is really the survival of our culture and values: that what matters is really that beings relevantly like us are partaking in the artistic and cultural fruits of our civilization.

While I’m tempted by this view, I think it’s just a little bit too liberal. Imagine we wipe ourselves out next year in a war involving devastating bioweapons, and then a few centuries later, a group of aliens show up on Earth to find that nobody’s home. Though they’re disappointed that there are no living humans, they are delighted by the cultural treasure trove they’ve found. Soon, alien scholars are quoting Shakespeare and George R.R. Martin and figuring out how to cook pasta al dente. Earth becomes to the aliens what Pompeii is to us: a fantastic tourist destination, a cultural theme park.

    In that scenario, my gut says we still lose. Even though there are beings that are (let’s assume) relevantly like us that are enjoying our culture, humanity did not survive in the ethically relevant sense.

    So what’s missing? What is it that’s preserved in the robot descendant scenario that’s missing in the alien tourist one? My only answer is that some kind of appropriate causal continuity must be what makes the difference. Perhaps it’s that we choose, through a series of voluntary, purposive actions, to bring about the robot scenario, whereas the alien theme park is a mere accident. Or perhaps it’s the fact that I’m assuming there’s a gradual transition from us to the robots, rather than the eschatological lacuna of the theme park case.

    I have some more thought experiments that might help us decide between these alternatives, but that would be taking us beyond the scope of a blogpost. And perhaps my intuitions that got us this far are already radically at odds with yours. But in any case, as we take our steps into the next stage of human development, I think it’s important for us to figure out what it is about us (if anything) that makes humanity valuable.

    [image source]

    Tuesday, September 12, 2017

    Writing for the 10%

    [The following is adapted from my advice to aspiring writers of philosophical fiction at the Philosophy Through Fiction workshop at Oxford Brookes last June.]

    I have a new science fiction story out this month in Clarkesworld. I'm delighted! Clarkesworld is one of my favorite magazines and a terrific location for thoughtful speculative fiction.

    However, I doubt that you'll like my story. I don't say this out of modesty or because I think this story is especially unlikable. I say it partly to help defuse expectations: Please feel free not to like my story! I won't be offended. But I say it too, in this context, because I think it's important for writers to remind themselves regularly of one possibly somewhat disappointing fact: Most people don't like most fiction. So most people are probably not going to like your fiction -- no matter how wonderful it is.

    In fiction, so much depends on taste. Even the very best, most famous fiction in the world is disliked by most people. I can't stand Ernest Hemingway or George Eliot. I don't dispute that they were great writers -- just not my taste, and there's nothing wrong with that. Similarly, most people don't like most poetry, no matter how famous or awesome it is. And most people don't like most music, when it's not in a style that suits them.

    A few stories do appear to be enjoyed by almost everyone who reads them ("Flowers for Algernon"? "The Paper Menagerie"?), but those are peak stories of great writers' careers. To expect even a very good story by an excellent writer to achieve almost universal likability is like hearing that a philosopher has just put out a new book and then expecting it to be as beloved and influential as Naming and Necessity.

    Even if someone likes your expository philosophy, they probably won't like your fiction. The two types of writing are so different! Even someone who enjoys philosophically-inspired fiction probably won't like your fiction in particular. Too many other parameters of taste also need to align. They'll find your prose style too flowery or too dry, your characters too flat or too cartoonishly clever, your plot too predictable or too confusing, your philosophical elements too heavy-handed or too understated....

    I draw two lessons.

    First lesson: Although you probably want your friends, family, and colleagues to enjoy your work, and some secret inner part of you might expect them to enjoy it (because it's so wonderful!), it's best to suppress that desire and expectation. You need to learn to expect indifference without feeling disappointed. It's like expecting your friends and family and colleagues to like your favorite band. Almost none of them will -- even if some part of you screams out "of course everyone should love this song, it's so great!" Aesthetic taste doesn't work like that. It's perfectly fine if almost no one you know likes your writing. They shouldn't feel bad about that, and you shouldn't feel bad about that.

    Second lesson: Write for the people who will like it. Sometimes one hears the advice that you should "just write for yourself" and forget the potential audience. I can see how this might be good advice if the alternative is to try to please everyone, which will never succeed and might along the way destroy what is most distinctive about your voice and style. However, I don't think that advice is quite right, for most writers. If you really are just writing for yourself -- well, isn't that what diaries are for? If you're only writing for yourself, you needn't think about comprehensibility, since of course you understand everything. If you're only writing for yourself, you needn't think about suspense, since of course you know what's going to happen. And so forth. The better advice here is: write for the 10%. Maybe 10% of the people around you have tastes similar enough to your own that there's a chance that your story will please them. They are your target audience. Your story needn't be comprehensible to everyone, but it should be comprehensible to them. Your story needn't work intellectually and emotionally for everyone, but you should try to make it work intellectually and emotionally for them.

    When sending your story out for feedback, ignore the feedback of the 90%, and treasure the feedback of the 10%. Don't try to implement every change that everyone recommends, or even the majority of changes. Most people will never like the story that you would write. You wouldn't want your favorite punk band taking aesthetic advice from your country-music-loving uncle. But listen intently to the 10%, to the readers who are almost there, the ones who have the potential to love your story but don't quite love it yet. They are the ones to listen to. Make it great for them, and forget everyone else.

    [Cross-posted at The Blog of the APA]

    Tuesday, September 05, 2017

    The Gamer's Dilemma (guest post by Henry Shevlin)

    guest post by Henry Shevlin

    As an avid gamer, I’m pleased to find that philosophers are increasingly engaging with the rich aesthetic and ethical issues presented by videogames, including questions about whether videogames can be a form of art and about the moral complexities of virtual violence.

    One of the most disturbing ethical questions I’ve encountered in relation to videogames, though, is Morgan Luck’s so-called “Gamer’s Dilemma”. The puzzle it poses is roughly as follows. On the one hand, we don’t tend to regard people committing virtual murders as particularly ethically problematic: whether I’m leading a Mongol horde and slaughtering European peasants or assassinating targets as a killer for hire, it seems that, since no-one really gets hurt, my actions are not particularly morally troubling (there are exceptions to this, of course). On the other hand, however, there are still some actions that I could perform in a videogame that we’re much less sanguine about: if we found out that a friend enjoyed playing games involving virtual child abuse or torture of animals, for example, we would doubtless judge them harshly for it.

    The gamer’s dilemma concerns how we can explain or rationalize this disparity in our responses. After all, the disparity doesn’t seem to track any actual harm – there’s no obvious harm done in either case – or even the quantity of simulated harm (nuclear war simulations in which players virtually incinerate billions don’t strike me as unusually repugnant, for example). And while it might be that some forms of simulated violence can lead to actual violence, this remains controversial, and again, it’s unlikely that any such causal connections between simulated harm and actual harm would track our differing intuitions about the various kinds of potentially problematic actions we might take in videogames.

    However, while the Gamer’s Dilemma is an interesting puzzle in itself, I think we can broaden the focus to include other artforms besides videogames. Many of us have passions for genres like murder mystery stories, serial killer movies, or apocalyptic novels, all of which involve extreme violence but fall well within the bounds of ordinary taste. By contrast, someone with a particular penchant for stories about incest, necrophilia, or animal abuse might strike us as, well, more than a little disturbed. Note that this is true even when we focus just on obsessive cases: someone with an obsession with serial killer movies might strike us as eccentric, but we’d probably be far more disturbed by someone whose entire library consisted of books about animal abuse.

    Call this the puzzle of disturbing aesthetic tastes. What makes it the case that some tastes are disturbing and others not, even when both involve fictional harm? Is our tendency to form negative moral judgments about those with disturbing tastes rationally justified? While I’m not entirely sure what to think about this case, I am inclined to think that disturbing aesthetic tastes might reasonably guide our moral judgment of a person insofar as they suggest that that person’s broader moral emotions may be, well, a little out of the ordinary. Most of us respond to even the fictional torture of animals with revulsion rather than fascination, for example, and if someone doesn’t share this revulsion in fictional cases, it might be evidence that they are ethically deviant in other ways. Crucially, this doesn’t apply to depictions of things like fictional murder, since almost all of us have enjoyed a crime drama at some point in our lives, and it's well within the boundaries of normal taste.

    Note that there’s a parallel here with one possible response to Bernard Williams’s famous example of the truck driver who – through no fault of his own – kills a child who runs into the road, and subsequently feels no regret or remorse. As Williams points out, there’s no rational reason for the driver to feel regret – ex hypothesi, he did everything he could – yet we’d think poorly of him were he just to shrug the incident off (interestingly paralleled by the recent public outcry in the UK following a similar incident involving an unremorseful cyclist). I think what’s partly driving our intuition in such cases is the fact that a certain amount of irrational guilt and regret even for actions outside our control is to be expected as part of normal human moral psychology. When such regret is absent, it’s an indicator that a person is lacking at least some typical moral emotions. In much the same way, even if there is nothing intrinsically wrong about enjoying videogames or movies about animal torture, the fact that it constitutes a deviation from normal human moral attitudes might make us reasonably suspicious of such people’s broader moral emotions.

    I think this is a promising line to take with regard to both the gamer’s dilemma and the puzzle of disturbing tastes. One consequence of this, however, would be that as society’s norms and standards change, certain tastes may cease to be indicative of more general moral deviancy. For example, in a society with a long history of cannibal fiction, people might, despite being in all respects morally upstanding, lack the intense disgust reactions that we ourselves display. In such a society, then, the fact that someone was fascinated with cannibalism might not be a useful indicator of their broader moral attitudes. I’m inclined to regard this as a reasonable rather than counterintuitive consequence of the view, reflecting the rich diversity in societal taboos and fascinations. Nonetheless, no matter what culture I was visiting, I doubt I’d trust anyone who enjoyed fictional animal torture with watching my dog for the weekend.

    [image source]

    Friday, September 01, 2017

    How Often Do European Language Journals Cite English-Language vs Same-Language Work?

    By Eric Schwitzgebel and Ivan Gonzalez-Cabrera

    Elite English-language philosophy journals cite almost exclusively English-language sources, while elite Chinese-language philosophy journals cite from a range of linguistic traditions.

    How about other European-language journals? To what extent do articles in languages like French, German, and Spanish cite works originally written in the same language vs. works originally written in other languages?

    To examine this question, we looked at a convenience sample of established journals that publish primarily or exclusively in European languages other than English -- journals catalogued in the Philosophy section of JStor with available records running at least from 1999 through 2010. [note 1] We downloaded the most recently available JStor archived issue of each of these journals and examined the references of every research article in those issues (excluding reviews, discussion notes, editors' introductions, etc.). This gave us a total of 96 articles to examine: 41 in French, 23 in German, 14 in Italian, 8 in Portuguese, 6 in Spanish, and 4 in Polish.

    Although this is not a systematic or proportionate sample of non-English European-language journal articles, we believe it is broad and representative enough to provide a preliminary test of our hypothesis. Are citation patterns in these journals broadly similar to the citation patterns of elite Anglophone journals (where 97% of citations are to same-language sources)? Or are they closer to the patterns of elite Chinese-language journals (51% of citations to same-language sources)?

    In all, we had 2883 citations for analysis. For each citation, we noted the language of the citing article, whether the cited source had originally been published in the same language as the citing article or in a different language, and, if it was a different language, whether that language was English. As in our previous studies, sources in translation were coded based on the original language of publication rather than the language into which they had been translated (e.g., a translation of Plato into German would be coded as ancient Greek rather than German). We also noted the original year of publication of the cited source, sorting into one of four categories: ancient to 1849, 1850-1945, 1946-1999, or 2000-present.
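
    To make the bookkeeping concrete, here is a minimal sketch (in Python) of how coded citations like these could be represented and tallied. It is illustrative only, not our actual coding procedure, and the field names, labels, and example data are all invented.

```python
# Minimal sketch of the citation coding and tallying described above.
# Field names and category labels are illustrative, not the real coding scheme.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Citation:
    citing_language: str   # language of the citing article, e.g. "French"
    cited_language: str    # original language of the cited source
    year_category: str     # e.g. "ancient to 1849", "1850-1945", "1946-1999", "2000-present"

def summarize(citations):
    """Return the share of same-language, English, and other-language citations."""
    counts = Counter()
    for c in citations:
        if c.cited_language == c.citing_language:
            counts["same"] += 1
        elif c.cited_language == "English":
            counts["English"] += 1
        else:
            counts["other"] += 1
    total = sum(counts.values())
    return {key: n / total for key, n in counts.items()}

# Toy example: a German article citing one German source and one translated English source.
sample = [
    Citation("German", "German", "1946-1999"),
    Citation("German", "English", "2000-present"),
]
print(summarize(sample))  # {'same': 0.5, 'English': 0.5}
```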

    In our sample, 44% of citations (1270/2883) were to same-language sources, 30% (864/2883) were to sources originally published in English (some translated into the language of the citing article), and 26% (749/2883) were to all other languages combined. These results are much closer to the Chinese-language pattern of drawing broadly from a variety of language traditions than they are to the English-language pattern of citing almost exclusively from the same linguistic tradition.

    French- and German-language articles showed more same-language citation than did articles in other languages (51% and 71% respectively, compared to an average of 20% for the other sampled languages), but we interpret this result cautiously due to the small and possibly unrepresentative samples of articles in each language.

    Breaking the results down by year category, we found the following:

    Thus, in this sample, cited sources originally published between 1946 and 1999 were just about as likely to have been originally written in English as to have been written in the language of the citing article. When the cited source was published before 1946 or after 1999, it was less likely to be in English.

    Looking article by article, we found that only 5% of articles (5/96) cited exclusively same-language sources. This contrasts sharply with our study of articles in Anglophone journals, 73% of which cited exclusively English-language sources.

    We conclude that non-English European-language philosophy articles cite work from a broad range of linguistic traditions, unlike articles in elite Anglophone philosophy journals, which cite almost exclusively from English-language sources.

    One weakness of this research design is the unsystematic sampling of journals and languages. Therefore, we hope to follow up with at least one more study, focused on a more carefully chosen set of journals from a single European language. Stay tuned!

    ----------------------------------------------

    note 1: Included journals were Archives de Philosophie, Archiv für Rechts- und Sozialphilosophie, Crítica: Revista Hispanoamericana de Filosofía, Gregorianum, Jahrbuch für Recht und Ethik, Les Études Philosophiques, Revista Portuguesa de Filosofia, Revue de Métaphysique et de Morale, Revue de Philosophie Ancienne, Revue Internationale de Philosophie, Revue Philosophique de la France et de l'Étranger, Rivista di Filosofia Neo-Scolastica, Rivista di Storia della Filosofia, Roczniki Filozoficzne, Rue Descartes, Sartre Studies International, Studi Kantiani, and Studia Leibnitiana. We excluded journals for which substantially more than half of recent articles were in English, as well as journals not listed as philosophy journals on the PhilPapers journals list.

    note 2: Coding was done by two expert coders, each with a PhD in philosophy. One coder was fluent only in English but had some reading knowledge of German, French, and Spanish. The other coder was fluent in Spanish and English, had excellent reading knowledge of German and Portuguese, and had some reading knowledge of French and Italian. The coding task was somewhat difficult, especially for journals using footnote format. Expertise was required to recognize, for example, the original language and publication period of translated works, which was not always immediately evident from the citation information. We randomly selected 10 articles to code for inter-rater reliability, and in 91% of cases (235 of 258 citations) the coders agreed on both the original language and the year-category of original publication. Errors involved missing or double-counting some footnoted citations, typographical errors, or mistakes in language or year category. Errors did not fall into any notable pattern, and in our view occurred at an acceptable rate given the difficulty of the coding task and the nature of our hypothesis.

    Monday, August 28, 2017

    How Often Do Chinese Philosophy Journals Cite English-Language Work?

    By Linus Huang and Eric Schwitzgebel

    In a sample of elite Anglophone philosophy journals, only 3% of citations are to works that were originally written in a language other than English. Are philosophy journals in other languages similar? Do they mostly cite sources from their own linguistic tradition? Or do they cite more broadly?

    We will examine this question by looking at citation patterns from several non-English languages. Today we start by examining a sample of 208 articles published in fifteen elite Chinese-language journals from 1996 to 2016. [See Note 1 for methodological details.]

    In our sample of 208 Chinese-language articles, 49% (1422/2929) of citations are to works originally written in languages other than the language of the citing article, in stark contrast with our results for Anglophone philosophy journals.

    English is the most frequently cited foreign language, constituting 31% (915/2929) of all citations (compared to 17% for all other languages combined). Other cited languages are German, French, Russian, Japanese, Latin, Greek, Korean, Sanskrit, Spanish, Italian, Polish, Dutch, and Tibetan.

    Our sample of elite Anglophone journals contained no journals focused on the history of philosophy. In contrast, our sample of elite Chinese-language journals contains three that focus on the history of Chinese philosophy. Excluding the Chinese-history journals from the analysis, we found that the plurality of citations (44%, 907/2047) are to works originally written in English (often in Chinese translation for the older works). Only 32% (647/2047) of citations are to works originally written in Chinese (leaving 24% for all other languages combined).

    Looking just at the journals specializing in the history of Chinese philosophy, 98% (860/882) of citations are to works originally written in Chinese – a rate comparable to the rate of same-language citation in the non-historical elite Anglophone journals in our earlier analysis. In other words, Chinese journals specializing in the history of Chinese philosophy cite Chinese sources at about the same rate as Anglophone journals cite Anglophone sources when discussing general philosophy.

    We were not able to determine the original publication date of all the cited works. However, we thought it worth seeing whether the English-language citations are mostly of classic historical philosophers like Locke, Hume, and Mill, or whether instead they are mostly of contemporary writers. Thus, we randomly sampled 100 of the English-language citations. Of these 100, 68 (68%) were published between 1946 and 1999, and 19 (19%) from 2000 to the present.

    Finally, we broke the results down by year of publication of the citing article (excluding the three history journals). This graph shows the results.

    Point-biserial correlation analysis shows a significant increase in rates of citation of English-language sources from 1996 to 2016 (34% to 49%, r = .11, p < .001). Citation of both Chinese and other-language sources may also be decreasing (r = -.05, p = .03; r = -.08, p = .001), but we would interpret these trends cautiously due to the apparent U-shape of the curves and the possibility of article-level effects that would compromise the statistical independence of the trials.
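
    For readers who want to see how such a test is set up: each citation contributes one observation, pairing a 0/1 indicator for "cited source originally in English" with the publication year of the citing article. Below is a minimal sketch in Python using scipy -- the numbers are invented toy values, not our dataset.

```python
# Minimal sketch of a point-biserial correlation between a binary variable
# (citation is to an English-language source) and the citing article's year.
# Toy data only; not the actual dataset reported above.
from scipy import stats

is_english = [0, 1, 0, 0, 1, 1, 0, 1, 1, 1]            # 1 = source originally in English
citing_year = [1996, 1996, 2001, 2006, 2006, 2011,
               2011, 2016, 2016, 2016]                  # year of the citing article

r, p = stats.pointbiserialr(is_english, citing_year)
print(f"r = {r:.2f}, p = {p:.3f}")
```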

    Citation patterns in elite Chinese-language philosophy journals thus appear to be very different from citation patterns in elite Anglophone philosophy journals. The Anglophone journals cite almost exclusively works that were originally written in English. The Chinese journals cite about half Chinese sources and about half foreign language sources (mostly European languages), with English being the dominant language in the foreign language group, and increasingly so in recent years.

    We leave for later discussion the question of causes, as well as normative questions, such as the extent to which elite journals in various languages should cite mostly from their own language tradition versus aiming instead to cite more broadly from work written in a range of languages.

    Stay tuned for some similar analyses of journals in other languages!

    ------------------------------------------

    Note 1: The journals are: 臺灣大學哲學論評 (National Taiwan University Philosophical Review), 政治大學哲學學報 (NCCU Philosophical Journal), and 東吳哲學學報 (Soochow Journal of Philosophical Studies), which are ranked as Tier I philosophy journals by the Research Institute for the Humanities and Social Sciences, Ministry of Science and Technology, Taiwan; and 哲学研究 (Philosophical Researches), 哲学动态 (Philosophical Trends), 自然辩证法研究 (Studies in Dialectics of Nature), 道德与文明 (Morality and Civilization), 世界哲学 (World Philosophy), 自然辩证法通讯 (Journal of Dialectics of Nature), 伦理学研究 (Studies in Ethics), 现代哲学 (Modern Philosophy), 周易研究 (Studies of Zhouyi), 孔子研究 (Confucius Studies), 中国哲学史 (History of Chinese Philosophy), and 科学技术哲学研究 (Studies in Philosophy of Science and Technology), which are ranked as core philosophy journals in the Chinese Social Sciences Citation Index by the Institute for Chinese Social Sciences Research and Assessment, Nanjing University, China. We sampled the research articles of their first issues in 1996, 2001, 2006, 2011, and 2016, generating a list of 208 articles. A coder fluent in both Chinese and English and with a PhD in philosophy (Linus Huang) coded the references of these articles, generating a list of 2952 citations to examine. For each reference, we noted its original publication language. Translated works were coded based on the original language in which they were written rather than the language into which they had been translated. If that information was not available in the reference, Linus hand-coded it by searching online or based on his knowledge of the history of philosophy. The original language was determinable in 2929 of the 2952 citations.

    Thursday, August 24, 2017

    Am I a Type or a Token? (guest post by Henry Shevlin)

    guest post by Henry Shevlin

    Eric has previously argued that almost any answer to the problem of consciousness involves “crazyism” – that is, a commitment to one or another hypothesis that might reasonably be considered bizarre. So it’s in this spirit of openness to wild ideas that I’d like to throw out one of my own longstanding “crazy” ideas concerning our identity as conscious subjects.

    To set the scene, imagine that we have one hundred supercomputers, each separately running a conscious simulation of the same human life. We’re also going to assume that these simulations are all causally coupled together so that they’re in identical functional states at any one time – if a particular mental state type is being realized in one at a given time, it’s also being realized in all the others.

    The question I want to ask now is: how many conscious subjects – subjective points of view – exist in this setup? A natural response is “one hundred, obviously!” After all, there are one hundred computers all running their own simulations. But the alternate crazy hypothesis I’d like to suggest is that there’s just one subject in this scenario. Specifically, I want to claim that insofar as two physical realizations of consciousness give rise to a qualitatively identical sequence of experiences, they give rise to a single numerically identical subject of experience.

    Call this hypothesis the Identity of Phenomenal Duplicates, or IPD for short. Why would anyone think such a crazy thing? In short, I’m attracted by the idea that the only factors relevant to the identity and individuation of a conscious subject are subjective: crudely, what makes me me is just the way the world seems to me and my conscious reactions to it. As a subject of phenomenal experience, in other words, my numerical identity is fixed just by those factors that are part of my experience, and factors that lie outside my phenomenal awareness (for example, which of many possible computers are running the simulation that underpins my consciousness) are thus irrelevant to my identity.

    Putting things another way, I’d suggest that maybe my identity qua conscious subject is more like a type than a token, meaning that a single conscious subject could be multiply instantiated. As a helpful analogy, think about the ontology of something like a song, a novel, or a movie. The Empire Strikes Back has been screened billions of times over the years, but all of these were instantiations of one individual thing, namely the movie itself. If the IPD thesis is correct, then the same might be true for a conscious subject – that I myself (not merely duplicates of me!) could be multiply instantiated across a host of different biological or artificial bodies, even at a single moment. What *I* am, then, on this view, is a kind of subjective pattern or concatenation of such patterns, rather than a single spatiotemporally located object.

    Here’s an example that might make the view seem (marginally!) more plausible. Thinking back to the one hundred simulations scenario above, imagine that we pick one simulation at random to be hooked up to a robot body, so that it can send motor outputs to the body and receive its sensory inputs. (Note that because we’re keeping all the simulations coupled together, they’ll remain in ‘phenomenal sync’ with whichever sim we choose to embody as the robot). The robot wakes up, looks around, and is fascinated to learn it’s suddenly in the real world, having previously spent its life in a simulation. But now it asks us: which of the Sims am I? Am I the Sim running on the mainframe in Tokyo, or the one in London, or the one in São Paulo?

    One natural response would be that it was identical to whichever Sim we uploaded the relevant data from. But I think this neglects the fact that all one hundred Sims are causally coupled with one another, so in a sense, we uploaded the data from all of them – we just used one specific access point to get to it. To illustrate this, note that in transferring the relevant information from our Sims to the robot, we might wish (perhaps for reasons of efficiency) to grab the data from all over the place – there’s no reason we’d have to confine ourselves to copying the data over from just one Sim. So here’s an alternate hypothesis: the robot was identical to all of them, because they were all identical to one another – there was just one conscious subject all along! (Readers familiar with Dennett’s Where Am I? may see clear parallels here.)

    I find something very intuitive about the response IPD provides in this case. I realize, though, that what I’ve provided here isn’t much of an argument, and invites a slew of questions and objections. For example, even if you’re sympathetic to the reading of the example above, I haven’t established the stronger claim of IPD, which makes no reference to causal coupling. This leaves it open to say, for example, that had the simulations been qualitatively identical by coincidence (say, as a cluster of Boltzmann brains) rather than being causally coupled, their subjects wouldn’t have been numerically identical. We might also wonder about beings whose lives are subjectively identical up to a particular point in time, and afterwards diverge. Are they the same conscious subject up until the point of divergence, or were they distinct all along? Finally, there are also some tricky issues concerning what it means for me to survive in this framework – if I’m a phenomenal type rather than a particular token instantiation of that type, it might seem like I could still exist in some sense even if all my token instances were destroyed (although would Star Wars still exist in some relevant sense if every copy of it were lost?).

    Setting aside these worries for now, I’d like to quickly explore how the truth or falsity of IPD might actually matter – in fact, might matter a great deal! Consider a scenario in which some future utilitarian society decides that the best way to maximize happiness in the universe is by running a bunch of simulations of perfectly happy lives. Further, let’s imagine that their strategy for doing this involves simulating the same single exquisitely happy life a billion times over.

    If IPD is right, then they’ve just made a terrible mistake: rather than creating a billion happy conscious subjects, they’ve just made one exquisitely happy subject with a billion (hedonically redundant) instantiations! To rectify this situation, however, all they’d need to do would be to introduce an element of variation into their Sims – some small phenomenal or psychological difference that meant that each of the billion simulations was subjectively unique. If IPD is right, this simple change would increase the happiness in the universe a billion-fold.

    There are other potentially interesting applications of IPD. For example, coupled with a multiverse theory, it might have the consequence that you currently inhabit multiple distinct worlds, namely all those in which there exist entities that realize subjectively and psychologically identical mental states. Similarly, it might mean that you straddle multiple non-continuous areas of space and time: if the very same simulation is run at times t1 and t2, a billion years apart, then IPD would suggest that a single subject cohabits both instantiations.

    Anyway, while I doubt I’ve convinced anyone (yet!) of this particular crazyism of mine, I hope at least it might provide the basis for some interesting metaphysical arguments and speculations.

    [Image credit: Paolo Tonon]