
Knowledge and the Norm of Assertion

John Turri

4. Prospects and Horizons


1Over the past decade, the development of the literature on norms of assertion has been extraordinary. It has been one of the most lively and fruitful areas of philosophical inquiry during this time. And, as should be clear from the preceding chapters, it has paid enormous dividends. We now know that there is a deep normative connection between knowledge and assertion. The best way to understand this connection is that knowledge is the norm of assertion. That one simple idea packs immense explanatory punch. The amount and variety of evidence that it explains is astounding and, as far as I am aware, unprecedented in philosophy’s recent history. It is a hard-won discovery that illustrates philosophy’s value, exemplifies genuine philosophical progress, and is something the discipline can be proud of.

2The knowledge account reveals something deep and important about an absolutely central aspect of our lives as social beings. It identifies the core evaluative principle of our information-sharing practices and, in the process, illuminates a central plank in normative social cognition. I consider it to be one of the most significant contributions that contemporary philosophy has made to our understanding of the human condition — though I acknowledge, of course, the possibility of reasonable disagreement on this last point. But however the grand bookkeeping works out, further progress in this area will come not by asking whether knowledge is the norm of assertion. That would be pointless because we already know that it is. Even worse would be to continue the misguided hunt for alleged counterexamples to the knowledge account. Instead, further work should seek, on the one hand, to sharpen and extend our understanding of the normative relationship between knowledge and assertion and, on the other, to explore what clues this might give us to other important questions.

What “Should”?

3Many researchers working on the norm of assertion have accepted, often implicitly, several assumptions. They assume that there is a unique norm of assertion. They assume that the norm is rule-like because it says that you should make an assertion only under certain conditions, as opposed to, say, stipulating when one assertion is better than another. They assume that the rule is deontological because the “should” expresses the concept of permission; you have permission or authority to assert only under certain conditions, and to do otherwise is impermissible. They assume that the norm imposes a perfect requirement or standard, one that applies strictly to each and every assertion. They assume that the norm is concurrently restrictive because the condition must be satisfied prior to or concurrent with the assertion, rather than afterward. They assume that the norm is discretionary because it leaves it to your discretion whether to exercise your authority to assert; it does not obligate you to make any assertion. They assume that the norm is constitutive because it constitutes, organizes, or sustains the social practice of assertion, similar to how the rules of a government are essential to making it the sort of government it is, or how the rules of chivalry were essential to making it the cultural practice it was. Finally, they assume that the norm is also individuating because it distinguishes assertion from related speech acts such as guaranteeing or guessing, which are governed by different norms.

4We could propose a theory of the norm of assertion that rejected any or all of these assumptions. We might conjecture that assertion has no norm or multiple norms. We might conjecture that the norm does not say when an assertion should be made but, rather, only when one assertion is better than another. We might conjecture that the “should” expresses not the concept of permission but, rather, goodness; on this view, asserting what you do not know is not impermissible but bad. We might conjecture that the norm imposes only an imperfect requirement; for example, perhaps it requires only that most of your assertions express knowledge, or that the central tendency in your overall pattern of assertions is the expression of knowledge. We might conjecture that the norm only requires you to do certain things after your assertion has been challenged rather than meeting some standard at the time you assert. We might conjecture that the norm obligates you to make assertions under certain conditions. We might conjecture that the norm does not help constitute the practice of assertion but instead is a norm of morality or prudence. Finally, we might conjecture that the norm does not individuate assertion but instead is common to many speech acts.

5The evidence discussed in this book unquestionably demonstrates a deep normative relationship between assertion and knowledge. But it does not necessitate a unique interpretation of that relationship. In other words, although it is clear that an assertion should express knowledge, we do not yet fully understand this “should.” It would be counterproductive and misleading to claim false precision on the issue at this point. For instance, nothing in the evidence requires that knowledge is the unique norm of assertion or that the norm is purely discretionary. Nevertheless, the available evidence does help us begin narrowing things down. It does so in at least five ways.

6First, it supports the assumption that the knowledge norm is rule-like. People responded consistently to questions about what “should” or “should not” be asserted, which is rule-like. Second, it supports the assumption that “should” expresses a status more like permission than goodness. People react to reasonable false assertions by engaging in excuse validation and, as far as we know, excuse validation seems to occur when someone does something forbidden. Nevertheless, I acknowledge that further investigation could reveal greater complexity on this point (more on this in this chapter’s next section). Third, it supports the assumption that the norm imposes a perfect requirement. In response to the challenge, “You don’t know that,” it would be silly to respond, “Maybe not, but that’s no problem because I know most of the things I say.” But if the norm were imperfect, then such a response arguably should sound fine. Consider the paradigm case of an imperfect duty: charitable giving. If someone challenges you, “You didn’t give to charity this week,” it would be perfectly sensible to reply, “Maybe not, but that’s no problem because I give most weeks.” Nevertheless, again, I do not think that this conclusively settles the issue. Further investigation could reveal evidence best explained by a stringent but still imperfect norm.

7Fourth, the evidence suggests that the norm is concurrently restrictive. Many of the experimental studies asked people to evaluate assertions prospectively. A situation was described in which an agent had certain evidence and was asked a certain question. The agent had not made an assertion yet and, obviously, no one had challenged her assertion. Nevertheless, people’s assertability judgments were powerfully influenced by the presence of both truth and knowledge. Fifth, the sheer amount and variety of evidence suggests that knowledge is uniquely normatively connected to assertion. I have been unable to observe, either in my own behavior or in others’, similar connections between knowledge and any other speech act. But perhaps further investigation will prove me wrong on this point.

8What of the assumption that the knowledge rule helps to constitute the practice of assertion? There are only so many candidates for the sort of normativity at issue. Breaking the knowledge rule need not, and typically will not, be either immoral, imprudent, irrational, impolite, or illegal. And, as reviewed in Chapter 3, people’s judgments about assertability do not reduce to an evaluation of an assertion’s morality, rationality, etiquette, or legality — the “should” in “should assert” transcends those sources of normativity. The only familiar sort of normativity that seems to fit is constitutive. The assumption also coheres with what seems obvious in the following thought experiment. Imagine a community that speaks a language very similar to our own, except that knowledge-talk is completely absent from the give and take surrounding their declarative utterances. They do not prompt utterances by asking, “Do you know what time it is?” Someone who says, “Pickett’s Charge was a serious error,” stares in puzzlement upon being asked, “How do you know that?” Questioners are boggled upon hearing “I don’t know” in response to their question. Their evaluations of other people’s declarative utterances are insensitive to whether the proposition asserted is known or even whether it is true. Upon considering this community, it seems clear to me that they have a completely different information-sharing practice from ours. Assimilating into this community would require unlearning our practice of assertion and learning an entirely new practice. I would feel like a foreigner in their midst. Conversely, teaching them our practice would involve, among other things, sensitizing them to the relevance of knowledge.

9In this chapter’s final section, I give greater substance to the claim of constitutive normativity, in terms of the evolution and maintenance of communication systems.

Good Enough?

10In the previous section, I said that the available evidence supports the assumption that “should” expresses a status more like permission than goodness. But I also acknowledged that further investigation could reveal greater complexity on this point and, furthermore, that nothing in the evidence requires that knowledge is the unique norm of assertion. At the risk of diluting my basic message, I would now like to consider a slightly more complicated view of assertional norms. I am not convinced that this more complicated view is correct, but I discuss it briefly in the spirit of exploration.

11You enter some shabby government office to take care of some annoying paperwork for some irritating responsibility. The room is packed. A sign yellowing with age greets you as you enter, “Please take a number and we will be happy to assist you shortly.” You take a number, 117, note with disgust that they are presently serving number 9, sit down in one of the cheap, small plastic chairs crammed along the wall, and patiently wait your turn. Hours slowly pass. Finally, your turn approaches. “Now serving number 114,” the electronic display reads. You welcome the thought of soon being done with this unpleasant and aggravating experience. But you are also somewhat concerned about the older woman sitting across from you. As difficult as this long wait in cramped, uncomfortable quarters has been for you, it has clearly been more difficult for her. It is essential that she stay and resolve an urgent matter regarding her pension, she tells you, but her aching hip is not making it easy. She continues in earnest, “I see that you have number 117. I have number 122. Would you mind trading numbers with me, please? I could manage either way, of course, but extending me this favor would be a relief.” Granting her request would require you to wait in line about an extra ten minutes.

12It would be good for you to do her this favor. You should switch numbers. We might praise you for switching numbers. If you did not switch numbers, we might think less of you; we might offer some gentle criticism after the fact; and arguably, you should later regret your decision. All this despite our recognizing that you have a right to your number, that you would be within your rights to refuse her request, and thus that your refusing to switch is morally permissible. In a word, refusing to do the favor in this case is bad but permissible. (For those who do not share the intuition about the case as described, please adjust the case by increasing the older woman’s ticket number just enough until you feel it would no longer be wrong for you to refuse.)

13A bad but permissible action is sometimes called “suberogatory” (Driver 1992) or an “offence” (Chisholm 1963). This is the inverse of the supererogatory. A supererogatory act is good but not required. The supererogatory is an important normative category that helps us understand our moral judgments about actions in the interval spanning the required and the heroic. Similarly the suberogatory helps us, as one philosopher eloquently put it, to shed light on our normative judgments “lying in the dark corners between right and wrong” (Driver 1992: 295).

14If there are suberogatory actions, then why not suberogatory assertions too? If the latter are real, then perhaps belief, or justified belief, sets the standard for a permissible assertion, whereas knowledge sets the standard for a good assertion. On this view, your assertion should express knowledge in the way that you should switch numbers with the elderly woman in the office. We discourage and disapprove of assertions that do not express knowledge, just as we discourage and disapprove of not accommodating the elderly woman.

15In earlier work, I suggested that this slightly more complicated account of assertional norms has some virtues (Turri 2014a). In particular, I argued that it could explain much of the observational data discussed above in Chapter 1 without implying that reasonable false assertions are impermissible. When I first conceived of this account, I was responding to repeated objections from professional philosophers who told me, in no uncertain terms, that reasonable false assertions were pretheoretically compelling counterexamples to the knowledge account. I did not find them to be compelling and I thought the familiar distinction between impermissible and blameworthy performances was at least a plausible diagnosis of what might be going on (Williamson 2000: ch. 11; see also DeRose 2002). Nevertheless, mindful of my interlocutors’ intelligence and standing, I found the situation somewhat unsettling. I wondered, was I just in the grip of a theory? Was I systematically misinterpreting my social and introspective observations? From a purely theoretical perspective, classifying reasonable false assertions as bad but permissible seemed like a potentially good compromise.

16However, the question is not purely theoretical. There is a fact of the matter about how these assertions are ordinarily viewed. When the relevant empirical work was done, the results — especially the results on excuse validation — undermined the initial motivation for postulating the more complicated account. Empirical investigation supplemented the theoretical discussion at a critical juncture, ruling out mistaken objections and thereby avoiding unmotivated amendments and complications. Thus, in purely dialectical terms, no compromise is called for.

17Nevertheless, serious inquiry is always about more than dialectic. It is ultimately about learning the truth, uncovering the facts of interest, and the facts do not compromise. Perhaps more complicated accounts deserve consideration for independent reasons. I have discussed one such account here in the hope of setting a constructive example. In particular, I hope it exemplifies how, if the need arises, to rethink the normative relationship between knowledge and assertion without throwing the proverbial baby out with the bathwater.

Super Norm?

18We routinely make assertions, form beliefs, and make decisions. These are ubiquitous and unavoidable in the course of ordinary human affairs. It is important to do these things correctly. Our individual and collective well-being often depends on it. Unsurprisingly, then, researchers are keenly interested in what such correctness consists in. While an enormous amount of work has been done on the norms of assertion over the past decade, a related, albeit smaller, body of work has also developed on the norms of belief and decision (or practical reasoning). One intriguing possibility is that knowledge is the norm of all three — assertion, belief, and decision. Is knowledge a super norm?

Requisite Truth

19By now the reader is utterly familiar with the view that knowledge is the norm of assertion, the wealth of theoretical and empirical evidence supporting it, and the critic’s favorite tactic of producing alleged counterexamples in response. The most common and persistent sort of example features a reasonable false assertion. Interestingly, the exact same tactic is used in response to the hypotheses that knowledge is the norm of belief and of decision.

20According to the knowledge account of the norm of belief, you should believe a proposition only if you know that it is true (Williamson 2000: 255-6; see also Sutton 2007; Bach 2008: 77). Much less evidence supports this view about belief than its analog about assertion (for some bits and pieces, see Huemer 2007, 2011; Bird 2007; Turri 2011; Littlejohn 2013). Critics object to the knowledge account of belief by, again, claiming that it is “completely implausible” (McGlynn 2013: 390) and that it faces intuitively compelling counterexamples. Foremost among these are cases of reasonable false belief (Conee 2007; Benton, 2012: 6).

21According to the knowledge account of the norm of decision, you should base decisions on a proposition only if you know that it is true (Hawthorne 2004: 29–30; Hawthorne & Stanley 2008; Montminy 2013). Again, less evidence supports this view about decision than its analog about assertion (for some bits and pieces, see Hawthorne & Stanley 2008). And, once again, critics claim that it faces obvious counterexamples and is at odds with our ordinary practice of evaluating decisions. Foremost among the complaints are, yet again, cases of decision based on reasonable false beliefs (Hill & Schechter 2007: 115; Douven 2008: 106–07 n. 9).

22All three knowledge accounts — of assertion, belief, and decision — face the same objection. In each case critics object that the account is highly counterintuitive and revisionary. The alleged counterexamples focus on reasonable false beliefs. The implication is that an account that respects ordinary practice will feature a non-factive norm. Critics have developed a variety of non-factive views. Perhaps the most popular view is that evidence or justification is the norm of belief and decision.

23As discussed in Chapter 3, a series of behavioral experiments showed that critics misdescribed the ordinary way of evaluating reasonable false assertions. This raises the prospect that critics have also overstated their objections to factive accounts of the norms of belief and decision. Another series of experiments tested this possibility (Turri 2015d). People evaluated beliefs and decisions in cases where a source known to be highly reliable provided an agent with evidence that a certain proposition was true.

24Consider Mario, who manages human resources for a company with thousands of employees. He cannot keep track of all their names by memory, so he maintains a detailed inventory of them. He keeps the inventory up to date. He knows that the inventory is not perfect, but it is extremely accurate. Today his colleague informed him that the immigration office called. If the company employs someone named “Rosanna Winchester,” then Mario needs to make an appointment to revise paperwork with immigration, which will take several hours. But if they do not have an employee by that name, then Mario does not need to make an appointment. Mario consults the inventory. It says that he does have an employee by that name. At the end of the story, one group of people was told that the inventory was right. Another group of people was told that the inventory was wrong. Should Mario believe that they employ someone by that name? Should Mario make an appointment with the immigration office?

25These results show that critics of factive accounts have mischaracterized the ordinary way of evaluating beliefs and decisions. When the proposition was true, the vast majority of people said that Mario should believe that they employ someone by that name, and that Mario should make an appointment. But when the proposition was false, only a very small minority said that Mario should believe the proposition, or that Mario should make an appointment.

26When explaining their mistaken interpretation of cases like Mario’s, critics appeal heavily to the fact that the agent has “excellent reasons” or is “epistemically justified” or has sufficient “evidence” (Douven 2008: 106 n. 9; Hill & Schechter 2007: 115; Kvanvig 2009: 145) for the proposition in question. This seems to assume something important about evidence. In particular, it seems to assume that, on any particular occasion, the quality of evidence is insensitive to the truth of the matter. For example, consider Mario’s inventory of employee names that he knows to be extremely reliable. When Mario consults the inventory, it says that he has an employee by a certain name. The evidence this provides Mario is equally good regardless of whether the inventory is right on this particular occasion. Or so critics assume.

27Something like this view of evidence is popular among contemporary philosophers (e.g. Chisholm 1989: 76; see also BonJour 2003: 185–86; Lehrer & Cohen 1983). As one leading epistemologist puts it, consider a “typical case” where there is “nothing odd” and “things are exactly as the person believes them to be.” Compare that to an “unusual case” where “the person has that very same evidence, but the proposition in question is nevertheless false.” The “key thing to note” here is that in each case the person “has exactly the same reasons for believing exactly the same thing.” Consequently, if the person has a good reason to believe the proposition in either case, then the person has a good reason in both cases (Feldman 2003: 29).

28It turns out that this “truth-insensitive” view of evidence is neither natural nor intuitive. The same line of experiments just discussed also investigated people’s evaluation of evidence. Changing the truth value of the proposition radically changed how people rated the person’s evidence for the proposition. When Mario’s inventory was accurate, people overwhelmingly said that his evidence was very good. But when it was inaccurate, very few people said that the evidence was very good. Instead the central tendency was one of ambivalence about the evidence’s quality. A related series of experiments showed that this very same pattern of truth-sensitivity emerges for people’s judgments about whether a belief is justified, rational, responsible, and reasonable (Turri in press a).

29Overall, then, the evidence supports two conclusions. First, it disproves the accusation that factive accounts of belief and decision are counterintuitive or revisionary. The effect of truth on these evaluative judgments was not only statistically significant but also extremely large. From this I conclude belief and decision probably have factive norms. Second, it strongly suggests that critics of factive accounts of the norms of assertion, belief, and decision have been relying on an idiosyncratic view of evidence. In particular, critics have falsely assumed that “it is beyond question at an intuitive level” that the quality of someone’s evidence for a proposition is insensitive to the proposition’s truth value (BonJour 2003: 186).

Requisite Knowledge

30Truth and knowledge are the two leading contenders for a factive norm of belief and decision. The line of research under discussion also investigated the underlying causal relationships among people’s evaluations of beliefs, decisions, evidence, knowledge, and truth (Turri 2015d). When controlling for the influence of truth and knowledge judgments, evaluations of evidence (i.e. how good someone’s evidence is) did not affect the evaluation of beliefs and decisions (i.e. whether someone should believe or base decisions on a proposition). By contrast, when controlling for the influence of truth and evaluations of evidence, knowledge judgments still deeply affected the evaluation of beliefs and decisions. Crucially, knowledge judgments also mediated truth’s effect on these evaluations. This suggests that truth judgments influence these evaluations because they influence knowledge judgments.

31A knowledge account of the norms of belief and decision can easily explain these findings. Knowledge requires truth, so the knowledge account predicts that truth will influence the evaluations. But a truth account cannot easily explain the powerful independent influence that knowledge exerts on these evaluations. Neither can a truth account easily explain why knowledge mediates truth’s influence. Overall, a knowledge account is a much better fit for the evidence at hand.

Inside and Out

32A final set of experiments accentuated the deep normative connection between knowledge and the evaluation of decisions.

33It is well documented that people often misunderstand probabilities (e.g. Kahneman & Tversky 1972). But even when they understand the probabilities, interesting differences arise depending on the statistical information’s character. For example, mock jurors make different liability judgments across conditions where they assign equal probability to the defendant’s guilt (Wells 1992). Suppose that Smith’s dog was hit by a bus but no one witnessed the accident. In one version of the case, jurors learn that 80% of the buses operating in the town belonged to the Blue Bus Company, and 20% belonged to the Grey Bus Company. People estimate that it is 80% likely that a Blue bus killed the dog, but they tend to disagree that the jury should find the Blue Bus Company liable. In another version of the case, jurors learn that shortly before the accident, a weigh station attendant made a log entry on the bus that eventually killed the dog. The log says, “Blue bus.” It is known that 80% of “Blue bus” entries in the log correctly identify a Blue bus, and 20% incorrectly identify a Grey bus. Again people estimate that it is 80% likely that a Blue bus killed the dog, and they tend to agree that the jury should find the Blue Bus Company liable.
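One way to make explicit why the two versions of the case are probabilistically on a par (a reconstruction of the setup as described here, not Wells’s own presentation): in the first version the 80% figure is simply the base rate of Blue buses operating in town, while in the second it is the stated accuracy of “Blue bus” log entries for the very bus involved, so in each case the evidence licenses the same estimate:

\[
P(\text{a Blue bus hit the dog} \mid \text{80\% of buses are Blue}) = 0.80,
\qquad
P(\text{a Blue bus hit the dog} \mid \text{log entry reads ``Blue bus''}) = 0.80.
\]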

34Related to this finding is the “gatecrasher paradox” — if 95% of attendees snuck into the rodeo without paying, then why should the rodeo organizers not be entitled to sue a random attendee for non-payment, or even all attendees (Cohen 1981)? More generally related is the preference for “clinical” over “statistical” decision procedures. People prefer to make decisions based on “observations or impressions” specific to the case at hand rather than empirically established statistics, even when decades of scientific evidence show that relying on the statistics produces better results (Dawes, Faust & Meehl 1989: 1673; see also Dawes 1996; Meehl 1954).

35One way to unify these findings draws on the distinction between “inside” and “outside” probabilistic information (Lagnado & Sloman 2004; see also Kahneman & Tversky 1982). “Inside” information concerns a specific item in the case at hand. For example, the weigh station attendant entered information about the very bus that hit the dog. “Outside” information is generic and concerns distributions of properties or patterns. For example, a certain percentage of buses operating in town that day belonged to the Blue Bus Company. The attendant’s log book and the distribution of buses each makes it likely that a Blue bus hit the dog, but only the former does so from the “inside.” People are less likely to judge that the company is liable based on “outside” evidence.

36It has recently been shown that the inside/outside difference also affects knowledge judgments (Friedman & Turri 2015). For instance, suppose that Bob wonders whether his spider plant contains the (fictitious) chemical aracnium. Bob might consult a book on spider plants which reports that 99% of spider plants contain aracnium (outside information), or Bob might conduct a test showing that his spider plant is 99% likely to contain aracnium (inside information). People are more likely to attribute knowledge to Bob when he conducts the test, even though the likelihood is the same in both cases.

37Could it be, then, that the inside/outside difference affects judgments about what people should decide to do because it affects judgments about what they know? That is, do knowledge judgments mediate the inside/outside effect on decision evaluations? A recent experiment was designed specifically to answer this question, and the answer appears to be “yes” (Turri, Friedman & Keefner in press: Experiment 4). Participants read a story about Gary, who is suing the Blue Cab Company. Gary’s prize-winning rose garden was destroyed by a taxi cab that drove on to his front lawn. During the trial, jurors learned that only two cab companies operate in the town: the Blue Cab Company and the Green Cab Company. At this point, the story ends in one of two ways. One group of people was told that according to a computerized analysis of the video footage from when Gary’s garden was destroyed, “80% of cabs on the road were Blue Cabs” (outside information). The other group was told that according to the analysis, “the cab was 80% likely to be a Blue Cab” (inside information). People were then asked to rate the probability that a Blue cab destroyed the garden, whether the jurors know that a Blue cab destroyed the garden, and whether the jurors should rule against the Blue Cab Company.

38People in both groups responded to the probability question similarly (on average, they judged the probability to be just below 80%), but they gave significantly different answers to the questions about what the jurors know and should decide. In other words, the inside/outside difference affected knowledge judgments and decision evaluations, but it did not affect probability estimates. Outside information made people significantly less likely to attribute knowledge. It also made them significantly less likely to say that the jurors should rule against the company. Critically, statistical analysis showed that the inside/outside effect on decision evaluations was completely mediated by knowledge judgments. Once we control for the influence of knowledge judgments on decision evaluations, the inside/outside difference has no further effect on decision evaluations. The same pattern was observed using several different examples pertaining to legal and medical decision‑making. A follow-up study investigated whether attributions of certainty also mediated the inside/outside effect on decision evaluations (Turri, Friedman & Keefner in press, Experiment 5). Researchers found no evidence that certainty mediated the effect.
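To make the mediation claim concrete, here is a minimal sketch, in Python, of the kind of regression-based mediation analysis described above. The data are simulated and the variable names (condition, knowledge, decision) are hypothetical; this is not the authors’ actual analysis script. The signature of full mediation is that the inside/outside manipulation predicts decision evaluations on its own, but its direct effect shrinks toward zero once knowledge attributions are entered as a predictor.

# Minimal mediation sketch with simulated data (hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# condition: 0 = "outside" statistical evidence, 1 = "inside" evidence about the very cab
condition = rng.integers(0, 2, n)

# Simulate the hypothesized causal structure: condition shifts knowledge attributions,
# and knowledge attributions (not condition directly) drive decision evaluations.
knowledge = 3.0 + 2.0 * condition + rng.normal(0, 1, n)   # knowledge-attribution rating
decision = 1.0 + 1.5 * knowledge + rng.normal(0, 1, n)    # "jurors should rule against..." rating

df = pd.DataFrame({"condition": condition, "knowledge": knowledge, "decision": decision})

# Step 1: total effect of condition on decision evaluations.
total = smf.ols("decision ~ condition", data=df).fit()

# Step 2: effect of condition on the proposed mediator (knowledge attributions).
to_mediator = smf.ols("knowledge ~ condition", data=df).fit()

# Step 3: decision evaluations regressed on both condition and knowledge.
# Full mediation is suggested if knowledge remains a strong predictor while
# the direct effect of condition shrinks toward zero.
direct = smf.ols("decision ~ condition + knowledge", data=df).fit()

print("Total effect of condition:  ", round(total.params["condition"], 2))
print("Condition -> knowledge:     ", round(to_mediator.params["condition"], 2))
print("Direct effect of condition: ", round(direct.params["condition"], 2))
print("Knowledge -> decision:      ", round(direct.params["knowledge"], 2))

On data generated with this structure, the total effect of condition is sizeable while its direct effect is near zero once knowledge is controlled for, which mirrors the pattern of full mediation reported in the experiment.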

39Altogether, this suggests that the inside/outside difference affects decision evaluations because it affects knowledge judgments. An easy explanation for this result is that people’s decision evaluations are based on their knowledge judgments: they think that what we should do depends on what we know. In a word, knowledge is the norm of decision.

Intuitive Connections

40More general theoretical considerations also support knowledge accounts of belief and decision. If, as many researchers suspect, belief is best understood as assertion to oneself (e.g. Sellars 1963: 180; Dummett 1981: 362; see also Adler 2002), then the knowledge account of assertion entails a knowledge account of belief. Alternatively suppose, as many other researchers expect, that belief is prior to assertion in the order of explanation and assertion is best understood as the expression of belief (e.g., Searle 1979; Bach & Harnish 1979). In that case, the knowledge account of belief could help explain why the knowledge account of assertion is true (Bach 2008). Either way, knowledge accounts of belief and assertion form a natural pair. Moreover, for my part, I am suspicious of the idea that one could be adequately positioned to properly believe and assert a proposition while simultaneously being inadequately positioned to make decisions based on it. That is, I find the following combination of claims counterintuitive: with respect to a certain proposition, you should believe it, and you should assert it, but you should not make decisions based on it. Further investigation of the matter is obviously required. But if my suspicion is well founded, then the knowledge account of decision comes along for the ride.

41Of course, however theoretically elegant and satisfying such a unified normative pantheon might be, the world is under no obligation to cooperate in such matters. Much more relevant is the empirical evidence discussed above.

A Coincidence?

42If it is true that knowledge is the norm of assertion, belief and decision, it seems highly unlikely that this is just a coincidence. If it is not a coincidence, then it is very interesting to ask why these activities all share a common standard. Some have argued that there are serious difficulties in providing a theoretical explanation for why they share a common standard (Brown 2012: 123). But the argument rests on several questionable assumptions. First, it assumes that the norms in question impose perfect requirements (i.e. they are “exceptionless” standards) (Brown 2012: 129). I am sympathetic to this assumption when it comes to assertion but, as mentioned earlier in this chapter, it should not be treated as a permanently fixed point. Instead we should be open to the possibility that as we deepen and broaden our investigation, earlier assumptions should be revised. Second, the argument assumes that an explanation for a common standard should not depend on “controversial assumptions” (Brown 2012: 130). But here “controversial assumption” seems to mean “goes against assumptions popular in recent analytic epistemology.” The track record of recent analytic epistemology, however, is not very good and I am disinclined to show such deference. Third, the argument assumes that we have certain “intuitions about assertion and practical reasoning” (Brown 2012: 139). In particular, it assumes that how much is at stake directly affects our intuitive evaluation of assertions and decisions. But recent work on the psychology of these evaluations suggests the opposite conclusion (Turri & Buckwalter in press).

43Fourth, and finally, the argument assumes that theories should be measured against alleged intuitions about highly artificial and fanciful thought experiments. For example, one such thought experiment begins by asking us to imagine a subject, Luke, who knows that he lives in a universe ruled by a “whimsical god” who “severely punishes anyone” who bases a decision on a false proposition, even though this god allows false assertions. This creates the possibility, we are told, that Luke should assert a proposition that he should not base decisions on (Brown 2012: 141). Another thought experiment asks us to imagine someone, call her Jessica, setting knives at a dinner table. Jessica believes that there are knives on the table when she is accosted by a skeptic who argues that she has “no evidence” for believing propositions about “the external world,” such as the proposition that there are knives on the table. We are then told that Jessica does not seem to be in “a good enough position” to say that there are knives on the table, even though “it seems that she is still in a good enough position” to believe this proposition and base decisions on it (Brown 2012: 140). I find that things seem otherwise to me. But by now we should be highly suspicious of philosophers’ claims about what is intuitive in such cases. Equally importantly, it is ill advised to use such peculiar cases to seriously test theories.

44Thus, if knowledge is the norm of assertion, belief, and decision, then I submit that we should think that this is no accident, that there is a deeper explanation, and that uncovering it promises to illuminate important properties of assertion, belief, or decision. There seem to be at least two basic approaches to a deeper explanation here. On the first approach, knowledge is most fundamentally the norm of assertion, belief, or decision, and it is the norm of the other two because of this fundamental fact. So, for example, it could be that knowledge is the norm of decision and because of this it is the norm of belief and assertion too. On the second approach, knowledge is the fundamental principle of normative social cognition and, in virtue of this, is applied to the evaluation of all three activities, and likely others too. Testing between these two approaches, and perhaps others, is a worthwhile task for future work.

Why Knowledge?

45An assertion should express knowledge because knowledge transmission is the point of the practice of assertion. This is the proximate explanation for why knowledge is the norm of assertion, which I defended at the end of Chapter 1. But other important questions remain unanswered. Why does knowledge play this role? Is it just an accident that knowledge plays this role, or is there something about the nature of knowledge that best suits it for the role? The fact that these questions remain open does not call into question the obvious fact that knowledge is the norm of assertion. Instead, it highlights the need for further investigation. It also presents an exciting opportunity for cross-fertilization between philosophy and the social and life sciences. In an effort to encourage research in this direction, I will offer a hypothesis informed by decades of findings on animal communication.

46Even the most rudimentary forms of life, including bacteria, communicate (Crespi 2001; Waters & Bassler 2005; Keller & Surette 2006). Animal communication is rooted in the detection of information about the animal’s environment. Cues are detectable properties of conditions that interest the animal, such as the presence of a predator, rival, potential mate, or food source. The function of sense organs is to monitor for relevant environmental cues. Communication is based on signals, a special kind of cue whose function is to provide information to another organism for use in making decisions.

47Communication is an adaptive behavioral trait shaped by natural selection (Darwin 1872; Lloyd 1971; Otte 1974; Bradbury & Vehrencamp 2011). Communication systems were selected for and evolved because they benefit sender and receiver (Maynard Smith & Harper 2004). Adaptive and stable communication systems make it worth the sender’s effort to send signals and worth the receiver’s effort to monitor and parse signals. In social species, improved decision making by other group members tends to benefit the sender. For example, social animals, such as the southern mountain cavy or the black-tailed prairie dog, benefit from foraging in larger groups (Taraborelli 2008; Devenport 1989; see also Lima & Bednekoff 1999). In larger groups, individuals can spend less time actively scanning for predators and more time feeding, so helping group members avoid predation benefits alarm callers, and receivers obviously benefit from avoiding predation. Benefit to the sender can be amplified when group membership is positively correlated with kinship (Hamilton 1964; Sherman 1977; Reeve 1997). To take a less obvious example, prey and predator have a mutual interest in sharing certain information. Prey will often stare intently at nearby predators and follow their movements. This signals to the predator that it has lost the element of surprise, which is often enough to call off the hunt (Hasson 1991; FitzGibbon 1994; Zuberbühler, Jenny & Bshary 1999). Prey obviously benefit from not being hunted, and predators benefit from avoiding hunts with a very low probability of success.

48When the sender and receiver benefit similarly from how receivers respond to signals, receivers can count on honest signals. But when the preferences of sender and receiver diverge, senders can also benefit from sending dishonest signals. For example, in some species, over two-thirds of predator alarm calls are false and are often merely intended to scare conspecifics away from a preferred food source or to gain mating opportunities (Haftorn 2000; Wheeler 2009; Bro Jørgensen & Pangle 2010; Wheeler & Hammerschmidt 2013). What keeps senders from routinely sending false or misleading signals when it could benefit them? The simple answer is that, in the long run, receivers adapt: they evolve to better detect dishonesty in a signal, ignore certain signals, and attend to more honest signals. Stable and enduring communication systems thus include features that promote honest signaling. (In the animal communication literature, these features are often called “honesty guarantees.” But this term has potential to mislead because the “guarantees” often simply make honesty likely enough rather than guarantee it.)

49Researchers have identified several mechanisms that promote honest signaling. Here I will focus on two of them. The first mechanism is to attend preferentially to “performance signals,” which only some signalers can produce (Hurd & Enquist 2005). Some performance signals are indexed to physical characteristics such as body size. For instance, smaller toads cannot croak as deeply as larger toads, so lower-frequency croaks are restricted to larger specimens (Davies & Halliday 1978). Tigers mark territory by scratching tree trunks as high as they can reach, so scratch height is indexed to body size (Thapar 1986; Bradbury & Vehrencamp 2011: 297). Signalers cannot send dishonest signals that their body size prevents them from sending.

50Other performance signals are “information constrained” (Hurd & Enquist 2005). For instance, consider the earlier example of pursuit-deterrent signaling by potential prey. An antelope stares at a lioness in the brush and follows her movements, thereby signaling to the lioness “both its alerted state and the futility of continuing the hunt. This signal can be performed only by a signaler who knows the location of the hidden predator” (Hurd & Enquist 2005: 1160). To take another example, neighboring sparrows usually share at least two songs in their repertoire. Sparrows view established neighbors as less threatening than strangers (Temeles 1994). A neighbor’s song will be heard frequently, even when the neighbor is respecting the bird’s territory. A sparrow typically responds to a neighbor’s song by singing a different song it shares with the neighbor (“repertoire matching”), which expresses tolerance. By contrast, a sparrow typically responds to a stranger’s song by imitating it (“song matching”), which expresses aggressive intent (Vehrencamp 2001). Since repertoire matching “requires knowledge of the singer’s repertoire,” or having “committed that bird’s repertoire to memory,” it is an informationally-constrained signal of neighborhood (Beecher, Campbell, Burt, Hill & Nordby 2000: 22, 25).

51The second mechanism is social policing, which involves testing for honesty and retaliating for dishonesty. This is important for signals whose form is arbitrarily associated with their significance (“conventional signals”). For example, in some sparrow species the amount of dark plumage on the head and throat is correlated with social dominance (Rohwer 1977). This visible marker of dominance is a “badge of status” that allows conspecifics to settle disputes over resources without resorting to potentially harmful fighting (Dawkins & Krebs 1978). The benefit of a large badge is deference from individuals with small badges. But if a larger badge confers such advantages, then why do subordinates not simply molt into plumage that resembles a higher rank? Because a system of “social control” actively prevents it (Moller 1987). Individuals with large badges frequently challenge one another, ensuring that pretenders will be exposed. In experiments that artificially enlarged the badges of subordinate individuals, the altered individuals suffered “social persecution” and, eventually, bouts of self-imposed “exclusion” from the flock (Rohwer 1977: 116). As two ethologists memorably put it, “This persecution resulted in a nearly fourfold increase in the rate of attacks upon these subordinates and was so severe that several of the dyed birds resorted to visiting the food patch alone” (Rohwer & Rohwer 1978: 1013). Social punishment of dishonest signalers has also been observed in lizards and wasps (Thompson & Moore 1991; Tibbetts & Dale 2004; Tibbetts & Izzo 2010).

52Behavioral ecologists describe “receiver retaliation” as a “behavioral rule” that disincentivizes dishonest conventional signaling. Indeed, they consider this rule to be “the key factor” that inhibits dishonesty and makes efficient conventional communication systems evolutionarily stable (Bradbury & Vehrencamp 2011: 411). Absent retaliatory costs, dishonesty would proliferate and eventually conventional signals would just be ignored. If the signals are ignored, then senders gain no advantage from producing them and will eventually stop producing them. Conventional communication would be abridged severely, if not abrogated entirely.

53Primates retaliate against conspecifics both for providing false information and for withholding relevant information — for being “detected as a cheater” (Hauser 1992: 12137). But retaliation does not always take the form of outright aggression. Sometimes the cost is diminished reputation and distrust from other group members, known as “skeptical responding” (Cheney & Seyfarth 1988; Gouzoules, Gouzoules & Miller 1996). The sophistication of skeptical responding in some monkey species is truly remarkable. For example, a vervet monkey who gives false leopard alarm calls will eventually be ignored on subsequent leopard alarm calls, but this skepticism is not transferred to that monkey’s eagle alarm calls. Vervets have multiple calls indicating the presence of another group of vervets (“intergroup calls”). The calls are very different acoustically, including a longer “wrr” and a shorter “chutter.” A vervet who repeatedly gives false “wrrs” will also have its subsequent “chutters” ignored. In other words, vervets respond skeptically based on a caller’s history as well as a call’s abstract semantic properties (Cheney & Seyfarth 1988). Skeptical responding has been observed in other primates and ground squirrels too (Wheeler & Hammerschmidt 2013; Hare & Atkins 2001).
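As a purely illustrative toy model (my own sketch, not drawn from the primate studies themselves), skeptical responding can be pictured as a receiver keeping a per-signaler track record and discounting alarms from signalers with a history of false calls:

# Toy model of "skeptical responding": the receiver tracks each signaler's history
# and discounts alarms from signalers who have cried wolf too often.
# (Illustrative only; the parameters and structure are my own assumptions.)
from collections import defaultdict
import random

random.seed(1)

history = defaultdict(lambda: [1, 1])  # per-signaler [confirmed, false] counts (Laplace prior)

def credibility(signaler):
    confirmed, false = history[signaler]
    return confirmed / (confirmed + false)

def respond_to_alarm(signaler, threshold=0.5):
    """Receiver flees only if the signaler's track record is credible enough."""
    return credibility(signaler) >= threshold

def observe_outcome(signaler, predator_present):
    """After the fact, update the signaler's track record."""
    history[signaler][0 if predator_present else 1] += 1

# One mostly honest signaler and one that mostly gives false alarms.
for _ in range(50):
    observe_outcome("honest", predator_present=random.random() < 0.9)
    observe_outcome("deceptive", predator_present=random.random() < 0.2)

print(round(credibility("honest"), 2), respond_to_alarm("honest"))        # high credibility -> flee
print(round(credibility("deceptive"), 2), respond_to_alarm("deceptive"))  # low credibility -> ignore

The vervet findings suggest that the real bookkeeping is finer-grained still, tracking what a call means as well as who produced it.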

54Just like animal communication more generally, human communication is an adaptive trait that benefits sender and receiver. Human communication is subject to the same general evolutionary pressures as any animal communication system. Accordingly, we should expect human communication to exhibit important similarities to animal communication systems. Spoken human language is a paradigm example of conventional communication. An assertion’s content is arbitrarily associated with its form and human speech is very cheap to produce. Thus the question arises, what prevents senders from dishonestly asserting to their own advantage? Of course, not infrequently humans do lie and mislead (Feldman, Forrest & Happ 2002; Weiss & Feldman 2006; Vrij 2008). So more specifically the question is, what prevents senders from dishonestly asserting enough to destabilize the practice?

55In light of all the observational and behavioral data discussed in the preceding chapters, I propose that the answer is social policing of an information constraint. The information constraint is to have detected or discovered the fact in question. Knowledge just is a true belief manifesting one’s powers to detect, discover, or remember a fact (Reid 1764/1997; Greco 2010; Turri 2015b; Turri in press d; Turri in press f). So the information constraint is knowledge, as the researchers quoted above realize. Each part of this proposal has precedent in animal communication more generally. On the one hand, social policing is a mechanism for stabilizing conventional communication systems that has evolved in many species. On the other hand, information-constrained signaling is a standard feature of many animal communication systems. I propose that the human practice of assertion is characterized by a combined application of these two ancient themes: a socially policed knowledge rule. Our following this rule stabilizes and sustains the practice of assertion. In this sense, the knowledge rule helps constitute our social practice of assertion.

56We test for honesty and accuracy by challenging assertors with questions that explicitly refer to or imply knowledge (see the evidence on challenges in Chapter 1). If the situation calls for it, we will make observations, calculations or consultations of our own to verify the statement. The cost of dishonest or inaccurate assertion is normally a diminished reputation. A dishonest assertion or an outright lie can earn you the label of “liar” and the hostility and disadvantages that go along with it. More serious consequences are possible, including resorting to legal or violent means. A pattern of well-intentioned but ignorant assertions will lower people’s estimation of your competence and lead them to trust you less.

57Reputational costs need not result from an explicit invocation of the relevant rule. As noted above, monkeys and ground squirrels lose trust in previously inaccurate signalers. But presumably these animals are not conscious of the behavioral rule they are following. Similarly, human infants express surprise when someone falsely identifies an object, just as they express surprise when someone correctly identifies an object that they have not seen (Koenig & Echols 2003; Onishi & Baillargeon 2005; Baillargeon, Scott & He 2010). But, again, presumably human infants are not conscious of the behavioral rule they are following. Even human adults, who can become explicitly aware of the rule, often make normative social judgments implicitly and automatically (Fiske & Taylor 2012: ch. 2).

58Although we can follow rules implicitly or unconsciously, like human infants and our animal cousins, we do not always follow rules that way. Three and four year old human children spontaneously keep track of speakers’ track records of accuracy and modulate their subsequent decisions and trust of informants accordingly (Koenig, Clement & Harris 2004; Koenig & Harris 2005; Birch, Vauthier & Bloom 2008). By this age children learn better from people to whom they attribute knowledge and their learning is “based on judgments about speakers’ knowledge states” (Sabbagh & Baldwin 2001: 1067). By this age children also cite knowledge and ignorance to explain people’s verbal performances. When asked why someone was “not good at answering questions,” even three year olds say that it is because the person “didn’t know” (Koenig & Harris 2005: 1266 ff.). And when asked why they are good at answering questions, children will say, “Because I know.” Human adults go beyond even this impressive level of awareness of the content of the rules to an understanding of the conditions that give rise to practices with those very rules. Human practices are extraordinary in part because of our ability to reflect upon, identify, and think critically about the rules that structure them.

59We humans are unique in our range of expressive powers. We can communicate with one another about a limitless number of topics, both real and imagined. But this does not imply that the rules which structure and sustain our communicative practices are unique. We face problems similar to those faced by other social species, and the structure of these problems may force natural selection to favor similar solutions repeatedly. When it comes to sustaining communication systems, knowledge is an ancient solution.

60Before concluding this chapter, I would like to comment briefly on the conception of knowledge presupposed by this hypothesis. Earlier I defined knowledge as a true belief manifesting one’s powers to detect, discover, or remember a fact. However, “belief” might not be the right word here. “Representation” might be better. I say this for three reasons. First, in some cases, people reliably attribute knowledge at higher rates than they attribute belief (Myers-Schulz & Schwitzgebel 2013; Murray, Sytsma & Livengood 2012). Second, in many perfectly ordinary contexts, people’s knowledge attributions are not based on belief attributions — the latter do not predict or guide the former (Turri & Buckwalter in press; Turri, Buckwalter & Rose under review). Third, there is abundant evidence that non-human primates attribute knowledge to others, but there is currently no clear evidence that they attribute beliefs or even have the concept of belief (Kaminski, Call & Tomasello 2008; Hare, Call & Tomasello 2001; Flombaum & Santos 2005; Santos, Nissen & Ferrugia 2006; Melis, Call & Tomasello 2006; Marticorena, Ruiz, Mukerji, Goddu & Santos 2011; Martin & Santos 2014; Martin & Santos in press). In light of these three points, it starts to seem unlikely that knowledge requires belief, at least as those categories are understood in ordinary social cognition. Instead, knowledge seems to require a “thinner,” more minimal representation of the relevant fact (Buckwalter, Rose & Turri 2015). This conception of knowledge is not unique to humans but instead seems to be part of primate social cognition more generally.

61If our concept of knowledge is part of the primate social-cognitive system, then perhaps we can gain further insight into why knowledge, rather than belief or evidence, is so central to normative social cognition in humans. Our normative practices did not emerge out of thin air. They are part of our heritage and so must be based on concepts and categories that our ancestors applied and were sensitive to, at least implicitly. As already mentioned, there is considerable evidence that non-human primates attribute knowledge to others and, moreover, that they use these attributions to guide decision-making. But there is no clear indication that non-human primates ever attribute beliefs. Similarly, there is no indication that non-human primates think of others as having evidence, at least insofar as this is separable from an attribution of knowledge. Accordingly, we might reasonably expect that human normative social cognition would, at least in the first place, rely on attributions of knowledge rather than belief or evidence.

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. Other elements (illustrations, imported supplementary files) are “All rights reserved” unless otherwise stated.
