

It seems natural to think that sometimes distinct responses to the same background evidence can be fully and equally rational. For instance, given the same evidence, two rational people might have different credences in the proposition that Hillary Clinton will be elected president in 2016. If this permissivist view is correct, it appears that two agents can reasonably maintain a disagreement. I provide two arguments to the contrary: if we take permissive rationality seriously, then we should be conciliatory in the face of disagreement. 


This paper motivates and develops an epistemic decision theory for partial believers. We observe that purely epistemic actions themselves sometimes affect the state of the external world. For example, suppose that if you have credence x that you'll be able to leap successfully across a chasm, then the objective chance you avoid falling to your death is also x. You'll be perfectly accurate only if you become certain you'll make the jump or certain you won't make the jump. If accuracy is the sole epistemic good, then it appears that these actions are the most epistemically rational. We argue that this verdict is mistaken. While some version of causal decision theory is appropriate for practical rationality, it's inappropriate for epistemic rationality. Pragmatic decisions have a world-to-mind direction of fit, whereas epistemic decisions have a mind-to-world direction of fit. Therefore, even though accuracy is all that's ultimately epistemically valuable, the correct epistemic decision theory does not mandate that an agent bring about states of the world that increase her accuracy. 
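A minimal way to see the worry, assuming for illustration that inaccuracy is measured by the Brier score (the abstract does not fix a particular measure): if your credence that you will make the jump is x, and your having that credence makes the chance of success x, then your expected inaccuracy is

\[ x\,(1 - x)^2 + (1 - x)\,x^2 \;=\; x(1 - x), \]

which is minimized only at x = 0 and x = 1. An agent who simply brings about whichever credal state maximizes expected accuracy is thus pushed to full certainty one way or the other, regardless of her evidence; this is the verdict the paper argues is mistaken.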


We use a theorem from Schervish (1989) to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she’ll quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
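For reference, and not as part of Schervish's theorem itself: a scoring rule s(q, i), giving the loss of credence q when the proposition's truth value is i, is strictly proper just in case each credence uniquely minimizes expected loss by its own lights,

\[ p\,s(p, 1) + (1 - p)\,s(p, 0) \;<\; p\,s(q, 1) + (1 - p)\,s(q, 0) \quad \text{for all } q \neq p. \]

The Brier and logarithmic scores are familiar examples. Roughly, Schervish's representation shows how each such rule corresponds to a way of weighting the simple betting decisions the agent might face, which is the bridge between accuracy and practical success exploited here.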



I exploit formal measures of accuracy to prove two theorems. First, an agent should expect to give her peers equal weight. On one natural understanding of 'peer', that means an agent should expect to split the difference. Second, I show that splitting the difference will nevertheless tend to result in overly uncertain credences: credences too far from 0 or 1. Furthermore, if the agent takes herself and her advisor to be reliable, she should tend to give the party who turned out to have a stronger opinion more weight. These theorems combine to constrain both synchronic expectations and long-run behavior. An agent's response to peer disagreement should over the course of many disagreements average out to equal weight. However, in any particular disagreement, her response should usually deviate from equal weight and depend on the actual credences she and her advisor report. 
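A minimal sketch of the key terms, in the simple two-person case (the theorems themselves are stated precisely in the paper): where my credence is c_1 and my advisor's is c_2, splitting the difference means adopting

\[ c_{\text{new}} \;=\; \tfrac{1}{2}\,c_1 + \tfrac{1}{2}\,c_2. \]

For instance, if I report 0.9 and my advisor reports 0.6, splitting the difference leaves us at 0.75; the second result says that when we both regard ourselves as reliable, the rational response will typically sit closer to the more opinionated 0.9, even though over many disagreements such responses average out to equal weight.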


A number of recent arguments purport to show that imprecise credences are incompatible with accuracy-first epistemology. If correct, this conclusion suggests a conflict between evidential and alethic epistemic norms. In the first part of the paper, I claim that these arguments fail if we understand imprecise credences as indeterminate credences. In the second part, I explore why agents with entirely alethic epistemic values may end up in an indeterminate credal state. Following William James, I argue that there are many distinct alethic values a rational agent can have. Furthermore, such an agent is rationally permitted not to have settled on one fully precise value function. This indeterminacy in value will sometimes result in indeterminacy in epistemic behaviour—i.e., because the agent’s values aren’t settled, what she believes may not be either. 


Some propositions are more epistemically important than others. Further, how important a proposition is is often a contingent matter—some propositions count more in some worlds than in others. Epistemic Utility Theory cannot accommodate this fact, at least not in any standard way. For EUT to be successful, legitimate measures of epistemic utility must be proper, i.e., every probability function must assign itself maximum expected utility. Once we vary the importance of propositions across worlds, however, normal measures of epistemic utility become improper. I argue there isn’t any good way out for EUT. 
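To state the constraint at issue: propriety requires that every probability function p expects itself to do at least as well, in epistemic utility, as any rival q,

\[ \mathbb{E}_p\big[U(p, w)\big] \;\geq\; \mathbb{E}_p\big[U(q, w)\big] \quad \text{for all } q. \]

One natural way to model contingent importance, offered here only as an illustrative assumption, is to attach world-dependent weights \lambda_w(X) to each proposition X inside U; the abstract's claim is that once such weights vary across worlds, the usual measures no longer satisfy the inequality above.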


Leitgeb and Pettigrew argue that (1) agents should minimize the expected inaccuracy of their beliefs, and (2) inaccuracy should be measured via the Brier score. They show that in certain cases, these claims require an alternative to Jeffrey Conditionalization. I claim that this alternative is an irrational updating procedure and that the Brier score, and quadratic rules generally, should be rejected as legitimate measures of inaccuracy. 
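For reference, both notions in play have standard textbook formulations. The Brier score of a credence function c at a world w is

\[ B(c, w) \;=\; \sum_{X} \big(c(X) - w(X)\big)^2, \]

where w(X) is 1 if X is true at w and 0 otherwise; and Jeffrey Conditionalization says that when experience shifts your credences over a partition \{E_i\} to new values q(E_i), your new credence in any proposition A should be

\[ c_{\text{new}}(A) \;=\; \sum_i c_{\text{old}}(A \mid E_i)\, q(E_i). \]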


This is a paper for a symposium on Richard Pettigrew's "Accuracy and the Laws of Credence." Pettigrew offers new axiomatic constraints on legitimate measures of inaccuracy. His most interesting axiom, which he calls Decomposition, stipulates that legitimate measures of inaccuracy evaluate a credence in part based on its level of calibration at a world. I argue that if calibration is valuable, as Pettigrew claims, then this fact is an explanandum for accuracy-first epistemologists, not an explanans, for three reasons. First, the intuitive case for the importance of calibration isn’t as strong as Pettigrew believes. Second, calibration is a perniciously global property that both contravenes Pettigrew’s own views about the nature of credence functions themselves and undercuts the achievements and ambitions of accuracy-first epistemology. Finally, Decomposition introduces a new kind of value, compatible with but separate from accuracy proper, in violation of Pettigrew’s alethic monism. 
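Pettigrew's official definitions are more involved, but a minimal gloss of the property at issue: a credence function c is perfectly calibrated at a world w when, for each value x it assigns, the proportion of propositions assigned credence x that are true at w is exactly x,

\[ \frac{\big|\{X : c(X) = x \text{ and } X \text{ is true at } w\}\big|}{\big|\{X : c(X) = x\}\big|} \;=\; x \quad \text{for every } x \text{ in the range of } c. \]

Decomposition then requires legitimate inaccuracy measures to reward closeness to calibration in roughly this sense, and it is the global, credence-function-wide character of that property that the second objection targets.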


Evidential and Causal Decision Theory are the leading contenders as theories of rational action, but both face fatal counterexamples. We present some new counterexamples, including one in which the optimal action is causally dominated. We also present a novel decision theory, Functional Decision Theory (FDT), which simultaneously solves both sets of counterexamples. Instead of considering which physical action of theirs would give rise to the best outcomes, FDT agents consider which output of their decision function would give rise to the best outcome. This theory relies on a notion of subjunctive dependence, where multiple implementations of the same mathematical function are considered (even counterfactually) to have identical results for logical rather than causal reasons. Taking these subjunctive dependencies into account allows FDT agents to outperform CDT and EDT agents in, e.g., the presence of accurate predictors. While not necessary for considering classic decision theory problems, we note that a full specification of FDT will require a nontrivial theory of logical counterfactuals and algorithmic similarity. 


Higher-order evidence is evidence about whether you have rightly or wrongly handled your other evidence according to rational norms. According to the accommodationist position, you should generally take such evidence into account by adjusting your credences in light of your own potential irrationality. Although accommodationism is intuitive, it recommends some odd behavior, such as violating conditionalization and Good’s Theorem. I argue that, on the accommodationist picture, some higher-order evidence is best understood as a kind of information loss akin to forgetting, which results in the same type of epistemic behavior. 
