
The "Red Faces" problem for Bayesian Conditionalization


A short March 2024 article from Jon Williamson gives a cute puzzle for Bayesian conditionalization. Since it fits in with my recent reading and posts, I think it is worth a brief discussion.


Due to the brevity of the article, I suggest reading it first, though I'll paraphrase it as I understand it. Any errors are, of course, mine.


The Red Faces Puzzle


Consider the following propositions:

X = "There is a fair, six sided die to be rolled."

E = "Each face of the die is colored either red, blue, or green."

A = "The outcome of rolling the die will be >= 3."

R = "When the die is rolled, the uppermost face is red."


Let us initially accept the hypothesis that conditional beliefs are identical to conditional probabilities (CBCP):


CBCP: For any belief function B, there is some probability function P such that

B(A|C) = P(A|C) = P(A ^ C) / P(C)

for all A and C with P(C) > 0.


Based on this, a number of probability assignments seem reasonable. For example, we might say that

P(A|X) = 2/3 and P(R|E) = 1/3

are reasonable initial beliefs, perhaps based on the symmetry of our information.


It also seems eminently reasonable to say something like (1*):

P(A|X ^ E) = 2/3

That is, given that the die is fair and that there are 4 ways out of 6 for the die to come up >=3, the colors of the faces shouldn't matter here, so it's reasonable to set P(A|X ^ E) to match P(A|X) = 4/6 = 2/3.


Great. Now, it also seems reasonable to set (2*):

P(R|X ^ E) = 1/3

That is, it's reasonable to set P(R|X ^ E) to match P(R|E) = 1/3 since, given the symmetry of our information, whether or not the die is fair isn't relevant to whether one color will show up more often than the others.


Finally, let us consider that we gain the piece of information that the red faces on the die are exactly those that are >=3. That is, we learn (3*):

R <-> A (the uppermost face is red if and only if the outcome is >=3)

Given this new information, it seems reasonable, based on symmetry, to set (4*):

P(A|X ^ E ^ (R <-> A)) = 2/3

That is, if the die is fair, the faces are each colored red, green, or blue, and the red faces are precisely the ones that are >=3, then it seems reasonable to think that the probability of rolling a value >=3 should still match the 2/3 from (1*).


The problem, though, is that the assignments in (1*)-(4*) are incompatible as long as they are expressed as conditional probabilities. As Williamson says, there is no coherent probability function that satisfies all of these conditions. So, if we accept that these assignments are rational, which certainly seems eminently plausible, we cannot identify conditional belief with conditional probability as described by CBCP.
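
To see the clash concretely, here is a minimal sketch of the incoherence, using my reconstruction of (1*)-(4*) above; the atom names a, b, c, d are my own shorthand, not Williamson's notation:

```latex
% Atoms of the algebra generated by A and R, all conditional on X ^ E:
%   a = P(A & R | X ^ E),    b = P(A & ~R | X ^ E),
%   c = P(~A & R | X ^ E),   d = P(~A & ~R | X ^ E).
\begin{align*}
  a + b &= 2/3 && \text{from (1*)}\\
  a + c &= 1/3 && \text{from (2*)}\\
  d &= 1 - (a + b) - c = 1/3 - c = a && \text{since } c = 1/3 - a
\end{align*}
% Conditionalizing on (3*), i.e., on R <-> A, then forces
\[
  P(A \mid X \wedge E \wedge (R \leftrightarrow A))
    = \frac{P(A \wedge R \mid X \wedge E)}{P(R \leftrightarrow A \mid X \wedge E)}
    = \frac{a}{a + d} = \frac{a}{2a} = \frac{1}{2} \neq \frac{2}{3},
\]
% contradicting (4*) whenever a > 0 (and leaving it undefined when a = 0).
```

So any probability function satisfying (1*) and (2*) must assign P(A|X ^ E ^ (R <-> A)) = 1/2, not the 2/3 that (4*) demands.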


Bad Conditionalization and a Way Out


A Bayesian way out is described in Williamson 2010 (and reiterated briefly in the article): Bayesians ought to concede that while conditionalization is commonly fruitful, there are conditions under which it fails to provide reasonable results. Maximum entropy inference (MaxEnt), as advocated by the Objective Bayesian, has the advantage of matching Bayesian conditionalization in the good cases while providing sane results in the conditions under which conditionalization fails. That is, MaxEnt provides a Bayesian procedure that denies the identification of conditional belief with conditional probability while making sense of why we might have believed it.


Williamson 2010 (Section 4.2, citing earlier results from Peter Williams (1980) and Teddy Seidenfeld (1986)) describes four criteria which, if met, force the MaxEnt inference to match conditionalization, and under whose failure MaxEnt provides the superior inference (according to Williamson). In the case of the Red Faces problem, it is not safe for the Bayesian to conditionalize on the information in (3*) because doing so violates the fourth criterion: that the result of conditionalizing on the new information (3*) continues to satisfy the constraints specified by (1*) and (2*). Since conditionalizing on (3*) does not respect the existing constraints from (1*) and (2*), conditionalization is an inappropriate way to set rational beliefs here.


However, the Objective Bayesian does not accept CBCP, identifying rational belief instead with unconditional probability distributions that maximize entropy subject to the constraints imposed by the evidence. Thus, we can accept unconditional versions of (1*), (2*), and (3*) without conflict, avoiding 'red face' embarrassment.
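
To make that alternative concrete, here is a minimal sketch, under my reading of the post, of recomputing the MaxEnt distribution from the current constraints rather than conditionalizing. The four-atom encoding, the `maxent` helper, the scipy solver choice, and the assumption that the equivocation-based P(R) = 1/3 lapses once the actual coloring is learned are all my own illustration, not Williamson's formalism:

```python
import numpy as np
from scipy.optimize import minimize

def maxent(constraints):
    """Maximize Shannon entropy over the atoms (A&R, A&~R, ~A&R, ~A&~R),
    subject to linear equality constraints given as (coefficients, value)."""
    def neg_entropy(p):
        q = np.clip(p, 1e-12, 1.0)  # avoid log(0) at the boundary
        return float(np.sum(q * np.log(q)))

    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]  # probabilities sum to 1
    for coeffs, value in constraints:
        cons.append({"type": "eq",
                     "fun": lambda p, c=np.array(coeffs, float), v=value: c @ p - v})
    result = minimize(neg_entropy, x0=np.full(4, 0.25), method="SLSQP",
                      bounds=[(0.0, 1.0)] * 4, constraints=cons)
    return result.x  # [P(A&R), P(A&~R), P(~A&R), P(~A&~R)]

# Before learning (3*): unconditional versions of (1*) and (2*).
p = maxent([([1, 1, 0, 0], 2 / 3),    # P(A) = 2/3
            ([1, 0, 1, 0], 1 / 3)])   # P(R) = 1/3
print("pre-(3*)  P(A) =", p[0] + p[1])  # ~0.667; A and R come out independent

# After learning (3*): A and R coincide, so the mixed atoms vanish,
# while the fair-die constraint P(A) = 2/3 remains in force.
q = maxent([([1, 1, 0, 0], 2 / 3),    # P(A) = 2/3
            ([0, 1, 0, 0], 0.0),      # P(A & ~R) = 0
            ([0, 0, 1, 0], 0.0)])     # P(~A & R) = 0
print("post-(3*) P(A) =", q[0] + q[1])  # ~0.667, matching (4*)
```

The point of the sketch is that the post-(3*) distribution is recomputed from the updated constraint set rather than obtained by conditionalizing, so P(A) stays at 2/3 as (4*) demands, where conditionalization would have forced 1/2.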


Consequences


Little puzzles like this, which are comparatively simple to explain, are, in my opinion, useful tools for trying to at least crack an opposing point of view. In this case, the target is Bayesians who are committed to conditionalization, which usually includes subjectivists and empirical Bayesians, but which also (sometimes) includes folks like Jaynes, who occasionally endorsed conditionalization. (I can certainly see some frequentists erroneously trying to use this against Bayesianism in general as well, though they should check the planks in their own eyes...)


Since the Objective Bayesian already accepts MaxEnt inference as the more fundamental inferential form, they naturally avoid the Red Faces problem. However, subjectivists may have a harder time, since they tend to be the strongest holdouts in favor of conditionalization as a rational norm. If they wish to retain their commitment, they may have to argue that there's something wrong with the way the agent in the puzzle assigns their beliefs. That is just not an option for most subjectivists: even if they might prefer assignments different from the equivocal ones used in the puzzle, they don't have grounds to say that these assignments are irrational on their otherwise very permissive view, which is what would be needed to defuse the problem.


The problem is perhaps even starker for "empirically-minded" Bayesians who accept a Calibration norm together with conditionalization, since there is no way to conditionalize so as to match the empirical chances here subject to the constraint in (3*). If these Bayesians then reject conditionalization in favor of a non-conditionalizing, MaxEnt-like procedure, it's not clear to me that they can convincingly continue to reject Equivocation. Of course, if they accept Equivocation too, they're just Objective Bayesians.


Other Thoughts


Over the past several blog posts, as I've been reading more of Williamson's work and starting on Rosenkrantz's movements in this direction from almost 50 years ago, I've mentally gone back and forth on some common forms of Bayesian practice, like that espoused by Gelman, which lean toward Calibration via things like model checking to incorporate frequency information (if only via post hoc selection bias). I think puzzles like this explain why a cleaner break from the otherwise quasi-subjectivist Bayesian statistical practice is necessary, at least in terms of foundations. After all, model checks against frequentist criteria certainly look like an ad hoc fix to make Bayesianism appropriate for the sciences, and together with the problems with conditionalization itself, it starts to look like Bayesian statistics, as practiced today, is overdue for some of the more foundational revisions proposed nearly a half-century ago.



References:


Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis (3rd ed.). CRC Press.


Jaynes, E. T. (2003). Probability Theory: The Logic of Science (G. L. Bretthorst, Ed.). Cambridge University Press.


Rosenkrantz, R. D. (1978). Inference, Method and Decision: Towards a Bayesian Philosophy of Science. Reidel.


Seidenfeld, T. (1986). Entropy and uncertainty. Philosophy of Science, 53(4), 467-491.


Williams, P. M. (1980). Bayesian conditionalisation and the principle of minimum information. British Journal for the Philosophy of Science, 31(2), 131-144. doi:10.1093/bjps/31.2.131.


Williamson, J. (2010). In Defence of Objective Bayesianism. Oxford University Press.


Williamson, J. (2024). 'Conditional beliefs aren't conditional probabilities', The Reasoner. Available at: http://blogs.kent.ac.uk/thereasoner/files/2024/03/TheReasoner-182.pdf.
