This event was financed by the PhilPharm project, which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme – G.A. n. 639276.
2019 Program
Friday, 30th August
Chair: Barbara Osimani
9:00-9:10 Greetings by the Dean of the Faculty, Prof. Marcello D'Errico
9:10-9:20 Greetings by the Dean of the Department, Prof. Adriano Tagliabracci
9:30-10:30 Keynote Speech: Teddy Seidenfeld - Some Finitely Additive (Statistical) Decision Theory or How Bruno de Finetti might have channeled Abraham Wald
10:30-11:00 Coffee Break
Chair: William Peden
11:00-11:30 Michele Crescenzi - Learning to Agree Over Large State Spaces
11:30-12:00 Brett Topey - Expected Accuracy and the Planning Framework
12:00-12:30 Adrian Erasmus - Expected Utility, Inductive Risk, and the Consequences of P-Hacking
12:30-13:30 Lunch Break
Chair: Giacomo Sillari
13:30-14:00 Momme von Sydow - Bayesian Logic as Generalised Bayesian Occam’s Razor Vindicates the Lockean Thesis
14:00-14:30 Liam Bright - Can A Veritist Justify A Universal Prohibition on Fraud?
14:30-15:00 Silvia Milano - Ur-prior Conditionalization, t-Independence and Diachronic Rationality
15:00-15:30 Serena Doria - Extensions of Coherent Upper Conditional Previsions Defined by Hausdorff Outer Measures to Unbounded Random Variables
15:30-16:00 Coffee Break
Chair: Francesco De Pretis
17:30-18:00 Oliver Braganza - Proxyeconomics, the Inevitable Corruption of Proxy-based Competition
18:00-18:30 Jimin Kwon - A Hybrid Approach to the Gene Drive Policy: Using Risk-Weighted Expected Utility Theory as a Version of the Precautionary Principle
18:30-19:00 Nicola Matteucci - On Scientific Misconduct and Captured Regulation: Assessing the Italian Policy-making on Gambling
19:30-Late Conference Dinner
Saturday, 31st August
Chair: Jan Sprenger
9:30-10:00 Pavel Janda - Accuracy-based Norms of Rationality for Epistemic Games with Imperfect Recall
10:00-10:30 Giacomo Sillari - Cognition, Coordination, and Common Belief
10:30-11:00 Stefano Bonzio - Sure-wins under Coherence
11:00-11:30 Coffee Break
Chair: Liam Bright
11:30-12:00 Miriam Bowen - Suspended Judgment: A Solution to the Probabilistic Liar
12:00-12:30 Richard Lohse - On Richard Pettigrew’s Latest Accuracy-first Argument for Probabilism
12:30-13:00 Tamaz Tokhadze - Evidence and Background Information
13:00-14:00 Lunch Break
Chair: Mantas Radzvilas
14:00-15:00 Keynote Speech: Andrés Perea - Common Belief in Rationality in Games with Unawareness
15:00-15:30 Coffee Break
Chair: Stefano Bonzio
15:30-16:00 Mantas Radzvilas - Optimal Lying and Lie-Detection in Bayesian Persuasion Games with Costs and Punishments
16:00-16:30 William Peden - Predictive Confirmation and the Paradox of the Ravens
16:30-17:00 Barbara Osimani – Science as a Signaling Game: Statistical Evidence in Strategic Environments
17:30-19:30 Guided Tour of Ancona
Keynote Talks
Andrés Perea (Maastricht)
Common belief in rationality in games with unawareness
Abstract: This talk will be about static games with unawareness, where players may be unaware of some of the choices that can be made. That is, different players may have different views on the game. We propose an epistemic model that encodes players' belief hierarchies on choices and views, and use it to formulate the basic reasoning concept of common belief in rationality. We also design a recursive elimination procedure that yields for every possible view the choices that can rationally be made under common belief in rationality.
Teddy Seidenfeld (CMU)
Some Finitely Additive (Statistical) Decision Theory or How Bruno de Finetti might have channeled Abraham Wald
Abstract: We examine general decision problems with loss functions that are bounded below. We allow the loss function to assume the value infinity. No other assumptions are made about the action space, the types of data available, the types of non-randomized decision rules allowed, or the parameter space. By allowing prior distributions and the randomizations in randomized rules to be finitely additive, we prove very general complete class and minimax theorems. Specifically, under the sole assumption that the loss function is bounded below, we show that every decision problem has a minimal complete class and all admissible rules are Bayes rules. We also show that every decision problem has a minimax rule and a least-favorable distribution and that every minimax rule is Bayes with respect to the least-favorable distribution. Some special care is required to deal properly with infinite-valued risk functions and integrals taking infinite values. The talk is based on the paper "What Finite Additivity Can Add to Decision Theory" by Mark Schervish (CMU), Teddy Seidenfeld (CMU), Rafael Stern (Universidade Federal de São Carlos), and Jay Kadane (CMU), forthcoming in Statistical Methods and Applications.
Jan Sprenger (Turin)
Conditional Degrees of Belief and Bayesian Inference with Idealized Models
Abstract: Bayesian reasoners standardly interpret the probability of a statistical hypothesis, p(H), as their subjective degree of belief that this hypothesis is true. This "textbook Bayesianism", however, does not square well with the practice of statistical inference in science. In this contribution, I expound this dilemma and I develop a solution based on what I call the suppositional analysis of conditional degrees of belief. As I show, the analysis has important repercussions for all of Bayesian inference, including our interpretation of Bayes's Theorem.
Contributed Talks
Stefano Bonzio (Marche Polytechnic University)
Sure-wins under coherence
Abstract
In a series of seminal contributions [1, 2], Bruno de Finetti provided a rather general justification for the probabilistic representation of rational beliefs. To this end, he identifies an agent's degree of belief that an event occurs with the price of a gamble on that event in a suitably defined betting situation, described below (see also [3]).
Let us fix a finite boolean algebra A and a finite subset Φ = {a1, . . . , an} of A, the set of events. A bookmaker B publishes a book, i.e., a complete assignment β : Φ → [0, 1]. A gambler G chooses stakes σ1, . . . , σn ∈ R and, in order to bet over the event ai (with i = 1, . . . , n) pays σi · β(ai) to B: G, in the possible world v (a homomorphism from A to {0, 1}), will get σi (from B), provided that the event ai occurs (i.e. v(ai) = 1), and 0 otherwise. Notice that stakes may be negative. A negative stake σi means reversing the bet (or, betting against ai), namely receiving (from B) −σi · β(ai) and paying −σi in case ai takes place.
The book β is said to be coherent if G has no choice of (real-valued) stakes σ1, . . . , σn such that, for every valuation v,

σ1 · (β(a1) − v(a1)) + · · · + σn · (β(an) − v(an)) < 0.

The left-hand side of the above inequality expresses the bookmaker's balance. Therefore a book is coherent if it prevents B from what is known in the literature as a sure loss. De Finetti's theorem, in its algebraic formulation, states that a book β is coherent if and only if it extends to a finitely additive probability measure¹ over the Boolean algebra generated by the set of events.
In the seminar we will focus on de Finetti's theorem in both its algebraic and geometrical aspects. Taking these results for granted, we will be concerned with the problem of establishing whether a gambler, playing on two or more books β1, . . . , βm on the same set of events, has a sure-win strategy, i.e., whether there exists a choice of stakes which, once suitably placed, ensures him a strictly positive gain in all possible worlds. A sure-win strategy for the gambler is defined as follows.
Definition 1. Let β1, . . . , βm be coherent books on Φ = {a1, . . . , an}. We say that a gambler has a sure-win strategy on β1, . . . , βm if for each event ai ∈ Φ there is a book βw(i) amongst β1, . . . , βm such that the book βw : ai → βw(i)(ai) is incoherent.
In other words, the gambler is allowed to place crossed bets on the different books β1, . . . , βm, respecting the rule that, for each event ai, he has to choose a book (among β1, . . . , βm) to bet with. If β1, . . . , βm are coherent books, we say that they are jointly coherent if no gambler has a sure-win strategy on them. In case m = 2, we will also say that β1 is jointly coherent with β2 if β1, β2 are jointly coherent.
In the talk, we will provide a geometric and an algebraic characterization of jointly coherent books. In particular, we will describe the space of books which are jointly coherent with a fixed book β over a set of events Φ.
¹ Recall that a finitely additive probability measure over a Boolean algebra A is a map P : A → [0, 1] such that P(1) = 1 and P(a ∨ b) = P(a) + P(b), provided that a ∧ b = 0.
References
[1] B. de Finetti. Sul significato soggettivo della probabilità. Fundamenta Mathematicae, 17:289–329, 1931.
[2] B. de Finetti. Theory of Probability, vol. 1. John Wiley and Sons, New York, 1974.
[3] T. Flaminio, L. Godo, and H. Hosni. On the Logical Structure of de Finetti’s Notion of Event. Journal of Applied Logic, 12(3):279–301, 2014.
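As an illustrative aside (not part of the abstract): de Finetti's theorem makes coherence easy to check numerically, since a book β on a finite set of events is coherent exactly when the vector (β(a1), . . . , β(an)) lies in the convex hull of the valuation vectors. A minimal sketch in Python, assuming SciPy is available; the toy worlds and events are invented for the example.

```python
# Sketch: check coherence of a book via de Finetti's theorem.
# Events are subsets of a finite set of possible worlds; beta is coherent
# iff (beta(a_1), ..., beta(a_n)) is a convex combination of the valuation
# vectors (v(a_1), ..., v(a_n)), one for each possible world v.
from scipy.optimize import linprog

def is_coherent(worlds, events, beta):
    """events: dict name -> set of worlds in which the event occurs.
    beta: dict name -> betting quotient in [0, 1]."""
    names = sorted(events)
    # One variable (convex weight) per world; one equality row per event.
    A_eq = [[1.0 if w in events[a] else 0.0 for w in worlds] for a in names]
    A_eq.append([1.0] * len(worlds))                      # weights sum to 1
    b_eq = [beta[a] for a in names] + [1.0]
    res = linprog(c=[0.0] * len(worlds), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * len(worlds))
    return res.success                                    # feasible <=> coherent

worlds = ["w1", "w2", "w3"]
events = {"a1": {"w1", "w2"}, "a2": {"w2", "w3"}, "a1 and a2": {"w2"}}

print(is_coherent(worlds, events, {"a1": 0.6, "a2": 0.7, "a1 and a2": 0.3}))  # True
print(is_coherent(worlds, events, {"a1": 0.6, "a2": 0.7, "a1 and a2": 0.1}))  # False
```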
Miriam Bowen (University of Leeds)
Suspended Judgment: A solution to the Probabilistic Liar.
Abstract
The following are examples of self-reference:
(1) What sentence (1) says is not true.
(2) What sentence (2) says is true.
These are examples of self-reference involving truth. Sentence (1) expresses a version of the Liar Paradox: any truth value assigned to it swiftly leads to contradiction. Sentence (2) expresses the Truth-Teller sentence, where any specific assignment of truth status is apt to seem unprincipled. There are also analogous examples of self-reference that involve belief:
(3) I believe what sentence (3) says.
(4) I do not believe what sentence (4) says.
These are examples of doxastic self-reference that involve flat out belief states. My main focus here will be doxastic self-reference that involves degrees of belief: probabilistic self-reference. In this talk I will argue that we should appeal to suspended judgment (where this is understood as having imprecise credences) as a solution to probabilistic self-reference.
Probabilistic self-reference occurs when the truth or the chance of an event is dependent on the degree of belief (credence) of the agent. A particularly problematic probabilistic self-referential scenario is that of the Probabilistic Liar (which I’ll refer to as ProbLiar), a probabilistic analogue of the Liar Paradox.
I’ll refer to the following example adapted from Caie (2013) as ProbLiar:
(α) Cr_a(α) < 0.5
where the proposition expressed by (α) is true if the agent's credence in (α) is less than 0.5, and false otherwise.
What credence should the agent have in (α)?
There are intuitive problems with assigning any particular credence to (α). If the agent has a credence less than 0.5 in (α), then (α) is true, so they ought to have a high credence in it. If they do have a high credence in (α), however, this makes (α) false, so they ought to have a low credence in it. An ideally rational agent would know, prior to adopting a credence, the consequences of adopting any particular credence. This instability of belief seems intuitively undesirable.
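Purely as an illustration of the instability just described (my own toy check, not part of the abstract), the map from an agent's credence in (α) to the credence she 'ought' to have has no fixed point:

```python
# Toy check of the instability: if Cr(alpha) < 0.5 then (alpha) is true and the
# ideal credence is 1; otherwise (alpha) is false and the ideal credence is 0.
# This response map has no fixed point, so no precise credence is stable.
def ideal_response(x):
    return 1.0 if x < 0.5 else 0.0

stable = [x / 100 for x in range(101) if ideal_response(x / 100) == x / 100]
print(stable)  # empty list: no stable precise credence on this grid
```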
Moreover, the Probabilistic Liar also gives rise to a contradiction between two plausible norms of rationality: Probabilism and Rational Introspection. I will outline Caie's (2013) presentation of the conflict between Probabilism and Rational Introspection. Caie proposes solving this conflict by rejecting Probabilism to accommodate self-referential probability. Caie's solution, however, is partly based on his intuition that a rational agent failing to have rational introspection is more undesirable than a rational agent holding probabilistically incoherent credences in a proposition and its negation. My intuitions differ from Caie's here. I suggest there is an alternative way of accommodating self-referential probability that does not involve having (probabilistically) incoherent credences.
I consider as a possible solution a doxastic attitude that is not a precise degree of belief: suspended judgment. If we understand suspended judgment as having imprecise credences, then by adopting this attitude towards ProbLiar the agent can avoid the intuitively undesirable instability. We can also weaken Rational Introspection in cases of suspended judgment, thus avoiding the conflict between Probabilism and Rational Introspection in a principled way.
Oliver Braganza (IEECR, University of Bonn)
Proxyeconomics, the inevitable corruption of proxy-based competition
Abstract
When society maintains a competitive system to promote an abstract goal, competition by necessity relies on imperfect proxy measures. For instance, profit is used to measure value to consumers, patient volumes to measure hospital performance, or the Journal Impact Factor to measure scientific value. Here we note that any proxy measure in a competitive societal system becomes a target for the competitors, promoting corruption of the measure, suggesting a general applicability of what is best known as Campbell's or Goodhart's Law. Indeed, prominent voices have argued that the scientific reproducibility crisis or inaction on the threat of global warming represent instances of such competition-induced corruption. Moreover, competing individuals often report that competitive pressures limit their ability to act according to the societal goal, suggesting lock-in. However, despite the profound implications, we lack a coherent theory of such a process. Here we propose such a theory, formalized as an agent-based model, integrating insights from complex systems theory, contest theory, behavioral economics and cultural evolution theory. The model reproduces empirically observed patterns and makes predictions at multiple levels. It further suggests that any system is likely to converge towards an equilibrium level of corruption determined by (i) the information captured in the proxy and (ii) the strength of an intrinsic incentive towards the societal goal. Overall, the theory offers mechanistic insight into subjects as diverse as the scientific reproducibility crisis and the threat of global warming.
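A toy simulation of the claimed dynamics (my own sketch, not the model described above; the parameters proxy_noise, intrinsic and gaming_gain are invented for illustration): agents split effort between the societal goal and gaming the proxy, payoffs combine proxy-based competition with an intrinsic incentive, and lower scorers imitate higher scorers.

```python
# Toy sketch (not the author's model): average 'gaming' of the proxy under
# imitation dynamics, for different strengths of the intrinsic incentive.
import random

def simulate(n_agents=100, rounds=1000, proxy_noise=0.2, intrinsic=0.1,
             gaming_gain=2.0, mutation=0.02, seed=0):
    rng = random.Random(seed)
    gaming = [rng.random() * 0.1 for _ in range(n_agents)]      # start mostly honest
    for _ in range(rounds):
        # Proxy score: real contribution plus inflated credit for gaming, plus noise
        # (more noise means the proxy captures less information about the goal).
        proxy = [(1 - g) + gaming_gain * g + rng.gauss(0, proxy_noise) for g in gaming]
        payoff = [p + intrinsic * (1 - g) for p, g in zip(proxy, gaming)]
        # Pairwise imitation with small mutation: the lower-payoff agent copies the other.
        for _ in range(n_agents):
            i, j = rng.randrange(n_agents), rng.randrange(n_agents)
            loser, winner = (i, j) if payoff[i] < payoff[j] else (j, i)
            gaming[loser] = min(1.0, max(0.0, gaming[winner] + rng.gauss(0, mutation)))
    return sum(gaming) / n_agents                                # average 'corruption'

print(simulate(intrinsic=0.1))   # weak intrinsic incentive: gaming tends to spread
print(simulate(intrinsic=2.0))   # strong intrinsic incentive: gaming stays low
```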
Liam Bright (London School of Economics)
Can A Veritist Justify A Universal Prohibition on Fraud?
Abstract
Sometimes scientists commit fraud. They knowingly enter misinformation into the publication record of science. For instance, this occurs when scientists claim to have gathered data they didn't actually gather, tailored to fit a result they antecedently desired to support. The norms and practice of science treat such behaviour as anathema - a scientist caught engaging in fraud can expect to receive heavy censure, have their fraudulent work withdrawn from the literature, and there is widespread consensus that such behaviour violates standards of acceptable practice. Such censure is relatively independent of a scientist's reasons for committing fraud - be it for fame, glory, money, or sheer will to deceive; it does not matter: fraud is to be punished.
What is more, at first glance, all of this may seem well explained by a generally veritistic social epistemology of science. Veritists assume that scientists or scientific institutions ought to seek the truth, at least about those matters deemed to be of sufficient import to be of scientific interest. Veritists hence wish to promote institutional norms that promote the acquisition and dissemination of truth. With this goal in mind, a universal prohibition on fraud seems easy to explain or justify. Fraudulent claims are by their nature untruths, and their dissemination perpetuates this disvalue.
However, an interesting class of cases challenges this easy consensus that fraud is always wrong and censure-worthy. These are cases wherein the scientist has committed a "noble lie" - they have committed fraud in service of (what they see to be) the truth. In particular, this occurs in cases where a scientist has fudged or misrepresented their data in order to (they sincerely believe, in light of their expert judgement) ensure that it better supports what is actually true. Misleading outliers are removed, more representative data fabricated, and so on. Assuming that at least sometimes scientists are correct (or maybe even just: sincere) in judging the immediate effects of their noble lie - they (believe they) really would promote wider acceptance of the scientifically interesting truth at stake - can the veritist justify the universal prohibition on fraud?
Of course the veritist can say, with a guilty conscience, that it is prohibitively difficult to police and enforce a more nuanced anti-fraud norm, and so while noble lies are not always wrong they ought always to be punished. In this paper, however, I will explore a line of response which would allow the veritist to make a stronger claim. In particular, I will argue that recent modelling work on the ways in which scientific information is shared and used as the basis for future work shows that a universal anti-fraud norm can be justified for the veritist as a means of preventing information cascades. It thus turns out that there is a particular type of error that science would become vulnerable to if it were ever to tolerate noble lies. In this talk I will explain the nature of this error, and vindicate universal prohibitions on fraud for a veritist.
Michele Crescenzi (University of Helsinki)
Learning to agree over large state spaces
Abstract
We study how a consensus emerges in a finite population of like-minded individuals who are asymmetrically informed about the realization of the true state of the world. Agents observe a private signal about the state and then start exchanging messages. Generalizing the classical model of rational dialogues of Geanakoplos and Polemarchakis (1982), we only require that the set of states be a non-empty set. In particular, we dispense with the standard assumption that the state space is either finite or a probability space. We show that a class of rational dialogues can be found that always lead to consensus provided that three main conditions are met. First, everybody must be able to send messages to everybody else, either directly or indirectly. Second, communication must be reciprocal. Finally, agents need to have the opportunity to participate in dialogues of transfinite length.
Serena Doria (Department of Engineering and Geology, University G.d'Annunzio)
Extensions of coherent upper conditional previsions defined by Hausdorff outer measures to unbounded random variables.
Abstract
Coherent upper conditional previsions are extended to the class of bounded and unbounded random variables with finite Choquet integral with respect to Hausdorff outer measures. Continuity properties are investigated, and in particular conditions are given under which the Monotone Convergence Theorem holds.
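For readers unfamiliar with the Choquet integral mentioned above, here is a minimal finite illustration (my own, using a toy monotone set function rather than a Hausdorff outer measure): for a non-negative function f with sorted values f(1) ≤ ... ≤ f(n), the Choquet integral with respect to a monotone set function ν is the sum of (f(i) − f(i−1)) · ν({f ≥ f(i)}).

```python
# Minimal sketch: Choquet integral of a non-negative function on a finite set
# with respect to a monotone (not necessarily additive) set function nu.
def choquet(f, nu):
    """f: dict element -> non-negative value; nu: callable on frozensets."""
    total, prev = 0.0, 0.0
    for _, value in sorted(f.items(), key=lambda kv: kv[1]):   # ascending values
        upper_level_set = frozenset(x for x, v in f.items() if v >= value)
        total += (value - prev) * nu(upper_level_set)
        prev = value
    return total

# Toy monotone, non-additive set function: a capped counting measure.
nu = lambda s: min(1.0, 0.5 * len(s))

print(choquet({"a": 3.0, "b": 1.0, "c": 2.0}, nu))   # 2.5
```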
Adrian Erasmus (University of Cambridge)
Expected Utility, Inductive Risk, and the Consequences of P-Hacking
Abstract
P-hacking is the manipulation of research methods and data to acquire statistically significant results. It includes the direct manipulation of data and/or opportunistic analytic tactics. Direct manipulation involves experimental strategies such as dropping participants whose responses to drugs would weaken associations; redefining trial parameters to strengthen associations; or selectively reporting on experimental results to obtain strong correlations. Opportunistic analytic tactics include performing multiple analyses on a set of data or performing multiple subgroup analyses and combining results until statistical significance is achieved.
P-hacking is typically held to be epistemically questionable, and thus practically harmful. This view, which I refer to as the prevalent position, typically stresses that since p-hacking increases false-positive report rates, its regular practice, particularly in psychology and medicine, could lead to policies and recommendations based on false findings. My first goal in this paper is to formulate the prevalent position using expected utility theory. I express a hypothetical case of p-hacking in medical research as a decision problem, and appeal to existing philosophical work on false-positive report rates as well as general intuitions regarding the value of true-positive results versus false-positive ones, to illustrate the precise conditions under which p-hacking is considered practically harmful. In doing so, I show that the prevalent position is plausible if and only if (a) p-hacking increases the chance that an acquired positive result is false and (b) a true-positive result is more practically valuable than a false-positive one.
In contrast to the prevalent position, some claim that experimental methods which constitute p-hacking do play a legitimate role in medical research methodology. For example, analytic methods which amount to p-hacking are a staple of exploratory research and have sometimes led to important scientific discoveries in standard hypothesis testing. My second aim is to bring the prevalent position into question. I argue that although it is usually the case that refraining from p-hacking entails more desirable practical consequences, there are conditions under which p-hacking is not as practically perilous as we might think. I use the formal resources of expected utility theory from the first part of the paper, together with lessons from the arguments surrounding inductive risk, to articulate the conditions under which this is the case. More specifically, I argue that there are hypotheses for which p-hacking is not as practically harmful as we might think.
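A minimal numerical sketch of the kind of decision problem at issue (my own toy numbers, not those in the paper): p-hacking is modelled as inflating the effective false-positive rate, which lowers the probability that a positive result is true and hence the expected utility of acting on it.

```python
# Toy sketch of the 'prevalent position': acting on a positive result is worth
# less in expectation when p-hacking inflates the effective false-positive rate.
def ppv(prior, power, alpha):
    """Probability that the hypothesis is true given a positive result."""
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return true_pos / (true_pos + false_pos)

def eu_of_acting(prior, power, alpha, u_true_pos=10.0, u_false_pos=-5.0):
    """Expected utility of acting (e.g. recommending a drug) on a positive result."""
    p = ppv(prior, power, alpha)
    return p * u_true_pos + (1 - p) * u_false_pos

prior, power = 0.1, 0.8
print(eu_of_acting(prior, power, alpha=0.05))   # honest testing
print(eu_of_acting(prior, power, alpha=0.30))   # effective alpha inflated by p-hacking
```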
Pavel Janda (Gdansk University)
Accuracy-based Norms of Rationality for Epistemic Games with Imperfect Recall
Abstract
I aim at finding accuracy-based epistemic norms for agents with cognitive limitations. In this paper, I am interested in norms of rationality for an agent who is playing a game with imperfect recall. I will argue that following the norm of minimisation of expected total inaccuracy suggested by Kierland and Monton is the optimal ex ante plan before such an epistemic game starts and also a time-consistent strategy.
Jimin Kwon (University of California San Diego)
A Hybrid Approach to the Gene Drive Policy: Using Risk-Weighted Expected Utility Theory as a Version of the Precautionary Principle
Abstract
In this paper, I propose a novel understanding of the relationship between two general normative theories for decision making under uncertainty, the precautionary principle (PP) and risk-weighted expected utility theory (REU). PP recommends preventing potential harm to human health and the environment. REU holds that rational agents ought to maximize risk-weighted expected utility. Both theories capture the fact that many people may be more averse to potential harms than they are inclined to seek potential benefits; this can be explained in terms of the asymmetry between harm and benefit. Furthermore, I argue that REU can be understood as a version of PP. One advantage of such an interpretation is that REU introduces appropriate constraints to PP which the latter decision rule lacks. I will demonstrate this using a test case in public policy decisions regarding the use of gene drives against malaria. I aim to show a generalization of REU for imprecise probabilities, considering that public policy decisions inevitably involve group judgments where group members’ credences in potential outcomes will not be unanimous.
In [Section I], I introduce the basic framework and terminology that will be useful in the subsequent sections. After an introduction to the orthodox formal decision theoretic approach [Section II], [Section III] shows that expected utility (EU) theory ignores important facts about risk-aversion. [Section IV] introduces Lara Buchak’s alternative to EU called risk-weighted expected utility theory. [Section V] discusses Daniel Steel’s formulation of the precautionary principle and his concerns against EU. In [Section VI], I defend PP against the common objection that PP qua decision rule results in paralysis. [Section VII] argues that the real problem of PP is the lack of sufficient constraints that are needed to reach a conclusive decision among non-dominated options. In [Section VIII], I suggest that REU is a version of PP. After a brief discussion of Steel’s reasons for rejecting the standard decision theoretic approaches, I show the similarities between REU and PP and how each can contribute to public policy decision making. [Section IX] demonstrates how the hybrid version can be used for gene drive policy, before concluding that the combination of PP and REU is a plausible tool for fine-grained decision making.
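To make the contrast between EU and REU concrete, here is a toy computation in the spirit of Buchak's theory (my own example; the convex risk function r(p) = p² is just one illustrative choice): REU weights each improvement over the worst outcome by a risk-transformed probability of doing at least that well.

```python
# Toy sketch: expected utility vs. risk-weighted expected utility (REU), with
# an assumed convex risk function r(p) = p**2 representing risk aversion.
def eu(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def reu(outcomes, r=lambda p: p ** 2):
    ordered = sorted(outcomes, key=lambda pu: pu[1])          # worst to best
    utils = [u for _, u in ordered]
    value = utils[0]
    for i in range(1, len(ordered)):
        p_at_least = sum(p for p, _ in ordered[i:])           # prob. of doing at least this well
        value += r(p_at_least) * (utils[i] - utils[i - 1])
    return value

# A gene-drive-style choice: small chance of a very bad outcome vs. a modest sure thing.
risky = [(0.1, -100.0), (0.9, 40.0)]
safe = [(1.0, 20.0)]

print(eu(risky), eu(safe))     # 26.0 vs 20.0: EU prefers the risky option
print(reu(risky), reu(safe))   # 13.4 vs 20.0: this risk-averse REU agent prefers the safe one
```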
Richard Lohse (University of Konstanz)
On Richard Pettigrew’s latest accuracy-first argument for Probabilism
Abstract
Bayesianism aims to give quite strong rationality constraints for credences, i.e. for how strongly we believe propositions. Most importantly, Bayesianism claims that, in order to be epistemically rational, our credences need to be probabilistic, i.e. they need to satisfy the Kolmogorov axioms for probability. This coherence norm is called Probabilism. On the basis of this fundamental tenet, Bayesians try to give a general normative account of reasoning, including scientific reasoning.
However, it all depends on Probabilism. One very promising strategy for arguing for this claim is given by the accuracy-first framework. The underlying picture is that truth is an epistemic goal, indeed, the only epistemic goal. In accuracy-first arguments, a mathematical characterization of closeness to truth, i.e. accuracy, is given. It is then shown via a theorem that probabilistic credences are in some sense systematically closer to the truth than non-probabilistic credences.
Richard Pettigrew, in his wonderfully perspicuous and comprehensive 2016 book “Accuracy and the Laws of Credence”, surveys all previous accuracy-first arguments for Probabilism and argues persuasively that each of them fails. He then presents his own accuracy-first argument. I will argue that the central premise, Decomposition, is unjustified. Moreover, I will give examples in which Decomposition seems false.
Decomposition claims that distance from truth decomposes linearly into two parts. First, distance from being well-calibrated, where a credence function is well-calibrated if 80% of the propositions believed to degree 0.8 are true, and so forth. Second, distance between being well calibrated and truth. Pettigrew thinks that Decomposition captures the intuition that in certain ceteris paribus situations, closeness to being well calibrated is epistemically valuable, and that this justifies Decomposition.
I argue that this intuition can only justify his premise if it is accompanied by an intuition that in ceteris paribus situations, closeness to being well-calibrated correlates with closeness to truth. Offhand, I claim, there is no such intuition. Moreover, I consider examples for these ceteris paribus situations in which there is an intuition against such a correlation.
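To convey the flavour of a calibration-based decomposition (this is the classic decomposition of the Brier score, offered only as an illustration, not as Pettigrew's exact premise), squared-error inaccuracy splits exactly into distance from the well-calibrated counterpart plus the inaccuracy of that counterpart:

```python
# Illustration of a calibration-style decomposition of Brier inaccuracy:
# inaccuracy = (distance from the well-calibrated counterpart)
#            + (inaccuracy of that counterpart).
def well_calibrated_counterpart(credences, truths):
    """Replace each credence by the truth-frequency among propositions
    assigned that same credence."""
    groups = {}
    for c, t in zip(credences, truths):
        groups.setdefault(c, []).append(t)
    freq = {c: sum(ts) / len(ts) for c, ts in groups.items()}
    return [freq[c] for c in credences]

def sq_dist(xs, ys):
    return sum((x - y) ** 2 for x, y in zip(xs, ys))

credences = [0.8, 0.8, 0.8, 0.8, 0.8, 0.3, 0.3]
truths    = [1,   1,   1,   0,   1,   0,   1  ]          # 1 = true, 0 = false

calibrated = well_calibrated_counterpart(credences, truths)
total_inaccuracy = sq_dist(credences, truths)
dist_to_calibration = sq_dist(credences, calibrated)
calibration_to_truth = sq_dist(calibrated, truths)

# The two quantities coincide (up to floating point): 1.38 in this example.
print(total_inaccuracy, dist_to_calibration + calibration_to_truth)
```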
Nicola Matteucci (Marche Polytechnic University)
On scientific misconduct and captured regulation: Assessing the Italian policy-making on gambling
Abstract
Social sciences - including economics - strive for more rigorous scientific methods and comprehensive research agendas, able to include factors traditionally assumed away as irrelevant, or even treated as exogenous. In fact, besides engendering epistemic faults and material errors, simplistic methods and reductionist agendas may result in biased scientific advice for regulators and policymakers: this typically happens when the excluded factors drive a substantial part of the overall dynamics of the investigated phenomenon, or yield outcomes (or by-products) neglected by the theoretical or empirical model/method chosen.
Moreover, the economics of regulation and political studies point out that different types of stakeholders may actively engage in instrumental relationships with the scientific community, as a way to strategically manoeuvre the latter's activity and advice; typical goals are to enact strategies of corporate influence on scientific findings, to capture the regulator (Stigler, 1971) and obtain more favourable ad-hoc decisions, and to inject private vested agendas into policy-making decisions possessing a wider impact. Another recent empirical finding that deserves fuller investigation is that detecting cases of regulatory capture or lobbied policy-making may turn out to be very difficult even in contemporary democratic regimes; accountability processes in the relevant jurisdictions are less than perfect (Raul and Dwyer, 2003); moreover, the asymmetries of information that some industry bodies and formally independent agencies enjoy may structurally dampen any scientific assessment of their activity (for an example applied to the rating agencies, see Gila and Miscali, 2012). Finally, the contemporary predominance of private markets for data (Warren, 2002) and the ambivalences of copyright regimes may further obscure the relevant empirical evidence.
Many different literatures have long explored these themes, from several disciplines and school orientations. Philosophy of science and epistemology, which have the oldest traditions of reflection on the nature and evolution of the scientific method, have deeply impacted the epistemic and methodological debates unfolding in the social sciences. In economics, a pioneering case is the now famous contribution of Milton Friedman (1953) which, despite having simpler methodological ambitions, spurred a large and unexpected debate on the epistemic nature of economic theory and its fallacies. Similar issues have recently been echoed in the current diatribe on the scientific validity of the economic policies supporting austerity, based on works that were later reconsidered as grounded on inadequate models, biased evidence or even patent computing errors (Reinhart and Rogoff, 2010; IMF 2010; Herndon et al., 2014; Fatás and Summers, 2018; Stiglitz, 2018).
Another investigated direction has been the epistemic faults arising from the economic discipline losing its early transdisciplinary orientation and becoming analytically reductionist (Bruni and Sugden 2007), in a process that has de facto rendered it less amenable to internal empirical validation. Increasingly, the broad societal challenges of a more complex world question the adequacy of conventional disciplinary boundaries, and seem to call for a tighter dialogue between the analytical paradigms used by different sciences (Matteucci and Orazi, 2018).
The quest for transdisciplinarity seems to be particularly relevant for those works of applied economics dealing with psychological or health-care related issues, where the multidimensionality of the measured constructs (such as subjective well-being or health status) requires a truly transdisciplinary agenda, capable of capturing the intrinsic complexity of the basic constructs and their systemic causal relationships. At the same time, the health domain is a critical one for assessing the scientific rigour of the models and evidence supporting a given policy or regulation. In fact, these cases often involve large communities (e.g., the polluting effects or the negative externalities produced by some risky socio-economic activities), where the collective consequences of the harms can be substantial (both in intensive and extensive terms) and materialise over a longer time-frame, not conventionally captured by the time horizon of some disciplines. It is not a coincidence that one of the first occurrences of the so-called debate on "negationist agendas" arose from the US tobacco sector, before coming to permeate the current "global warming" debate (Oreskes and Conway, 2010): this well illustrates the high stakes that any epidemiological enquiry may involve in a globalised market economy.
The liberalisation of the gambling sector and its negative externalities on societies and economies provide another original case of application of the above themes. In fact, in developed countries, following the neo-liberal rhetoric (Schmidt and Thatcher, 2013), several Governments have enacted a series of normative acts and regulations that have unleashed a large-scale diffusion of gambling activities. These facts provide an ideal test-bed for assessing the scientific rigour of the supporting policy rationales, and for checking whether the liberalisation of the sector has collided with other constitutional provisions valid in those liberalising States: a main example concerns the protection of minors and disadvantaged citizens.
Finally, the gambling sector is particularly interesting for our investigation because the increasing diffusion of digital devices (or Information and Communication Technologies, ICT) employed by the industry to offer gambling services automatically generates a wealth of data that may profitably enter the scientific and regulatory agendas as open Government data (OGD).
The present paper, by conducting an analysis of the above literatures, frames a set of empirical hypotheses and testable implications that inform an ideal programme of scientific evaluation of the gambling liberalisation policies so far enacted in Italy.
Silvia Milano (University of Oxford)
Ur-prior conditionalization, t-independence and diachronic rationality
Abstract
Bayesian agents are committed to probabilism and conditionalization. Probabilism says that the agent's credences obey the probability calculus. Conditionalization says that the agent's credence function is updated via Bayes' rule. There are several arguments for Bayesian conditionalization, including Dutch book and accuracy-dominance arguments (Greaves and Wallace, 2006; Briggs and Pettigrew, 2018).
However, it is widely acknowledged that conditionalization places unrealistic demands on agents who might forget. Forgetting is not irrational, in the sense that it is not under the agent's control, but standard Bayesian conditionalization gives no guidance on how to update credences upon forgetting.
Standard conditionalization is also unable to capture credence changes that happen as a result of learning new self-locating propositions (Titelbaum, 2016). For example, suppose that you are sitting at the traffic lights, waiting for the light to turn green. When the light turns green, intuitively you learn something new, and act on it by starting the car. However, this learning experience cannot be captured by standard conditionalization, because the set of centred worlds that forms the content of your evidential state has shifted, not shrunk.
Ur-prior conditionalization can solve both issues (Meacham, 2016). Ur-priors are defined relative to the full algebra of centred propositions, and an agent's posterior evidential probabilities at any point in time can be seen as the result of updating the ur-prior on the centred propositions that are compatible with the agent's current evidence. This offers a natural and simple solution to the issues of both forgetting and updating on centred propositions, but in turn raises some interesting interpretive issues.
In this paper, I consider in particular whether and how ur-prior conditionalization is compatible with diachronic rationality. Responding to arguments put forward by Das (2019) and Meacham, I show that ur-prior conditionalization, plus a condition that I call t-independence, are necessary conditions for avoiding diachronic Dutch books. T-independence requires the ur-prior probability of any uncentred proposition to be independent of any purely centred (purely temporally indexical) proposition. I introduce t-independence formally, defend its intuitive plausibility, and show that it implies diachronic rationality.
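For concreteness, one natural way to write the two conditions down (the notation here is mine, not necessarily the paper's):

```latex
% Ur-prior conditionalization: the credence function at time t is the ur-prior
% conditioned on the total (centred) evidence E_t available at t.
P_t(\cdot) = P_{\mathrm{ur}}(\cdot \mid E_t)

% t-independence: every uncentred proposition A is ur-prior independent of
% every purely centred (purely temporally indexical) proposition T.
P_{\mathrm{ur}}(A \wedge T) = P_{\mathrm{ur}}(A)\,P_{\mathrm{ur}}(T)
```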
Barbara Osimani (Univpm)
Science as a Signaling Game: Statistical Evidence in Strategic Environments
Abstract
As a response to the "reproducibility crisis" and to a general crisis of trust towards the scientific enterprise (Edwards and Roy 2017, Vazire 2017), various initiatives are being promoted in order to foster transparency (see, e.g., the Open Science Movement, the AllTrials Campaign, Sense about Science). We argue that game theory should be used to explain different kinds of biases and to identify solutions to them. We will focus on two specific settings: interactions between the pharmaceutical industry and the authorities that regulate drug approval (as well as other components of the medical ecosystem), and scientific publication systems.
William Peden (Polytechnic University of the Marche)
Predictive Confirmation and the Paradox of the Ravens
Abstract
In the Paradox of the Ravens, a set of otherwise intuitive claims about evidence seems to be inconsistent. Most attempts at answering the paradox involve rejecting a member of the set of claims, which seems to require a conflict either with commonsense intuitions or with some of our best theories of evidence. In contrast, I argue that the appearance of an inconsistency is misleading: 'confirms' and cognate terms feature a significant ambiguity when applied to universal generalisations. In particular, the claim that some evidence confirms a universal generalisation ordinarily suggests, in part, that the evidence confirms the reliability of predicting that something which satisfies the antecedent will also satisfy the consequent. I distinguish between the familiar relation of confirmation simpliciter and what I shall call "predictive confirmation". I use them to formulate my answer, illustrate it in a very simple probabilistic model, and defend it against objections. I conclude that, once our evidential concepts are sufficiently clarified, there is no sense in which the initial claims are both plausible and inconsistent.
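To fix ideas, here is a toy probabilistic model of the raven case (my own construction, not necessarily the simple model used in the talk): each of a few objects is independently a raven and/or black, H is 'all ravens are black', and we compare the support H receives from sampling a black raven with the support it receives from sampling a non-black non-raven.

```python
# Toy model for the Paradox of the Ravens: enumerate small worlds in which each
# of N objects is independently a raven (prob r) and black (prob b), and compute
# how observing object 0 bears on H = "all ravens are black".
from itertools import product

N, r, b = 3, 0.3, 0.5

def worlds():
    for states in product([(True, True), (True, False), (False, True), (False, False)],
                          repeat=N):
        p = 1.0
        for is_raven, is_black in states:
            p *= (r if is_raven else 1 - r) * (b if is_black else 1 - b)
        yield states, p

def prob(event):
    return sum(p for w, p in worlds() if event(w))

H = lambda w: all(is_black for is_raven, is_black in w if is_raven)
black_raven = lambda w: w[0] == (True, True)
nonblack_nonraven = lambda w: w[0] == (False, False)

prior = prob(H)
post_black_raven = prob(lambda w: H(w) and black_raven(w)) / prob(black_raven)
post_nonblack_nonraven = prob(lambda w: H(w) and nonblack_nonraven(w)) / prob(nonblack_nonraven)

# Both observations raise P(H) above its prior; in this particular model they do so
# by the same amount, which is one way of feeling the force of the paradox.
print(prior, post_black_raven, post_nonblack_nonraven)
```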
Mantas Radzvilas (Munich Center for Mathematical Philosophy, LMU)
Optimal Lying and Lie-Detection in Bayesian Persuasion Games with Costs and Punishments
Abstract
If the aim of pharmaceutical regulators is to prevent dangerous and ineffective drugs from entering the market, the procedures they implement for approval of drugs ought to incentivize the acquisition and accurate reporting of research on the questions of safety and effectiveness. These interactions take the form of Sender-Receiver games, in which pharmaceutical companies seeking approval of a drug conduct the research themselves, and report the results to the regulators. Of course, the companies may be inclined to falsify these reports, despite the potential costs. The main aim of this work is to give a formal model for this kind of interaction and to identify the mechanism that is optimal for the regulatory body, and by proxy, the public, when the costs of information, lying, and the detection of lies are nontrivial. In this model, the Sender incurs costs via noisy acquisition of information by sequential testing, falsification of reports of individual tests, and punitive measures upon detection of falsified reports. Further, the model has an epistemic facet, in which the Sender believes that the likelihood of being caught lying is increasing in the number of falsified reports. The Receiver is cautious in the sense that she doesn't rule out the possibility that falsification is a viable strategy for the Sender's type, and she makes probabilistic inferences about the Sender's type and strategy from the messages she receives. The ability of the Receiver to detect lies is limited by the Receiver's type and the costs of her verification procedure. We identify the optimal mechanism which incentivizes the Sender to invest maximum effort into drug testing, and to always send a truthful report to the Receiver.
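A stylized fragment of the trade-off the Sender faces (my own toy numbers and functional forms, not the model presented in the talk): with a per-lie cost, a fine upon detection, and a detection probability increasing in the number of falsified reports, the Sender's expected payoff from falsifying just enough reports to secure approval can be computed directly.

```python
# Stylized sketch (not the talk's model): a Sender ran n tests, m came back
# positive, and needs t reported positives for approval. Each falsified report
# costs c_lie; the probability of being caught grows with the number of lies;
# detection forfeits approval and triggers a fine.
def expected_payoff(k, benefit=100.0, c_lie=5.0, fine=60.0, p_detect_each=0.2):
    p_caught = 1 - (1 - p_detect_each) ** k        # increasing in the number of lies
    return (1 - p_caught) * benefit - p_caught * fine - c_lie * k

n, m, t = 10, 4, 7
k_needed = t - m                                   # lies needed to reach the threshold
print(expected_payoff(k_needed))                   # ~6.9: lying pays at these parameters
print(expected_payoff(k_needed, fine=200.0))       # ~-61.4: a harsher fine removes the incentive
```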
Giacomo Sillari (Luiss Guido Carli, Rome)
Cognition, coordination, and common belief
Abstract
Epistemic, practical or strategic rationality are all insufficient to fully explain successful coordination in situations in which there are multiple possible ways to coordinate action. Thus, merely considering preferences and beliefs of coordinating agents does not fully account for coordinating behavior. Various attempts exist to overcome this explanatory gap: From team-reasoning, to simulation theory, to the idea that coordination may in fact not involve beliefs at all (Guala 2018). Indeed, the first philosophical treatment of coordination and convention (Lewis 1969) is based on ascribing reasons to believe to coordinating agents, rather than actual beliefs. However, one can imagine (or recall) situations in which coordination has followed a correct reconstruction of other agents' thought processes and a correct belief ascription, as well as coordination failures due to incorrect belief ascription. Both belief-less and belief-based coordination exist, and the distinction echoes the one between implicit and explicit epistemic states: In certain cases my epistemic attitudes justifying a purportedly coordinating action are not made explicit, either because I do not need to make them explicit, or because I cannot. When successful coordination is accompanied by implicit epistemic states, coordination is belief-less (at least explicitly), behavior is automatic (cf. Bicchieri 2006, 2016 on awareness of social norms), immediately context-driven, possibly rule-based and can be framed in a "we-perspective"; when it is accompanied by explicit states, coordination requires beliefs about others and about their beliefs, forces an "I-perspective" and context influences behavior via belief-formation. Explicit epistemic states would be needed when coordination is sophisticated, e.g. when there is no convention or norm in place, when there are concurrent conventions, when a salient option exists but is not transparent, when the reference network is unclear and so on. The role of common belief is discussed in this context.
Tamaz Tokhadze (University of Sussex)
Evidence and Background Information
Abstract
According to Williamson (2000), one’s evidence is identical to one’s knowledge.
E=K: One’s evidence includes p if and only if one knows p.
In this paper, I argue that the left-to-right direction of the E=K biconditional is false: the proposition that one's evidence includes p does not imply that one knows p. The position that I will be defending requires a radical break from E=K. This is so because I will be arguing for a strong thesis, according to which, for any hypothesis H and one's total body of evidence E, E always includes propositions that one does not know.
Briefly, my argument can be explained by discussing the confirmation theorists’ favorite example regarding the color of ravens.
Suppose you go on fieldwork in order to gather some data concerning the raven hypothesis. The first raven that you observe is black. Relative to the corpus of your background information, this new piece of evidence supports the hypothesis that all ravens are black. In symbols, if H1 denotes the hypothesis that, for all x, if x is a raven, the probability of x being black is 1, and E denotes the newly acquired evidence, then:
P(H1|E) > P(H1)
In my paper, I show that this trivially plausible inference – the inference from the observation of a black raven to the raven hypothesis – presupposes that you are maximally certain in the following proposition:
U: The future will resemble the past with respect to the color of ravens.
Let propositions H1, H2,…, Hn denote all mutually exclusive and exhaustive hypotheses of the form: for all x, if x is a raven, the probability of x being black is c. By the law of general additivity:
P(H1) + P(H2) + ... + P(Hn) = 1
Now, let U denote the proposition that the future will resemble the past with respect to the color of ravens. By the law of total probability:
P(U) = P(U|H1)·P(H1) + P(U|H2)·P(H2) + ... + P(U|Hn)·P(Hn)
Each hypothesis of the form "the probability of x being black is c" already individually implies that the future will resemble the past with respect to the color of ravens, so P(U|Hi) = 1 for each i. The equation above therefore simplifies to:
P(U) = P(H1) + P(H2) + ... + P(Hn)
Therefore:
P(U) = 1
Thus, before observing any ravens, you should already be maximally certain in U.
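The step from the law of total probability to P(U) = 1 can be checked mechanically (a trivial sketch; the prior over the Hi is arbitrary):

```python
# Sanity check of the argument above: if every hypothesis H_i individually
# implies U, then P(U | H_i) = 1 for each i, so by the law of total probability
# P(U) = sum_i P(H_i) = 1, whatever the prior over the H_i.
import random

random.seed(1)
weights = [random.random() for _ in range(5)]
prior = [w / sum(weights) for w in weights]        # arbitrary prior over H_1, ..., H_n
p_u_given_h = [1.0] * len(prior)                   # each H_i entails U

p_u = sum(pu * ph for pu, ph in zip(p_u_given_h, prior))
print(p_u)                                         # 1.0 (up to floating point)
```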
This result is a problem for Williamson's theory of evidence. Assigning meaningful posterior probabilities to the hypotheses of the form "the probability of x being black is c" implies that you take U for granted. So, U should be included in your background evidence. Given E=K, the fact that U is included in your background evidence implies that you know U. Therefore, Williamson's theory implies that you can know the most general claims about the color of ravens without actually observing any raven! This is surely absurd.
The problem generalizes to other, less trivial cases of probabilistic inference.
Brett Topey (University of Salzburg)
Expected accuracy and the planning framework
Abstract
Hilary Greaves and David Wallace have shown that conditionalization is the update rule that maximizes an agent’s expected accuracy – this and similar results are often taken by accuracy-first theorists to be sufficient to secure conditionalization's status as a rational requirement. As is well known, though, such results hold only because, in comparing rules for expected accuracy, we must take into account the fact that some rules are such that no agent can reasonably expect herself to be able to conform perfectly to them. (This is what allows us to avoid, for example, the result that the truth rule – i.e., the rule that directs an agent to adopt a credence of unity in all and only the true propositions – is the rule that maximizes expected accuracy.) But how, exactly, is this fact taken into account?
Broadly speaking, there are two common approaches here: we might (a) rule out by fiat rules to which a suitably idealized version of the agent can’t reasonably take herself to be able to conform or (b) allow all rules but calculate the expected accuracy of a given rule by considering what would result from a suitably idealized version of the agent planning, or trying, to conform to it. (Idealization is required so that, say, drawing conclusions by wishful thinking won’t turn out to maximize expected accuracy just because the agent can’t reasonably take herself to be able to avoid wishful thinking.) And these approaches are often taken to be competitors; Miriam Schoenfield and Robert Steel, for instance, have both suggested that the latter approach – hereafter, the planning framework – is to be preferred to the former – hereafter, the traditional framework – because only the planning framework can deliver the right verdicts about how to respond to higher-order evidence. I show, though, that if we take on board three simple observations – first, that anyone who tries to conform to some rule r thereby conforms to a rule r* that says to try to conform to r; second, that for any r, a suitably idealized agent can reasonably take herself to be able to conform to a rule r* that says to try to conform to r; and third, that when an agent can reasonably take herself to be able to conform to r, the expected accuracy of that agent trying to conform to r is equal to the expected accuracy of conforming to r – it becomes clear that these approaches in fact always agree: whenever the traditional framework tells us that r is expected-accuracy–maximizing, the planning framework tells us that trying to r is expected-accuracy–maximizing.
If this is right, then suggestions like Schoenfield's and Steel's must all be mistaken – what makes a difference to our verdicts isn't which framework we adopt but how we idealize when we calculate expected accuracy.
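As background to the Greaves and Wallace result mentioned at the start, here is a toy computation of expected accuracy (my own example, using the Brier score): the expected inaccuracy of an update plan is the prior-weighted inaccuracy of the credences the plan would adopt in each world, and the conditionalization plan does at least as well as any rival plan.

```python
# Toy illustration of the Greaves-Wallace observation: among update plans
# (maps from evidence cells to posterior credences), conditionalization
# minimizes expected Brier inaccuracy computed from the prior.
worlds = ["w1", "w2", "w3"]
prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
partition = [{"w1", "w2"}, {"w3"}]                 # the evidence the agent might receive

def brier(credence, actual):
    """Inaccuracy of a credence function over worlds, evaluated at the actual world."""
    return sum((credence[w] - (1.0 if w == actual else 0.0)) ** 2 for w in worlds)

def expected_inaccuracy(plan):
    """plan: maps each evidence cell (as a frozenset) to a credence function."""
    total = 0.0
    for w in worlds:
        cell = frozenset(next(E for E in partition if w in E))
        total += prior[w] * brier(plan[cell], w)
    return total

def conditionalize(E):
    p_E = sum(prior[w] for w in E)
    return {w: (prior[w] / p_E if w in E else 0.0) for w in worlds}

cond_plan = {frozenset(E): conditionalize(E) for E in partition}
rival_plan = {frozenset({"w1", "w2"}): {"w1": 0.9, "w2": 0.1, "w3": 0.0},
              frozenset({"w3"}): {"w1": 0.0, "w2": 0.0, "w3": 1.0}}

# 0.375 vs 0.496: the conditionalization plan has the lower expected inaccuracy.
print(expected_inaccuracy(cond_plan), expected_inaccuracy(rival_plan))
```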
Momme von Sydow (Ludwig-Maximilians-Universität Muenchen)
Bayesian Logic as Generalised Bayesian Occam’s Razor Vindicates the Lockean Thesis
Abstract
There is a debate both in philosophy and in psychology about whether and how one can rationally reconstruct partial and full belief using probabilities. The Lockean thesis, claiming that propositional belief can be reformulated in terms of probabilities, faces fundamental problems. This is apparent, for instance, in the lottery paradox, the preface paradox, and the conjunction fallacy debate. The induction of a logical connective relating two given predicates (like the 'and' in 'Ravens are black and they can fly') often treats logical connectives as alternative hypotheses given the evidence. This involves the goal that the best hypothesis (logically relating properties/propensities) should best (if not perfectly) correspond to the data. But given this goal, any direct application of probability seems ill-conceived from the outset, since a direct application of the additivity axiom implies that the more specific set (related to the logically stronger proposition) can never have a larger probability. For instance, in dyadic logic, propositions referring to more specific sets, when comparing conjunction, inclusive disjunction, and tautology, can never become more probable than the more general ones, e.g., Pe(B(x) & F(x)) ≤ Pe(B(x) or F(x)) ≤ Pe(B(x) T F(x)). But if a more specific hypothesis can never get a higher probability, and probability should be the adequacy measure of predication (Lockean thesis), then this yields the absurd consequence that probability could never be used to show that a more specific hypothesis is more adequate. Thus, it seems that one either has to give up the idea that more specific hypotheses can be more adequate, or the idea that probabilities can be used as an adequacy criterion. One possible way out of this apparent dilemma is to use second-order probabilities. Bayesian Logic (BL) provides a kind of inductive logic (von Sydow, 2011, 2016) that formalises a probabilistic likelihood function describing how ideal noisy-logical patterns have produced some frequency data. It then uses Bayes' theorem to formalise the posterior of the logical connectives. It is argued that BL applies the idea of 'Bayesian Occam's razor' (BOR) to the induction of logical connectives given the data and generalises this to allow for exceptions. Nonetheless, the resulting probability measure of BL, PI, depends not only on extension (as standard extensional probabilities Pe do), but on intension as well. Both are based on Kolmogorov's axioms, but apply additivity at different levels. Hence, we advocate a kind of pluralism of supervening probability measures. Theoretically, BL is presented as a computational-level theory of adequate logical predication vindicating the Lockean thesis or, perhaps better, the 'Bayesian thesis'. (For different rational non-additive approaches, cf. Bovens & Hartmann, 2004 and Leitgeb, 2010.) Second, we investigate BL as a descriptive hypothesis for generalising the phenomenon of conjunction fallacies, discussed particularly in psychology. Some major predictions are outlined, some model variants are mentioned (based on different interpretations of ordinary-language connectives), and a selection of our experimental results is presented. Overall, the results confirm BL and exclude the applicability of (supplementary) competitors here (e.g., the confirmation hypothesis). We conclude that BL (or a similar approach) is an important contribution to understanding rationality.