Program
Thursday 13th
9:00  Opening
Morning Session (Chair: Bonzio)
9:10–9:50  Flaminio
9:50–10:30  Hosni
Break
11:00–11:40  Doria
11:40–12:20  Gesuita
Lunch
Afternoon Session (Chair: Osimani)
14:00–14:40  Tentori
14:40–15:20  Crupi
Break
15:40–16:20  Festa
Excursion to Sirolo and Social Dinner
Friday 14th
Morning Session (Chair: Sacco)
9:10–9:50  Sillari
9:50–10:30  Galeazzi
Break
11:00–11:40  Martini
11:40–12:20  Discussion
Lunch
Afternoon Session (Chair: De Pretis)
14:00–14:40  Cevolani
14:40–15:20  Osimani
Break
15:40–16:20  Sprenger
16:20–17:00  Discussion + Closure
Abstracts
Carlo Martini
Biases in scientific judgment
Since the 1950s, Paul Meehl and a number of his collaborators have researched how biases affect scientific judgment, and argued that there are differences in performance between actuarial and expert judgment. Since then, actuarial and expert judgment have had to be traded off against each other in science-based decision-making. In this paper I review the history of the face-off between actuarial and expert judgment, with a focus on biases. The history of the debate suggests directions for how to handle the problem of biases in both expert and actuarial judgment.
Tommaso Flaminio
Boolean algebra of conditionals: probabilities and logic
Conditionals play a fundamental role in both qualitative and quantitative uncertain reasoning. In the former, conditionals constitute the core focus of non-monotonic reasoning [1, 4, 5]. In quantitative uncertain reasoning, conditionals are central both for conditional probability and, more generally, for conditional uncertainty measures [6].
In this contribution we present a construction that builds, from any Boolean algebra A, an algebra of conditionals C(A) as a suitable quotient of the Boolean algebra freely generated by A × (A \ {⊥}) [2, 3]. The algebra C(A) is called the Boolean algebra of conditionals of A. If A is finite, C(A) is finite as well and hence atomic. The atomic structure of C(A) is fully characterised by the atoms of A. Besides presenting Boolean algebras of conditionals, a main purpose of this contribution is to show that these structures provide a suitable algebraic setting in which conditional probability theory and certain non-monotonic logics can be investigated.
In particular, in the area of conditional probability theory, Boolean algebras of conditionals are applied to provide an answer to the long-standing question of whether a conditional probability may be regarded as a simple (i.e., unconditional) probability on conditional objects [7, 5]. In this regard, although not every simple probability on C(A) satisfies all the axioms of a conditional probability on A [6], we will show the following result.
Theorem 1. For every positive probability measure P on a Boolean algebra A, there exists a positive probability measure µP on C(A) which agrees with P on the basic conditionals of C(A). In other words, for every basic conditional (a | b), µP((a | b)) = P(a ∧ b) / P(b).
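As a quick numerical sanity check of the identity the theorem asserts (a sketch of my own on a toy finite algebra, not the authors' quotient construction of C(A)), one can verify that the assignment (a | b) ↦ P(a ∧ b)/P(b) behaves as a probability in its first argument for each fixed antecedent, and agrees with P when the antecedent is the top element:

```python
from itertools import chain, combinations

# Atoms of a small finite Boolean algebra A; events are sets of atoms.
ATOMS = ("x", "y", "z")
P_ATOM = {"x": 0.2, "y": 0.3, "z": 0.5}  # a positive probability P on A

def P(event):
    """Probability of an event (a set of atoms)."""
    return sum(P_ATOM[t] for t in event)

def mu(a, b):
    """Value assigned to the basic conditional (a | b):
    P(a & b) / P(b), well defined since P is positive and b != bottom."""
    return P(a & b) / P(b)

events = [frozenset(s) for s in chain.from_iterable(
    combinations(ATOMS, r) for r in range(len(ATOMS) + 1))]
top = frozenset(ATOMS)

for b in events:
    if not b:
        continue  # skip the bottom element as antecedent
    assert abs(mu(top, b) - 1.0) < 1e-12              # normalisation
    for a1 in events:
        for a2 in events:
            if a1 & a2:
                continue                              # additivity: disjoint events only
            assert abs(mu(a1 | a2, b) - (mu(a1, b) + mu(a2, b))) < 1e-12

# mu agrees with P on conditionals with trivial antecedent:
assert all(abs(mu(a, top) - P(a)) < 1e-12 for a in events)
print("checks passed")  # prints "checks passed"
```

The atom weights are arbitrary illustrative choices; any positive assignment summing to 1 passes the same checks.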
Then we introduce the logic of Boolean conditionals (LBC) and prove its completeness with respect to the natural semantics induced by the structural properties of the atoms in a conditional algebra. Further, LBC is proved to be sound and complete with respect to a suitably defined Boolean algebra of conditionals. Finally, we conclude with a result to the effect that LBC is indeed a preferential consequence relation in the sense of the well-known System P [1].
Acknowledgements.
The author acknowledges partial support by the Spanish Ramón y Cajal research programme (RYC-2016-19799), the Spanish FEDER/MINECO project TIN2015-71799-C2-1-P, and the SYSMICS project (EU H2020-MSCA-RISE-2015, project 689176).
References
[1] D. Dubois, H. Prade. Conditional Objects as Nonmonotonic Consequence Relationships. IEEE Transactions on Systems, Man and Cybernetics 24(12): 1724–1740, 1994.
[2] T. Flaminio, L. Godo, H. Hosni. On the algebraic structure of conditional events. In S. Destercke, T. Denoeux (eds.), Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2015), Lecture Notes in Computer Science 9161, 106–116. Springer, 2015.
[3] T. Flaminio, L. Godo, H. Hosni. On Boolean algebras of conditionals and their logical counterpart. In A. Antonucci, L. Cholvy, O. Papini (eds.), Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2017), Lecture Notes in Computer Science 10369, 246–256. Springer, 2017.
[4] A. Gilio, G. Sanfilippo. Conjunction, Disjunction and Iterated Conditioning of Conditional Events. SMPS 2012: 399–407, 2012.
[5] I. R. Goodman, R. P. S. Mahler, H. T. Nguyen. What is conditional event algebra and why should you care? SPIE Proceedings, Vol. 3720, 1999.
[6] J. Y. Halpern. Reasoning about Uncertainty. MIT Press, 2003.
[7] D. Lewis. Probabilities of Conditionals and Conditional Probabilities. The Philosophical Review 85(3): 297–315, 1976.
Gustavo Cevolani
Probability, truthlikeness, and rational belief.
Keywords: rational belief, probability, truthlikeness, Lottery paradox, Preface paradox, Lockean thesis, Carneades thesis
Providing a theory of rational belief in the face of uncertainty remains a crucial issue in both (formal) epistemology and philosophy of science. Well-known difficulties, like the Lottery and Preface paradoxes, trouble the attempt to combine a quantitative, probabilistic account of uncertain belief with our pre-systematic intuitions about plain or categorical belief. In this paper, I first present and motivate a non-conventional view of these issues based on the idea, called here “Carneades thesis”, that rational belief aims at truth approximation. Then, I show how this account deals with the Lottery and Preface paradoxes. Finally, I discuss the pros and cons of this proposal, as well as its connections with the Lockean thesis, the distinction between belief and acceptance, and Leitgeb’s stability theory of belief.
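The Lottery paradox mentioned above is easy to reproduce numerically. In this minimal sketch (the threshold value is an arbitrary illustrative choice, not one taken from the paper), a Lockean believer who accepts any proposition with probability above a fixed threshold ends up with a jointly inconsistent belief set:

```python
N_TICKETS = 1000         # a fair 1000-ticket lottery with exactly one winner
THRESHOLD = 0.99         # illustrative Lockean acceptance threshold

p_ticket_loses = 1 - 1 / N_TICKETS   # 0.999 for each individual ticket

# The Lockean agent believes, of each ticket, that it loses ...
believes_each_loses = p_ticket_loses > THRESHOLD
# ... and also believes that some ticket wins (probability 1) ...
believes_some_wins = 1.0 > THRESHOLD
# ... yet the conjunction "every ticket loses" has probability 0,
# since exactly one ticket wins by stipulation.
p_all_lose = 0.0

print(believes_each_loses, believes_some_wins, p_all_lose)  # True True 0.0
```

The agent's beliefs are thus individually well supported but cannot all be true together, which is exactly the tension between probabilistic and categorical belief the abstract addresses.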
Serena Doria
Representations of preference orderings by coherent upper and lower conditional previsions defined with respect to Hausdorff outer and inner measures.
Sufficient conditions to assure that a maximal random variable is a Bayes random variable are given when the preference ordering is represented by coherent upper and lower conditional previsions.
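As background for readers unfamiliar with imprecise previsions (a generic lower-envelope sketch of my own, not Doria's Hausdorff outer/inner measure construction), coherent lower and upper previsions of a random variable can be obtained as lower and upper expectations over a credal set of probability measures:

```python
# States and a random variable X on them
STATES = ["s1", "s2", "s3"]
X = {"s1": 0.0, "s2": 1.0, "s3": 4.0}

# A credal set: several probability measures deemed possible
CREDAL_SET = [
    {"s1": 0.5, "s2": 0.3, "s3": 0.2},
    {"s1": 0.2, "s2": 0.5, "s3": 0.3},
    {"s1": 0.1, "s2": 0.1, "s3": 0.8},
]

def expectation(p, f):
    """Expected value of f under the single measure p."""
    return sum(p[s] * f[s] for s in STATES)

def lower_prevision(f):
    """Lower envelope of the expectations: a coherent lower prevision."""
    return min(expectation(p, f) for p in CREDAL_SET)

def upper_prevision(f):
    """Upper envelope; conjugate to the lower one via U(f) = -L(-f)."""
    return max(expectation(p, f) for p in CREDAL_SET)

lo, up = lower_prevision(X), upper_prevision(X)
print(lo, up)   # the interval [lo, up] replaces a single expected value
assert lo <= up
```

The numbers are purely illustrative; the point is that a pair of coherent lower and upper previsions bounds the prevision of X from below and above, which is the setting in which the abstract's maximal and Bayes random variables are compared.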
Barbara Osimani, Jürgen Landes
Varieties of Error and Varieties of Evidence in Scientific Inference: A Bayesian approach.
As Paul Meehl remarks, “any working scientist is more impressed with 2 replications in each of 6 highly dissimilar experimental contexts than he is with 12 replications of the same experiment”. This intuition has been formalised in formal epistemology as the Variety of Evidence Thesis (VET): ceteris paribus, more heterogeneous evidence coming from independent sources is more confirmatory than less varied evidence. The thesis is known to fail for random evidence and for deterministically biased evidence ([Bovens and Hartmann (2003), Claveau (2013)]). However, Bovens and Hartmann’s results concerning the failure of the VET rely mainly on the unreliable instrument being a randomiser, and of a very specific kind. When the rate of positive reports delivered by the instrument (no matter what the truth is) is .5, the instrument is a proper randomiser. However, as soon as this probability rises above .5, the instrument tends to be a “yes-man”, whereas it is a “nay-sayer” if the probability drops below .5. In the former case, consistency of positive reports from the same instrument speaks in favour of its being a randomiser (and therefore weakens their confirmatory strength), whereas the opposite holds in the latter case, which is the reason for VET failure in the latter situation.
Furthermore, Bovens and Hartmann's results run against the “too-good-to-be-true” intuition underpinning the suspicion of bias raised by considerably long series of reports from the same testing instrument. This happens because, once the hypothesis space for the truth-tracking properties of the instrument is divided into “perfectly reliable” and “randomiser”, a series of consistent reports becomes less and less likely under the hypothesis of a randomising instrument and, consequently, more and more likely under the complementary hypothesis.
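The Bovens–Hartmann setup described above can be sketched numerically. In the simplified rendering below (parameter names are mine: h is the prior on the hypothesis, rho the prior probability that an instrument is reliable, a the randomiser's rate of positive reports; this is not the model of the present paper), one compares the posterior after two positive reports from the same instrument with the posterior after one positive report from each of two independent instruments:

```python
def posterior_same(h, rho, a):
    """P(hypothesis | two positive reports from ONE instrument).
    A reliable instrument reports the truth; a randomiser reports
    'positive' with probability a regardless of the truth."""
    num = h * (rho + (1 - rho) * a ** 2)          # true: reliable, or randomiser twice lucky
    den = num + (1 - h) * (1 - rho) * a ** 2      # false: only a randomiser reports positive
    return num / den

def posterior_diff(h, rho, a):
    """P(hypothesis | one positive report from each of TWO independent instruments)."""
    like_true = (rho + (1 - rho) * a) ** 2        # each report independently explained
    like_false = ((1 - rho) * a) ** 2
    num = h * like_true
    den = num + (1 - h) * like_false
    return num / den

# With a "yes-man" randomiser (a > .5), varied evidence wins at these parameters:
h, rho, a = 0.5, 0.5, 0.9
print(posterior_same(h, rho, a), posterior_diff(h, rho, a))
```

Sweeping a below .5 in this sketch shows the same qualitative reversal the abstract describes: two positives from one instrument can then confirm more than two independent ones.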
In order to account for the “too-good-to-be-true” intuition, and for the related suspicion of systematic bias, we developed a model in which the instrument may either be reliable but affected by random error, or unreliable and systematically biased towards delivering positive reports (though not deterministically so).
In our model the VET fails as well, but the area of failure is considerably smaller and affects borderline cases where the ratio of false to true positive reports becomes favourable for the biased instrument. In our case, VET failure simply follows from the fact that receiving two positive reports from the same instrument increases the probability that it is the positively biased one; once there, if the assumed ratio of false to true positive reports is more favourable for the biased instrument, then receiving the two reports is more confirmatory if they come from the same instrument. Otherwise, if the ratio of false to true positive reports is unfavourable for the biased instrument with respect to the other one, the two reports are more confirmatory if they come from independent instruments, and the VET therefore holds.
We also identified the settings in which the strength of the tested consequences of the hypothesis matters for the VET's holding or failing, and discovered why. Finally, we explain the apparently counterintuitive result in Bovens and Hartmann's model according to which the area of VET failure grows as the number of reports increases.
Katia Tentori
Evaluating the descriptive accuracy of forecasting models.
Alternative measures have been proposed to evaluate the accuracy of forecasts. I will report the results of two experiments that compare the adequacy of three scoring rules as descriptions of accuracy judgments in a probabilistic prediction context.
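The abstract does not say which scoring rules were tested, but the three classic candidates in this literature are the quadratic (Brier), logarithmic, and spherical rules; a minimal sketch of how each scores a probabilistic forecast against an observed outcome:

```python
import math

def brier_score(probs, outcome):
    """Quadratic (Brier) score: squared distance between the forecast
    vector and the 0/1 outcome vector. Lower is better."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(probs))

def log_score(probs, outcome):
    """Logarithmic score: negative log of the probability assigned to
    the realised outcome. Lower is better, as written here."""
    return -math.log(probs[outcome])

def spherical_score(probs, outcome):
    """Spherical score: probability of the outcome divided by the
    Euclidean norm of the forecast. Higher is better."""
    return probs[outcome] / math.sqrt(sum(p * p for p in probs))

forecast = [0.7, 0.2, 0.1]   # forecast over three mutually exclusive outcomes
observed = 0                 # outcome 0 occurred
print(brier_score(forecast, observed),
      log_score(forecast, observed),
      spherical_score(forecast, observed))
```

All three are strictly proper (a forecaster maximises expected score only by reporting honest probabilities), which is why they compete as normative standards; the experiments reported here ask which best describes human accuracy judgments.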
