Lorenzo Sartori, MA
October 1st to December 31st, 2024
Affiliation: IMT School for Advanced Studies Lucca
Research project:
Visual representation and experts’ interpretation
The overarching aim of my research is to improve our understanding of the challenges involved in converting knowledge about a representation into knowledge about a target system.
In the immediate future, I am interested in understanding how expert interpretation is involved in the evidential evaluation of visual representations, especially in medical contexts. During my stay at the IVC, I would like to begin investigating a particularly novel area: the use of AI for image analysis in clinical contexts, and the expert judgement strategies employed to assess it.
This work on AI image analysis may have important philosophical implications for the concept of representation more generally. In the philosophical framework I adopted in my PhD thesis, interpretation is what makes something a symbol and, a fortiori, a representation of a target system. At the same time, the hope for AI technology is that it will not need interpretation at all: machines will be able to detect novel patterns and objective similarities, and eventually make inferences, without any higher-order interpretation of images. Indeed, they would allegedly be better than us precisely because they are free of any pre-imposed interpretation of what they are “looking at”.
This opens a set of epistemological questions about the use of AI to interpret pictures and other forms of representation: whether we are moving towards a use of representation that is interpretation-free (or at least one where the concept of interpretation is no longer appropriate for AI readings of images), or whether, alternatively (and in my view more plausibly), this alleged neutrality of AI analysis of representations is itself just a new form of interpretation.
Lecture
Scientific pictures, models, and their justification
Philosophy of Science Colloquium Talk
Logik Café Lecture
Date: November 07, 2024
Time: 4:45–6:15 pm
Venue: Lecture Hall 2G, NIG, Universitätsstraße 7, 2nd floor, 1090 Vienna
Abstract:
In this paper, I first show that similarity accounts of scientific pictures fail for more realistic cases. My primary case study is the picture of a black hole, from which I develop an interpretation-based account of pictorial representation analogous to how models represent: a picture represents a designated target system iff, once interpreted, it exemplifies properties that are then imputed to the target via a de-idealising function. I then show that the justification of inferences from pictures crucially depends on their causal mechanisms of production, in contrast with the standard justificatory strategies we employ for model inferences.