I argue that ML models used in science function as highly idealized toy models. If we treat ML models as a type of highly idealized toy model, then we can deploy standard representational and epistemic strategies from the toy-model literature to explain why ML models can still deliver epistemic success despite their lack of similarity to their targets.
One of the main worries with machine learning model opacity is that we cannot know enough about how ...
This chapter responds to Michael Tamir and Elay Shech’s chapter “Understanding from Deep Learning Mo...
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic...
Drawing on ‘interpretational’ accounts of scientific representation, I argue that the use of so-call...
Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining...
This paper places into context how the term model in machine learning (ML) contrasts with traditiona...