Research

I am working on the following projects.

Sample Representation in the Social Sciences

This project examines what it means for a sample to be “representative”, discusses the challenges of gathering samples in the social sciences, and proposes practical mitigations for those challenges. I begin by distinguishing a random sampling strategy from a representative sample: an ideally random sampling strategy yields, in the limit, an ideally representative sample via the Law of Large Numbers. However, I argue that ideally random sampling is impossible to achieve with human subjects, and that any deviation from this ideal breaks the guaranteed link to representativeness. I then introduce a graded conception of sample representation, on which a sample may be more or less representative within a specific sampling context, for a specific research purpose. I argue that this conception gives a much more realistic picture of sampling practice and of how it may be improved. I end with a few practical proposals for such improvements.
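For concreteness, here is a minimal sketch of the limit claim, in my own notation rather than the paper’s: suppose a population characteristic has true prevalence p, and units are drawn independently and uniformly at random, with X_i = 1 if the i-th sampled unit has the characteristic and X_i = 0 otherwise. The Law of Large Numbers then gives

\[ \hat{p}_n \;=\; \frac{1}{n}\sum_{i=1}^{n} X_i \;\longrightarrow\; p \quad \text{almost surely as } n \to \infty, \]

so the sample proportion matches the population proportion in the limit. Any departure from independent, uniform draws removes this guarantee, which is the sense in which the link between random sampling and representativeness is fragile.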

Realism about Psychological Traits

This project argues that current ways of measuring psychological traits or aptitudes cannot provide the kind of evidence needed to support the conclusions we would like to draw from these constructs. First, I give a theoretical overview of the nature of traits. Looking at discussions of personality, intelligence, and well-being, I identify four important features associated with calling something a “trait”: Stability, Biological Basis, Universality, and Causal Efficacy. Second, I discuss some of the existing operationalizations used to measure these features and the criteria by which a particular measuring instrument is deemed valid or reliable. I assess the degree to which these measurement strategies can successfully produce evidence for the existence of the four features identified above. Third, I introduce a pragmatic perspective, according to which whether a trait has a certain property matters just in case we plan to use that property of the trait for some practical goal. This picture is not meant to replace the epistemic perspective of discovering the nature of traits for the sake of psychological understanding; rather, it is meant to provide more manageable goals in the face of the measurement difficulties discussed above. I end with some reflections on what realism means in this context, as well as the kind of evidence that might decide it.

Statistical Learning Theory and the Problem of Induction

One “easier” form of the problem of induction questions our ability to pick out true regularities in nature from limited data, under the assumption that such regularities do exist. Harman and Kulkarni (2012) take this problem to be the challenge of identifying precise conditions under which the method of picking hypotheses based on limited datasets is or is not reliable. They identify an influential result from statistical learning theory, hereafter referred to as the VC theorem (Vapnik and Chervonenkis, 2015), which states that, provided the starting hypothesis set has finite VC dimension, the hypothesis chosen from it converges to the true regularity as the size of the dataset goes to infinity.
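For readers who want the formal content, one common textbook formulation of the convergence result behind this claim runs as follows (a sketch in my own notation; constants differ across presentations, and it is not meant to reproduce Harman and Kulkarni’s exact statement). Let R(h) be the true error of hypothesis h, let \(\hat{R}_n(h)\) be its empirical error on n independent samples, and let \(\Pi_H\) be the growth function of the hypothesis class H. Then

\[ \Pr\Big( \sup_{h \in H} \big| R(h) - \hat{R}_n(h) \big| > \varepsilon \Big) \;\le\; 4\,\Pi_H(2n)\, e^{-n\varepsilon^2/8}. \]

If H has finite VC dimension d, Sauer’s lemma gives \(\Pi_H(m) \le (em/d)^d\), so the right-hand side goes to 0 as n grows, and choosing the hypothesis with the best empirical performance becomes reliable in the limit.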

This result seems to provide us with a condition (i.e., having finite VC dimension) under which a method (i.e., choosing a hypothesis based on its performance over data) is reliable. Indeed, Harman and Kulkarni take this result to answer the form of the problem of induction they have identified. This paper examines that claim. By working through how the VC theorem may be construed as an answer, and through the connection between the VC theorem in statistical learning theory and the NIP property in model theory, I conclude that the VC theorem cannot give us the kind of general answer that Harman and Kulkarni’s response to the problem of induction requires.

A shorter version of the draft, presented at the 2018 PSA meeting, can be found here. The paper has not been published.

This project was completed under the supervision of Sean Walsh, currently at UCLA.

Intuitionistic Probabilism in Epistemology

This paper examines the plausibility of a thesis of probabilism based on intuitionistic logic and lays out the difficulties faced by such a program. The paper starts by motivating intuitionistic logic as the logic of investigation, along lines of reasoning similar to those found in Bayesian epistemology. It then considers two existing axiom systems for intuitionistic probability functions, due to Weatherson (2003) and to Roeper and Leblanc (1999), and discusses the relationship between the two. It then shows that a natural adaptation of accuracy arguments in the style of Joyce (1998) and de Finetti (1974) to these systems fails. The paper concludes with some philosophical reflections on these results.
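For orientation, probability relative to a background consequence relation ⊢ is often axiomatized along roughly the following lines (an illustrative sketch only, not a reproduction of the exact systems of Weatherson or Roeper and Leblanc):

\[ 0 \le P(A) \le 1, \qquad P(\bot) = 0, \qquad P(\top) = 1, \]
\[ \text{if } A \vdash B \text{ then } P(A) \le P(B), \qquad P(A \vee B) + P(A \wedge B) = P(A) + P(B). \]

Against classical logic these conditions recover standard probabilism; against intuitionistic logic, where A ∨ ¬A is not a theorem, P(A) + P(¬A) can fall below 1, which already marks a departure from the classical setting.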

The paper has not been published. You can read a draft of it here. It was presented at the 2018 Philosophy of Logic, Mathematics, and Physics Graduate Conference (LMP) at the University of Western Ontario.

This project was completed under the supervision of Simon Huttegger, UC-Irvine.