What is your area of research?
I am a PhD student at Cornell in Computer Science. My primary focus is on analyzing the social and ethical consequences of algorithms. I work to understand how the rising presence of algorithms in our lives—in such diverse realms as hiring, criminal justice, and online interactions—presents both opportunities and challenges as we strive for a fair and free society. This can include analyzing a particular algorithm used in practice, creating theoretical models for the effects these algorithms have in the real world, and reasoning about desired properties we’d like to have going forward.
What inspired you to choose this field of study? How did you end up in such an offbeat, unconventional, and unique career?
Many of the algorithms used most frequently today have been around for decades; however, they were primarily designed from a purely theoretical perspective, not necessarily with the intent of using them to make consequential decisions about humans. To me, this is the perfect opportunity to apply my technical knowledge to further the discussion of how these algorithms should be used in practice and what choices we should make in algorithm design in the future. Because much of my work is interdisciplinary, I have the opportunity to meet and work with a diverse set of researchers from a variety of backgrounds, which is really appealing to me.
Why is this research important?
Algorithmic decision-making has become ubiquitous, and its prevalence will only increase in the foreseeable future. A growing line of journalism and scholarly research suggests that in many cases, algorithms don’t naturally adhere to the requirements we’d like to impose on them. In some sense, this is a feature of their design—they’re usually optimized toward one specific task, without regard for any other constraints. Nick Bostrom pointed this out in a famous thought experiment in which a hypothetical artificial intelligence designed to maximize the production of paper clips ends up destroying humanity in its quest for efficiency. While this may be an extreme example, it has analogues in real life: for instance, an algorithm designed to maximize ad efficiency may decide not to advertise certain jobs to women. The point is that algorithms will generally fail to satisfy normative or legal constraints if we fail to make those constraints explicit.
How has your background influenced your scholarship?
I did my undergrad in Electrical Engineering and Computer Science at UC Berkeley, a school that places a strong emphasis on social justice. This has definitely shaped my thinking, pushing me beyond the purely technical considerations of algorithm design toward a more multidisciplinary perspective in my research.
What else has influenced your thinking as a researcher or scholar?
In my field, we’re fortunate to have a strong presence of investigative data journalism. Media outlets like ProPublica do an excellent job of bringing to light serious concerns over how algorithms and machine learning affect people’s lives, and this is a major help in designing algorithms with these effects in mind. In the current social and political climate, it seems that more and more people are beginning to question the inequities produced and exacerbated by technology, and this backdrop definitely shapes the way I think about the technical issues we face.
I understand you were recently selected for the 2018-19 Microsoft Ph.D. Fellowship Program. Congratulations! How did you learn about this fellowship and what was the application process like?
I actually had the opportunity to spend last summer at Microsoft Research in New York City, which is where I first learned of this fellowship. My mentors from Microsoft as well as my department encouraged me to submit a research statement, and I was later asked to interview with researchers from Microsoft as the final step of the application process.
What opportunities will this fellowship provide for you that you perhaps wouldn’t have had access to otherwise?
This fellowship will make it much easier for me to continue research with Microsoft researchers. They have a strong group focused on Fairness, Accountability, Transparency, and Ethics in algorithms and machine learning, and the Microsoft fellowship gives me the opportunity to work with them.
Any advice for other graduate students interested in applying for fellowships or grants?
I’m not sure I have any non-trivial advice here, but what worked for me was finding a group of people who care about the same things I care about (in my case, the researchers at Microsoft). Interviewing with them, it was clear that they prioritize the development of fair and transparent algorithms, which aligns well with my research interests.
Why did you choose Cornell to pursue your degree?
My advisor, Jon Kleinberg, is the biggest reason I chose to come here. On top of being an amazing researcher over a wide range of fields, he’s a wonderful mentor. I’ve learned a lot from his perspective, and he’s made Cornell a great place for me. We also have a strong group of researchers in the Computer Science and Information Science departments who work on a lot of emerging socio-technical environments that relate to my research. Karen Levy and Solon Barocas, in particular, do really exciting work, and I’m fortunate to be able to discuss my research with them and learn from their experience.
What’s next for you?
In many applications, algorithms and humans work together to form joint decision-making systems—for example, many courtrooms around the country use predictive risk assessment tools to inform the judge of a defendant’s potential for future criminality. There are many reasons why you might find this problematic, but our understanding of the effects these tools have is still limited. We don’t know whether judges are deferring to the algorithms, ignoring them altogether, or some mix of the two. This pattern holds across several domains: an algorithm provides information or a recommendation to a human, and the human makes the final decision about the outcome in question. To make sure these algorithmic tools aren’t having adverse impacts, we first need to understand how humans respond to the information they provide.
A lot of my past research draws on behavioral economics, and I want to combine some of that analysis with work on algorithmic fairness. Jon and I recently created and analyzed a theoretical model of implicit bias, and I’m hoping to continue studying how behavioral and human biases shape the way we think about joint human-algorithm decision-making systems. My hope is that a sounder understanding of human behavior will improve our ability to reason about the actual impacts algorithms have on our lives.