Faculty Organizers

Jenna Burrell

Associate Professor, School of Information

Jenna Burrell is an Associate Professor in the School of Information at UC Berkeley. She has a PhD in Sociology from the London School of Economics. Before pursuing her PhD, she was an Application Concept Developer in the People and Practices Research Group at Intel Corporation. Broadly, her research is concerned with the new challenges and opportunities of digital connectivity among marginalized populations. Her most recent research topics include (1) fairness and transparency in algorithmic classification and (2) Internet connectivity issues in rural parts of the USA.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am concerned with how trends toward social classification via algorithms can lead to divergent consequences for majority and minority groups. How do these new mechanisms of classification impact the life circumstances of individuals or reshape their potential for social mobility? To what extent can problems of algorithmic opacity and fairness be addressed by technical solutions? What are the limits of a technical fix for unfairness? What other tools or methods are available for addressing opacity or discrimination in algorithmic classification?

Domain of Application: Fraud/Spam, Network Security, Information Search & Filtering, General Machine Learning

Deirdre K. Mulligan

Associate Professor, School of Information

Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, and an affiliated faculty on the new Hewlett-funded Berkeley Center for Long-Term Cybersecurity. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries conducted with UC Berkeley Law Prof. Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. Mulligan recently chaired a series of interdisciplinary visioning workshops on Privacy by Design with the Computing Community Consortium to develop a research agenda. She is a member of the National Academy of Sciences Forum on Cyber Resilience. She is Chair of the Board of Directors of the Center for Democracy and Technology, a leading advocacy organization protecting global online civil liberties and human rights; a founding member of the standing committee for the AI 100 project, a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play; and a founding member of the Global Network Initiative, a multi-stakeholder initiative to protect and advance freedom of expression and privacy in the ICT sector, and in particular to resist government efforts to use the ICT sector to engage in censorship and surveillance in violation of international human rights standards. She is a Commissioner on the Oakland Privacy Advisory Commission. Prior to joining the School of Information, she was a Clinical Professor of Law, founding Director of the Samuelson Law, Technology & Public Policy Clinic, and Director of Clinical Programs at the UC Berkeley School of Law.

Mulligan was the policy lead for the NSF-funded TRUST Science and Technology Center, which brought together researchers at UC Berkeley, Carnegie Mellon University, Cornell University, Stanford University, and Vanderbilt University, and a PI on the multi-institution NSF-funded ACCURATE center. In 2007 she was a member of an expert team charged by the California Secretary of State to conduct a top-to-bottom review of the voting systems certified for use in California elections. This review investigated the security, accuracy, reliability, and accessibility of electronic voting systems used in California. She was a member of the National Academy of Sciences Committee on Authentication Technology and Its Privacy Implications; the Federal Trade Commission’s Federal Advisory Committee on Online Access and Security; and the National Task Force on Privacy, Technology, and Criminal Justice Information. She was a vice-chair of the California Bipartisan Commission on Internet Political Practices and chaired the Computers, Freedom, and Privacy (CFP) Conference in 2004. She co-chaired Microsoft’s Trustworthy Computing Academic Advisory Board with Fred B. Schneider from 2003 to 2014. Prior to Berkeley, she served as staff counsel at the Center for Democracy & Technology in Washington, D.C.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? Values in design; governance of technology and governance through technology to support human rights/civil liberties; administrative law.

Domain of Application: Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), discrimination, privacy, cybersecurity, regulation generally.

Faculty

Joshua Blumenstock

Assistant Professor, School of Information

Joshua Blumenstock is an Assistant Professor at the UC Berkeley School of Information, and the Director of the Data-Intensive Development Lab. His research lies at the intersection of machine learning and development economics, and focuses on using novel data and methods to better understand the causes and consequences of global poverty. At Berkeley, Joshua teaches courses in machine learning and data-intensive development. Previously, Joshua was on the faculty at the University of Washington, where he founded and co-directed the Data Science and Analytics Lab, and led the school’s Data for Social Good initiative. He has a Ph.D. in Information Science and an M.A. in Economics from UC Berkeley, and Bachelor’s degrees in Computer Science and Physics from Wesleyan University. He is a recipient of the Intel Faculty Early Career Honor, a Gates Millennium Grand Challenge award, a Google Faculty Research Award, and is a former fellow of the Thomas J. Watson Foundation and the Harvard Institutes of Medicine.

Domains of Application: Credit, Fraud/Spam, Network Security, General Machine Learning (not domain specific), Economics

Marion Fourcade

Professor, Sociology

I am a Professor of Sociology at UC Berkeley and an associate fellow of the Max Planck Sciences Po Center on Coping with Instability in Market Societies (MaxPo). A comparative sociologist by training and taste, I am interested in national variations in knowledge and practice. My first book, Economists and Societies (Princeton University Press, 2009), explored the distinctive character of the discipline and profession of economics in three countries. A second book, The Ordinal Society (with Kieran Healy), is under contract. This book investigates new forms of social stratification and morality in the digital economy. Other recent research focuses on the valuation of nature in comparative perspective; the moral regulation of states; the comparative study of political organization (with Evan Schofer and Brian Lande); the microsociology of courtroom exchanges (with Roi Livne); the sociology of economics (with Etienne Ollion and Yann Algan, and with Rakesh Khurana); and the politics of wine classifications in France and the United States (with Rebecca Elliott and Olivier Jacquet).

Domain of Application: Credit, General Machine Learning (not domain specific), Health, Employment/Hiring.

Moritz Hardt

Assistant Professor, Electrical Engineering and Computer Science

My mission is to build theory and tools that make the practice of machine learning across science and industry more robust, reliable, and aligned with societal values.

Domain of Application: General Machine Learning

Sonia Katyal

Distinguished Haas Professor, School of Law

Professor Sonia Katyal’s award-winning scholarly work focuses on the intersection of technology, intellectual property, and civil rights (including antidiscrimination, privacy, and freedom of speech).

Prof. Katyal’s current projects focus on the intersection between internet access and civil/human rights, with a special emphasis on the right to information; artificial intelligence and discrimination; trademarks and advertising; source code and the impact of trade secrecy; and a variety of projects on the intersection between gender and the commons. As a member of the university-wide Haas LGBT Cluster, Professor Katyal also works on matters regarding law, gender and sexuality.

Professor Katyal’s recent publications include The Numerus Clausus of Sex, in the University of Chicago Law Review; Technoheritage, in the California Law Review; Rethinking Private Accountability in the Age of Artificial Intelligence, in the UCLA Law Review; The Paradox of Source Code Secrecy, in the Cornell Law Review (forthcoming); Transparenthood in the Michigan Law Review (with Ilona Turner) (forthcoming); and Platform Law and the Brand Enterprise in the Berkeley Journal of Law and Technology (with Leah Chan Grinvald).

Katyal’s past projects have studied the relationship between informational privacy and copyright enforcement; the impact of advertising, branding and trademarks on freedom of expression; and issues relating to art and cultural property, focusing on new technologies and the role of museums in the United States and abroad.

Professor Katyal is the co-author of Property Outlaws (Yale University Press, 2010) (with Eduardo M. Peñalver), which studies the intersection between civil disobedience and innovation in property and intellectual property frameworks. Professor Katyal has won several awards for her work, including an honorable mention in the American Association of Law Schools Scholarly Papers Competition and a Yale Cybercrime Award, and has twice received a Dukeminier Award from the Williams Project at UCLA for her writing on gender and sexuality.

She has previously published with a variety of law reviews, including the Yale Law Journal, the University of Pennsylvania Law Review, Washington Law Review, Texas Law Review, and the UCLA Law Review, in addition to a variety of other publications, including the New York Times, the Brooklyn Rail, Washington Post, CNN, Boston Globe’s Ideas section, Los Angeles Times, Slate, FindLaw, and the National Law Journal. Katyal is also the first law professor to receive a grant through the Creative Capital/Warhol Foundation for her forthcoming book, Contrabrand, which studies the relationship between art, advertising, and trademark and copyright law.

In March of 2016, Katyal was selected by U.S. Commerce Secretary Penny Pritzker to be part of the inaugural U.S. Commerce Department’s Digital Economy Board of Advisors. Katyal also serves as an Affiliate Scholar at Stanford Law’s Center for Internet and Society, and is a founding advisor to the Women in Technology Law organization. She also serves on the Executive Committee for the Berkeley Center for New Media (BCNM), on the Advisory Board for Media Studies at UC Berkeley, and on the Advisory Board of the CITRIS Policy Lab.

Before entering academia, Professor Katyal was an associate specializing in intellectual property litigation in the San Francisco office of Covington & Burling. Professor Katyal also clerked for the Honorable Carlos Moreno (later a California Supreme Court Justice) in the Central District of California and the Honorable Dorothy Nelson in the U.S. Court of Appeals for the Ninth Circuit.

Shreeharsh Kelkar

Lecturer, Interdisciplinary Studies

I study computing infrastructures and their relationship to work, labor, and expertise.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? My new project tries to understand the tensions in data science between domain expertise and machine learning; this is an issue that is salient to the question of opacity and interpretability.

Domain of Application: General Machine Learning (not domain specific), Health, Employment/Hiring, Education.

Niloufar Salehi

Assistant Professor, School of Information

Niloufar Salehi is an Assistant Professor in the School of Information at UC Berkeley. Her research interests are in social computing, technologically mediated collective action, digital labor, and, more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier HCI venues, including CHI and CSCW. Through building computational social systems in collaboration with existing communities, controlled experiments, and ethnographic fieldwork, her research contributes to the design of alternative social configurations online.

Her current project looks at affect recognition used in automated hiring from a fairness and social justice perspective.

Domains of Application: Employment/Hiring, content filtering algorithms (e.g., YouTube, Facebook, Twitter)

Morgan G. Ames

Assistant Adjunct Professor in the School of Information and Interim Associate Director of Research for the Center for Science, Technology, Medicine, and Society

Morgan G. Ames is an Assistant Adjunct Professor in the School of Information and Interim Associate Director of Research for the Center for Science, Technology, Medicine, and Society at the University of California, Berkeley. Morgan’s research explores the role of utopianism in the technology world, and the imaginary of the “technical child” as fertile ground for this utopianism. Based on eight years of archival and ethnographic research, she is finishing a book manuscript on One Laptop per Child, which explores the motivations behind the project and the cultural politics of a model site in Paraguay.

Morgan was previously a postdoctoral researcher at the Intel Science and Technology Center for Social Computing at the University of California, Irvine, working with Paul Dourish. Morgan’s PhD is in communication (with a minor in anthropology) from Stanford, where her dissertation won the Nathan Maccoby Outstanding Dissertation Award in 2013. She also has a B.A. in computer science and M.S. in information science, both from the University of California, Berkeley. See http://bio.morganya.org for more.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? Machine learning techniques, particularly deep neural networks, have become subjects of intense utopianism and dystopianism in the popular press. Alongside this rhetoric, scholars have been finding that these new machine learning techniques are not and likely never will be bias-free. I am interested in exploring both of these topics and how they interconnect.

Domain of Application: General Machine Learning (not domain specific).

Senior Researchers

Michael Carl Tschantz

Senior Researcher, International Computer Science Institute

Michael Carl Tschantz received a Ph.D. from Carnegie Mellon University in 2012 and a Sc.B. from Brown University in 2005, both in Computer Science. Before becoming a researcher at the International Computer Science Institute in 2014, he did two years of postdoctoral research at the University of California, Berkeley. He uses models from artificial intelligence and statistics to solve problems of privacy and security. His interests also include experimental design, formal methods, and logics. His current research includes automating information flow experiments, circumventing censorship, and securing machine learning. His dissertation formalized and operationalized what it means to use information for a purpose.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
My prior work has looked at detecting discrimination in online advertising. My ongoing work is looking at how people understand mathematical models of discrimination.

Domains of Application: General machine learning (not domain specific), Advertising

Andrew Smart

Researcher, Google

Andrew Smart is a researcher at Google in the Trust & Safety organization, working on algorithmic fairness, opacity, and accountability. His research at Google focuses on internal ethical audits of machine learning systems, causality in machine learning, and understanding structural vulnerability in society. His background is in philosophy, anthropology, and cognitive neuroscience. He worked on the neuroscience of language at NYU. He was then a research scientist at Honeywell Aerospace, working on machine learning for neurotechnology as well as aviation human factors. He was a Principal Research Scientist at Novartis in Basel, Switzerland, working on connected medical devices and machine learning in clinical applications. Prior to joining Google he was a researcher at Twitter working on misinformation.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
Currently I am very interested in the scientific and epistemic foundations of machine learning and why we believe what algorithms say at all. Is there a philosophy of science of machine learning? What kind of knowledge does machine learning produce? Is it reliable? Is it scientific? I am also very worried about structural inequality and what impact the introduction of massive-scale algorithms has on our stratified society. So far the evidence indicates that, in general, algorithms are entrenching unjust social systems and hierarchies. Instead, can we use machine learning to help dismantle oppressive social systems?

Domains of Application: General machine learning (not domain specific)

Jessica Cussins Newman

Research Fellow, UC Berkeley Center for Long-Term Cybersecurity

Jessica Cussins Newman is a Research Fellow at the UC Berkeley Center for Long-Term Cybersecurity, where she focuses on digital governance and the security implications of artificial intelligence. In her spare time, Jessica works as a consultant on AI Policy for the Future of Life Institute and The Future Society. Jessica was a 2016-17 International and Global Affairs Student Fellow at Harvard’s Belfer Center, and has held research positions with Harvard’s Program on Science, Technology & Society, the Institute for the Future, and the Center for Genetics and Society. Jessica received her master’s degree in public policy from the Harvard Kennedy School and her bachelor’s in anthropology from the University of California, Berkeley with highest distinction honors. She has published widely on the implications of emerging technologies in numerous outlets, including The Los Angeles Times, The Hill, The Pharmaceutical Journal, Huffington Post, and CNBC.

Joshua Kroll

Assistant Professor of Computer Science at the Naval Postgraduate School

Joshua Kroll is an Assistant Professor of Computer Science at the Naval Postgraduate School, studying the relationship between governance, public policy, and computer systems. Joshua was previously a member of AFOG as a postdoctoral research scholar in the UC Berkeley School of Information. His research focuses on how technology fits within a human-driven, normative context and how to operationalize systems which are supportive of human values in a reliable way. He is most interested in the governance of automated decision-making systems, especially those using machine learning. His paper “Accountable Algorithms” in the University of Pennsylvania Law Review received the Future of Privacy Forum’s Privacy Papers for Policymakers Award in 2017.

Joshua’s previous work spans accountable algorithms, cryptography, software security, formal methods, Bitcoin, and the technical aspects of cybersecurity policy. He also spent two years working on cryptography and internet security at the web performance and security company Cloudflare. Joshua holds a PhD in computer science from Princeton University, where he received the National Science Foundation Graduate Research Fellowship in 2011.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?

I am interested in how we can be sure we’re building concrete systems that support abstract ideals such as fairness, privacy, or security, and particularly in developing processes that reliably govern such systems so that they perform as desired. That is, I’m interested in how we bring the ideas discussed at AFOG into practice in repeatable ways.

Domain of Application: Credit, Criminal Justice, Fraud/Spam, Network Security, Information Search & Filtering, General Machine Learning (not domain specific), Health, Employment/Hiring, Housing, Political/redistricting.

Stuart Geiger

Staff Ethnographer & Principal Investigator, Berkeley Institute for Data Science

Stuart Geiger is a Staff Ethnographer & Principal Investigator at the Berkeley Institute for Data Science at UC Berkeley, where he studies various topics about the infrastructures and institutions that support the production of knowledge. His Ph.D. research at the UC Berkeley School of Information investigated the role of automation in the governance and operation of Wikipedia and Twitter. He has studied topics including moderation and quality control processes, human-in-the-loop decision making, newcomer socialization, cooperation and conflict around automation, the roles of support staff and technicians, and bias, diversity, and inclusion. He uses ethnographic, historical, qualitative, quantitative, and computational methods in his research, which is grounded in the fields of Computer-Supported Cooperative Work, Science and Technology Studies, and communication and new media studies.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I study how people design, develop, deploy, understand, negotiate, contest, maintain, and repair algorithmic systems within communities of knowledge production. Most of the communities I study — including Wikipedia and the scientific reproducibility / open science movement — have strong normative commitments to openness and transparency. I study how these communities are using (and not using) various technologies and practices around automation, including various forms of machine learning, collaboratively-curated training data sets, data-driven decision-making processes, human-in-the-loop mechanisms, documentation tools and practices, code and data repositories, auditing frameworks, containerization, and interactive notebooks.

Domain of Application: Information Search & Filtering, General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci), Education.

Madeleine Clare Elish

Program Director, Data & Society Research Institute

Madeleine leads the AI on the Ground Initiative at Data & Society, where she and her team investigate the promises and risks of integrating AI technologies into society. Through human-centered and ethnographic research, AI on the Ground sheds light on the consequences of deploying AI systems beyond the research lab, examining who benefits, who is harmed, and who is accountable. The initiative’s work has focused on how organizations grapple with the challenges and opportunities of AI, from changing work practices and responsibilities to new ethics practices and forms of AI governance.

As a researcher and anthropologist, Madeleine has worked to reframe debates about the ethical design, use, and governance of AI systems. She has conducted field work across varied industries and communities, ranging from the Air Force, the driverless car industry, and commercial aviation to precision agriculture and emergency healthcare. Her research has been published and cited in scholarly journals as well as publications including The New York Times, Slate, The Guardian, Vice, and USA Today. She holds a PhD in Anthropology from Columbia University and an S.M. in Comparative Media Studies from MIT.

Domains of Application: Employment/Hiring, Information Search & Filtering, General Machine Learning (not domain specific), Health/Medicine, Law/Policy, Scholarship (digital humanities, computational social sci), Agriculture

Postdoctoral Scholars

Rebecca C. Fan

Visiting Scholar, UC Berkeley Center for Science, Technology, Medicine, & Society (CSTMS)

Rebecca Fan is a social scientist with an interdisciplinary background (anthropology, international human rights law and politics, socio-legal studies). She is currently a visiting scholar at UC Berkeley’s Center for Science, Technology, Medicine, & Society (CSTMS). Prior to completing her PhD, she worked for a number of human rights organizations (e.g., Amnesty International) and contributed to advocacy work at regional and global forums. Her dissertation brings together fieldwork at the United Nations and participatory action research to investigate what she identifies as the epistemological struggle of governance, via regime analysis and institutional studies. Continuing her engagement with global civil society, she currently serves as a contributing member on the Commission on Environmental, Economic and Social Policy (CEESP), one of the six Commissions of the International Union for Conservation of Nature. When time permits, she plays Indonesian gamelan music and enjoys hiking and floral arrangements.

How do your research interests relate to the topics of ‘algorithmic opacity/transparency’ and/or ‘fairness?’ My interest in the subject arises from my concern about a recent socio-technical phenomenon with a double-edged-sword effect that needs to be better articulated and addressed: e.g., 1) how it is simultaneously empowering for some and disempowering for others; 2) how we are actually getting rich information but poor data; or 3) how we tend to trust the machine to be objective, only to see human prejudices amplified by machines taught and designed by humans. Furthermore, algorithms often live in a black box that is likely to be proprietary. As such, it is difficult to monitor or evaluate them for accountability or fairness. It also keeps us from seeing power asymmetries clearly.

These are a few of the issues that occupy my thoughts and that will continue to shape the work in progress I am developing now.

Domain of application: General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci)

Kweku Opoku-Agyemang

Postdoctoral Research Fellow, Center for Effective Global Action, Department of Agricultural and Resource Economics

Kweku Opoku-Agyemang is an (honorary) postdoctoral research fellow with the Center for Effective Global Action at UC Berkeley. His research interests span development economics, industrial organization, research transparency, ethics and technological impacts. He is an author of the book Encountering Poverty: Thinking and Acting in an Unequal World, published by the University of California Press in 2016. He was previously a Research Associate in Human-Computer Interaction and Social Computing at Cornell Tech.

My research focuses on how social science research can become more transparent with the aid of computational tools, and on the relevant challenges. I am also interested in the causes and consequences of algorithmic bias in both developed and developing countries, as well as the potential role of industrial organization in promoting algorithmic fairness in firms that focus on artificial intelligence.

Domains of Application: Credit, Criminal Justice, Information Search & Filtering, General Machine Learning, Health, Employment/Hiring, Scholarship, Education

Brandie Nonnecke

Postdoctoral Scholar, Research & Development Manager, CITRIS & the Banatao Institute

Dr. Brandie Nonnecke is the Research & Development Manager for CITRIS, UC Berkeley, and Program Director for CITRIS, UC Davis. Brandie researches the dynamic interconnections between law, policy, and emerging technologies. She studies the influence of non-binding, multi-stakeholder policy networks on stakeholder participation in internet governance and information and communication technology (ICT) policymaking. Her current research and publications can be found at nonnecke.com.

She investigates how ICTs can be used as tools to support civic participation, to improve governance and accountability, and to foster economic and social development. In this capacity, she designs and deploys participatory evaluation platforms that utilize statistical models and collaborative filtering to tap into collective intelligence and reveal novel insights, including the California Report Card, launched in collaboration with the Office of California Lt. Gov. Gavin Newsom, and the DevCAFE system, launched in Mexico, Uganda, and the Philippines to enable participatory evaluation of the effectiveness of development interventions.

Brandie received her Ph.D. in Mass Communications from The Pennsylvania State University. She is a Fellow at the World Economic Forum where she serves on the Council on the Future of the Digital Economy and Society and is chair of the Internet Society SF Chapter Working Group on Internet Governance.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I conduct research on the benefits and risks of algorithmic-based decision-making, including recommendations on how to better ensure fairness, accountability, and positive socioeconomic inclusion. This research is available at http://citris-uc.org/connected-communities/project/inclusive-ai-technology-policy-diverse-urban-future/ and through the World Economic Forum at https://www.weforum.org/agenda/2017/09/applying-ai-to-enable-an-equitable-digital-economy-and-society

Domain of Application: General Machine Learning (not domain specific), Policy and governance of AI.

David Platzer

Research Fellow, Berkeley Center for New Media

David Platzer is a recent graduate of the anthropology program at Johns Hopkins and is currently a Berggruen Institute Transformations of the Human research fellow at UC Berkeley’s Center for New Media. His dissertation research focused on neurodiversity employment initiatives in the tech industry, while his current research investigates the existential risk movement in its intersection with value alignment in AI development.

Dan Sholler

Postdoctoral Scholar, rOpenSci at the Berkeley Institute for Data Science

I study the occupational, organizational, and institutional implications of technological change using qualitative, ethnographic techniques. For example, I studied the implementation of federally mandated electronic medical records in the United States healthcare industry and found that unwanted changes in the day-to-day activities of doctors influenced a national resistance movement, ultimately leading to the revision of federal technology policy. Currently, I am conducting a comparative study of the ongoing shifts toward open science in the ecology and astronomy disciplines to identify and explain the factors that may influence engagement with, and resistance to, open science tools and communities.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in discussing and studying how the implementation of AI and other algorithmic applications might impact the day-to-day activities of workers and alter the structures of organizations. In particular, I would like to interrogate how AI-led changes might influence workers’ perceptions of what it means to be a member of an occupational or professional community and how the designers and implementers of algorithmic technologies consider these potential implications.

Domain of Application: Health, Scientific Research, Open Science (open source software, open data, open access).

Graduate Students

Shazeda Ahmed

PhD Candidate, School of Information

Shazeda is a third-year Ph.D. student at the I School. She has worked as a researcher for the Council on Foreign Relations, Asia Society, the U.S. Naval War College, Citizen Lab, Ranking Digital Rights, and the Mercator Institute for China Studies. Her research focuses on China’s social credit system, information technology policy, and China’s role in setting norms of global Internet governance.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I study China’s social credit system, which uses troves of Chinese citizens’ personal and behavioral data to assign them scores meant to reflect how “trustworthy,” law-abiding, and financially responsible they are. The algorithms used to calculate these scores are classified as either trade or state secrets, and to date it seems that score issuers cannot fully explain score breakdowns to users. There are plans to identify low scorers on public blacklists, which could discriminate against people who are unaware of how the system operates. Through my research I hope to discover how average users perceive and are navigating the system as it develops.

Domain of Application: Credit.

Sofia Dewar

Master's student MIMS '20, School of Information

Sofia is a first-year graduate student in the Master of Information Management & Systems program, with a focus in human-computer interaction. Her current research interests include affective computing, facial recognition, and workplace surveillance. Prior to pursuing graduate education, Sofia worked in technical operations and user research at Google. She received her B.A. in Cognitive Science, also from UC Berkeley, in 2015.

How do your research interests relate to the topics of ‘algorithmic opacity/transparency’ and/or ‘fairness?’
How do we evaluate fairness and create transparency when new technologies such as facial recognition and emotion detection are introduced into automated hiring?

Domains of Application: Employment/Hiring

Marc Faddoul

Master's student MIMS '19, School of Information

After completing an MS in Data Science, I came to the I School to pursue transdisciplinary interests related to information technologies. My research focuses on computational propaganda and algorithmic fairness.

YouTube has said it would recommend less conspiratorial content through its Autoplay algorithm. Can the company be held accountable for this, despite the opacity of its system? One of my projects is to measure whether this recommendation behavior is actually changing.

I am also in the process of publishing a paper on the limits and potential mitigations of the PSA, a software tool used for pre-trial risk assessment. Fairness and transparency are at the core of the value tussles.

Domains of Application: Criminal Justice, Information Search & Filtering, General Machine Learning

Thomas Krendl Gilbert

PhD Candidate, Machine Ethics and Epistemology

I am an interdisciplinary PhD candidate at UC Berkeley, affiliated with the Center for Human-Compatible AI. My prior training in philosophy, sociology, and political theory has led me to study the various technical and organizational dilemmas that emerge when experts use machine learning to aid decision-making. In my spare time I enjoy sailing and creative writing.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in how different algorithmic learning procedures (e.g. reinforcement learning) reframe classical ethical questions and recall the original problems of political economy, such as aggregating human values and preferences. This necessarily includes asking what we mean by “explainable” AI, what it means for machine learning to be “fair” when enmeshed with institutional practices, and how new forms of social autonomy are made possible through automation.

Domain of Application: Credit, Criminal Justice, General Machine Learning (not domain specific), Housing, Education, Measurement

Daniel Griffin

PhD Candidate, School of Information

Daniel Griffin is a doctoral student at the School of Information at UC Berkeley. His research interests center on the intersections of information, values, and power, looking at freedom and control in information systems. He is a co-director of UC Berkeley’s Center for Technology, Society & Policy and a commissioner on the City of Berkeley’s Disaster and Fire Safety Commission. Prior to entering the doctoral program, he completed the Master of Information Management and Systems program, also at the School of Information. Before graduate school he served as an intelligence analyst in the US Army. As an undergraduate, he studied philosophy at Whitworth University.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? In what some have called an age of disinformation, how, and with what effects, do people using search engines imagine and interact with the search engine algorithms? How do the teams of people at search engines seek to understand and satisfy the goals and behavior of people using their services? What sort of normative claims does, and possibly can, society make of the design of the search engine algorithms and services?

Domain of Application: Information Search & Filtering.

Anne Jonas

PhD Candidate, School of Information

After previously working in program management at the Participatory Culture Foundation and the Barnard Center for Research on Women, I now study education, information systems, culture, and inequality here at the I School. I am a Fellow with the Center for Technology, Society, and Policy and a Research Grantee of the Center for Long-Term Cybersecurity on several collaborative projects.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I study the use of algorithms in educational curriculum provision, assessment, evaluation, surveillance, and discipline. I am also working on a project related to “regional discrimination” that looks at how geographic markers are used to block people from certain websites and web-based services.

Domain of Application: Criminal Justice, Information Search & Filtering, Employment/Hiring, Education.

Zoe Kahn

PhD student, School of Information

Zoe Kahn is a PhD student at the UC Berkeley School of Information where she collaborates with data scientists, computer scientists, and designers to understand how technologies impact people and society, with a particular interest in AI and ethics, algorithmic decision making, and responsible innovation. As a qualitative researcher, Zoe asks questions of people and data that surface rich and actionable insights. Zoe brings an interdisciplinary background to her work that blends sociology, technology, law, and policy. She received her B.A. summa cum laude in Sociology from New York University in 2014. She is a joint fellow of the Center for Technology, Society and Policy, the Center for Long-Term Cybersecurity, and the Algorithmic Fairness and Opacity Working Group at UC Berkeley.

Zoe’s current project with four MIMS students, Coordinated Entry System Research and Development for a Continuum of Care in Northern California, is jointly funded by the Center for Technology, Society and Policy, the Center for Long-Term Cybersecurity, and the Algorithmic Fairness and Opacity Working Group at UC Berkeley.

Domains of Application: Housing

Nitin Kohli

PhD Candidate, School of Information

Nitin Kohli is a PhD student at UC Berkeley’s School of Information, working under Deirdre Mulligan. His research examines privacy, security, and fairness in algorithmic systems from technical and legal perspectives. On the technical side, Nitin employs theoretical and computational techniques to construct algorithmic mechanisms with such properties. On the legal side, Nitin explores institutional and organizational mechanisms to protect these values by examining the incentive structures and power dynamics that govern these environments. His work draws upon mathematics, statistics, computer science, economics, and law.

Prior to his PhD work, Nitin worked both as a data scientist in industry and as an academic. Within industry, Nitin developed machine learning and natural language processing algorithms to identify occurrences and locations of future risk in healthcare settings. Within academia, Nitin worked as an adjunct instructor and as a summer lecturer at UC Berkeley, teaching introductory and advanced courses in probability, statistics, and game theory. Nitin holds a Master’s degree in Information and Data Science from Berkeley’s School of Information, and a Bachelor’s degree in Mathematics and Statistics, for which he received departmental honors in statistics for his work in stochastic modeling and game theory.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? My research interests are explicitly in the construction of algorithms that preserve certain human values, such as fairness and privacy. I’m also interested in legal and policy solutions that promote and incentivize transparency and fairness within algorithmic decision making.

Domain of Application: Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), Employment/Hiring, Scholarship (digital humanities, computational social sci), Education.

Emanuel Moss

PhD candidate, Department of Anthropology / CUNY Graduate Center

Emanuel Moss researches issues of fairness and accountability in machine learning and is a research assistant on the Pervasive Data Ethics for Computational Research (PERVADE) project at the Data & Society Research Institute. He is a doctoral candidate in cultural anthropology at the CUNY Graduate Center, where he is studying the work of data science from an ethnographic perspective and the role of data scientists as producers of knowledge. He is particularly interested in how data science is shaped by technological, economic, and ethical constraints and concerns.

Emanuel holds a B.A. from the University of Illinois and an M.A. from Brandeis University. He has previously worked as a digital and spatial information specialist for archaeological and environmental projects in the United States and Turkey.

How do your research interests relate to the topics of ‘algorithmic opacity/transparency’ and/or ‘fairness?’
My research shows how fairness, transparency, and bias have emerged as matters of concern for computer scientists and policymakers. How have professional practices shifted in relation to these concerns, and how have computational understandings of these concepts influenced broader discourses around fairness, equity, and justice?

Domains of Application: General machine learning

Angela Okune

PhD Student, Anthropology, UC Irvine

Angela is a doctoral student in the Anthropology Department at the University of California, Irvine (UCI), working on questions of expertise and the politics of knowledge production in technology & development in Africa. Angela is a recipient of a 2016 Graduate Research Fellowship from the National Science Foundation. From 2010 to 2015, as a co-founder of the research department at iHub, Nairobi’s innovation hub for the tech community, Angela provided strategic guidance for the growth of tech research in Kenya.

How do your research interests relate to the topics of ‘algorithmic opacity/transparency’ and/or ‘fairness?’ An unprecedented amount of digital information is collected, stored, and analyzed the world over to predict what people do, think, and buy. But what epistemological standpoints are assumed in the design and application of algorithmic technologies that structure everyday interactions, digital and otherwise? I am interested in how seemingly contradictory notions of scale, standardization and personalization are simultaneously leveraged in promises of algorithmic technologies and what the implementation of AI in various contexts across the African continent reveals.

Domain of application: General Machine Learning (not domain specific), Scholarship (digital humanities, computational social science)

Andrew Chong

PhD Student, School of Information

Andrew Chong is a PhD student at the UC Berkeley School of Information, where his research focuses on how the use of algorithms influences market competition and outcomes. Previously, Andrew worked at the National Bureau of Economic Research examining the impact of behavioral interventions in healthcare and consumer finance. He also has experience developing and implementing pricing models for prescription health insurance (PicWell), and developing dashboards for city governments (with Accela and Civic Insight).

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
I am interested in the increasing role algorithms and firm-controlled marketplaces play in economic life, and their wider implications for fairness, efficiency and competition.

Domains of Application: General Machine Learning (not domain specific), Law/Policy, Scholarship (digital humanities, computational social sci), Online Markets, Algorithmic Pricing

Ji Su Yoo

PhD Student, School of Information

Amy Turner

Master's student MIMS '20, School of Information

Amy is a second-year Master’s student at UC Berkeley’s School of Information, where she is focusing on user experience research and human-centered design. Amy uses qualitative research to understand how technology can be designed to support the values of those using it and those who are affected by it. She has worked with users across many domains, including nonprofit staff, software support managers, medical staff, privacy and security experts, and homeless shelter staff and outreach workers. She graduated summa cum laude in Psychology from the University of Colorado Boulder.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness?
My Master’s research focuses on how algorithmic transparency, through model explanations, calibrates trust in AI systems. For example, people often over-trust a system they think is smarter than it is, or they may not trust it enough to truly benefit from it. What information about the system is critical in helping people decide when to trust the system versus when to rely on their own knowledge?

Domains of Application: General Machine Learning (not domain specific), Trust and Transparency, Housing

AFOG Alumni

Amit Elazari Bar On, Director, Global Cybersecurity Policy, Intel Corporation

Sarah M. Brown, Data Science Postdoctoral Research Associate, Brown University

Michelle R. Carney, UX Researcher, Machine Learning + AI, Google

Roel I.J. Dobbe, Postdoctoral Researcher, AI Now Institute, New York University

Jen Gong, Postdoctoral Scholar, Center for Clinical Informatics and Improvement Research (CLIIR), UCSF

Randi Heinrichs, PhD student, Leuphana University Lüneburg in Germany

Abigail Jacobs, Assistant Professor of Information and Complex Systems, School of Information and College of Literature, Science, and the Arts (dual appointment), University of Michigan

Daniel Kluttz, Senior Program Manager, Microsoft

Sam Meyer, Product Manager, StreamSets

Elif Sert, Research Affiliate, Berkman Klein Center for Internet & Society, Harvard University; Researcher, UC Berkeley Department of Sociology

Benjamin Shestakofsky, Assistant Professor, Department of Sociology, University of Pennsylvania