Publications

2020 Publications

Geiger, R. Stuart, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, and Jenny Huang. 2020. Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’20). [PDF]

2019 Publications

Andrus, McKane and Thomas Krendl Gilbert. 2019. Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program for Machine Learning. In Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. Honolulu, HI. [PDF]

Burrell, Jenna, Zoe Kahn, Anne Jonas, and Daniel Griffin. 2019. When Users Control the Algorithms: Values Expressed in Practices on Twitter. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) Conference. Austin, TX. [PDF]

Jonas, Anne and Jenna Burrell. 2019. Friction, Snake Oil, and Weird Countries: Cybersecurity Systems Could Deepen Global Inequality through Regional Blocking. Big Data & Society 6(1), January 2019. [PDF]

Kluttz, Daniel and Deirdre K. Mulligan. 2019. Automated Decision Support Technologies and the Legal Profession. Berkeley Technology Law Journal. 

Mulligan, Deirdre K., Joshua A. Kroll, Nitin Kohli, and Richmond Y. Wong. 2019. This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) Conference. Austin, TX. [PDF]

Mulligan, Deirdre K., Daniel Kluttz, and Nitin Kohli. 2019. Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. Draft available at SSRN: http://dx.doi.org/10.2139/ssrn.3311894

Wu, Eva Yiwei, Emily Pedersen, and Niloufar Salehi. 2019. Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas. In Proceedings of the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) Conference. Austin, TX. [PDF]

2018 Publications

Dobbe, Roel, Sarah Dean, Thomas Gilbert, and Nitin Kohli. 2018. A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. Presented at the 2018 Workshop on Fairness, Accountability and Transparency in Machine Learning during ICML 2018. Stockholm, Sweden.

Mulligan, Deirdre K. and Daniel S. Griffin. 2018. Rescripting Search to Respect the Right to Truth. 2 GEO. L. TECH. REV. 557 (2018). [PDF]

Projects

2020 Projects co-sponsored with the Center for Technology, Society and Policy (CTSP)

Algorithmic Fairness in Mediated Markets

Fellows: Andrew Chong and Emma Lurie

Online marketplaces, where firms like Uber and Amazon control the terms of economic interaction, exert an increasing influence on economic life. Algorithms on these platforms are drawing greater scrutiny, whether in how different price and quality characteristics are determined for different users, in the end outcomes algorithms optimize for, or ultimately in how the surplus created by these networks is allocated between buyers, sellers, and the platform. This project undertakes a systematic survey of perceptions of fairness among riders and drivers in ride-sharing marketplaces. We seek to carefully catalogue different notions of fairness among different groups, examining where they might cohere and where they might be in tension. We explore the obligations platform firms might have as custodians of market information and arbiters of market choice and structure, in order to contribute to the developing public debate on what a “just” algorithmic regime might look like for online marketplaces.

An alternate lexicon for AI 

Fellows: Noura Howell and Noopur Raval

This project joins the “second wave” of AI scholars in examining structural questions around what constitutes the field of social concerns within current AI and Social Impact research. Under this project, we will map the ethical and social landscape of current AI research and its limits by conducting a critical and comparative content analysis of how social and ethical concerns have been represented over time at leading AI/ML conferences. Based on our findings, we will also develop a draft syllabus on ‘Global and Critical AI’ and will convene a one-day workshop to build a vocabulary for such AI thinking and writing. With this project we aim to join the growing community at UC Berkeley and beyond in 1) identifying the dominant techno-imaginaries of AI and Social Impact research, and 2) critically and tactically expanding that field to bring diverse experiential, social, cultural, and political realities beyond Silicon Valley to bear upon AI thinking. Morgan Ames is also collaborating on this project.

Environmental conservation in the age of algorithms: from data to decisions 

Fellows: Millie Chapman and Caleb Scoville 

While human impacts on the rest of nature accelerate, our techniques for observing those impacts are rapidly outstripping our ability to react to them. Artificial Intelligence (AI) techniques are quickly being adopted in the environmental sphere, not only to inform decisions by providing more useful datasets but also to facilitate more robust decisions about complex natural resource and conservation problems. The advent of decision-making algorithms urgently raises the question: Whose values are shaping AI decision-making systems in natural resource management? In the shadow of this problem, our project seeks to understand the expansion of privately developed but publicly available environmental data and algorithms through a critical study of algorithmic governance. It aims to facilitate an analysis of how governments and nongovernmental entities deploy techniques of algorithmic conservation to aid in collective judgments about our complex and troubled relation to our natural environments. Carl Boettiger is also a collaborator on the project.

State-Firm Coproduction of China’s Social Credit System

Fellow: Shazeda Ahmed

This qualitative dissertation project investigates how the Chinese government and domestic technology companies are collaboratively constructing the country’s social credit system. Through interviews with government officials, tech industry representatives, lawyers, and academics, I argue that China’s government and tech firms rely on and influence one another in their efforts to engineer social trust through incentives of punishment and reward.

2019 Projects co-sponsored with the Center for Technology, Society and Policy (CTSP)

Affect & Facial Recognition in Hiring

Fellows: Sofia Gutierrez-Dewar, Mehtab Khan, and Joyce Lee

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human emotion. Powered by artificial intelligence, emerging applications of affect recognition in the workplace raise pressing ethical and regulatory questions: what happens when an automated understanding of human affect enters the real world, in the form of systems that have life-altering consequences? This is particularly pertinent in the realm of workplace surveillance, where there are no clear answers about how to address privacy, bias, and discrimination problems. As the underlying technologies are generally proprietary and therefore opaque, their impact can only be assessed with a deeper look into how they are designed and implemented. In collaboration with Coworker.org, a nonprofit that helps people organize for improvements in their jobs and workplaces, we aim to evaluate applications of affect recognition and the potential risks and implications of these technologies.

Algorithmic Intermediation and Workplace Surveillance – Emerging Threats to the Democracy of Work 

Fellows: Eric Harris Bernstein, Julia Hubbell, Nandita Sampath, and Matthew Spring

Advanced analytical software is changing the dynamics between workers and their employers, exacerbating the existing power asymmetry. Combined with AI, the outputs of technologies like facial recognition, email monitoring, and audio recording can be analyzed to infer workers’ emotions and behavior, and to determine facets of worker productivity or whether an employee is, for example, “threatening.” This technology often reinforces racial and gender bias, and little is known about how the results of these analyses affect managerial decisions like promotions and terminations. Not only does this surveillance represent a significant loss of privacy for employees, but it may also have a negative impact on their stress levels or ability to perform in the workplace. Our project will investigate the different workplace surveillance technologies on the market and their effects on workers, and then propose potential policy responses to these issues.

Coordinated Entry System Research and Development for a Continuum of Care in Northern California (co-sponsored with CLTC)

Fellows: Zoe Kahn, Amy Turner, Michell Chen, Mahmoud Hamsho, and Yuval Barash

Governments are increasingly using technology to allocate scarce social service resources, like housing services. In collaboration with a Continuum of Care in Northern California, this project will use qualitative research methods (i.e., interviews, participatory design, and usability testing) to conduct a needs assessment and develop system recommendations around “matching” unhoused people to appropriate services. Our goal is to identify matching systems (or design requirements) that suit the needs of diverse housing service providers across the county without compromising the needs and personal information of vulnerable populations. In addition to efficiency, we will consider how systems handle values such as privacy, security, autonomy, dignity, safety, and resiliency.

2018 Projects co-sponsored with the Center for Technology, Society and Policy (CTSP)

Building tech capacity: Investigating tech philanthropy training and education programs for the “skilling up” of youth

Fellows: Angela Okune, Leah Horgan, and Anne Jonas

From the Bill and Melinda Gates Foundation to the Chan Zuckerberg Initiative (CZI), tech billionaires have undertaken development projects that address poverty, disease, education, global climate change, gender inequality, and other urgent social issues. This project seeks to understand how development is framed as a global “skills problem” through the lens of Silicon Valley logics and characterized as a problem of moral and humanitarian concern in need of technological intervention. This interdisciplinary, collaborative team proposes to understand how implicit, explicit, and sometimes contested desires for “scale,” “standardization,” and “sustainability” inform programming, funding, and evaluation in and of technologically oriented foundations and firms. The project will leverage ethnographic insights derived from participant observation at relevant events in the Bay Area and Los Angeles, in-depth interviews with key stakeholders working on technology and education/training, and textual analysis of artifacts and materials including training manuals, academic rubrics, blog posts, and reports.

MLUX SF: Designing and Using Data Science Ethically

Fellow: Michelle Carney

MLUX (“Machine Learning and User Experience”) is a professional meetup group focused on building a community around the emerging field of human-centered machine learning, meeting in San Francisco for monthly tech talks. We are professional UX Designers and Researchers, Data Scientists, PMs, Developers, and everyone in between, and we aim to organize a community that helps foster cooperation, creativity, and learning across the UX and Data Science disciplines. One of the key areas of interest in this field is understanding how to design and use data science effectively and ethically. By partnering with CTSP and AFOG, we are excited to host an event centered on “Designing and Using Data Science Ethically,” where tech professionals can share best practices and lessons learned from the field. MLUX Website: https://www.mluxsf.com/

Unpacking the Black Box of Machine Learning Processes

Fellows: Morgan Ames and Roel Dobbe

Machine learning has undergone a renaissance of methods in the last six years, and is being quietly introduced into nearly every aspect of our daily lives. In many instances, though, this is handled by private companies deploying proprietary software with little oversight. This results in a widespread impression that machine learning is a ‘black box’ with little hope for supervision or regulation.

With this project, we aim to join a growing community of researchers focused on unpacking this black box. First, we seek to map the disconnect between public conceptions and the actual processes of machine learning to illuminate how contemporary machine learning is done. Second, we seek to intervene in the process of defining and tuning machine learning models themselves, using the framework of value-sensitive design as a point of departure, to understand the values-related challenges in the design of machine learning systems.