Social Science Research Council Research AMP Just Tech

Regulating Platforms & Speech in an Age of Fake News | Ravi K. Mehrotra Institute for Business, Markets & Society

Boston University Boston, MA, United States

Misinformation has been a problem throughout human history, but it is particularly challenging to control on today’s social media platforms. Under Section 230 of the US Communications Decency Act, internet companies are not held liable for the content shared on their platforms. Other countries have taken different approaches to regulation. With a shared goal to maintain […]

Report launch: Offensive speech and hate speech targeted at Congressional Candidates in the 2024 Election | Center for Democracy & Technology

Virtual

Candidates who are women of color continue to be underrepresented in Congress while also facing significant challenges in running for office. Among these challenges is the offensive and hate speech they are subject to on social media platforms. In this research briefing, we will share findings from a new study conducted by the Online Violence […]

Can We Rebuild Trust in the Information Environment? | NYU’s Center for Social Media and Politics and the Center for News, Technology & Innovation

New York University New York, NY, United States

For decades, America has seen a steady decline in trust in institutions: in government, science, media, and even the very idea of democracy. The problem stems from several sources, including rising political polarization and declining civic participation. But many place the blame on our increasingly siloed and partisan information environment, which is exacerbated […]

The Campaign to Curb Misinformation Research | Boston University

Virtual

The alarming rise of mis- and disinformation influencing recent elections spurred a flurry of new research to understand the trend. But collaboration among academics, nonprofits, and the technology sector made some activists suspicious that a conspiracy was developing to muzzle conservative and right-wing ideas, leading to a campaign to curb the research itself. In these polarized […]

Has the AI Election Threat Materialized? | NYU

Virtual

Since ChatGPT first launched nearly two years ago, many have claimed that the rise of AI would pose a significant threat to elections. Reports warned that a surge of AI-generated disinformation could undermine democracy. Intelligence officials worried that foreign actors would use AI to disrupt the electoral process. Americans agreed: more than half said AI could affect who wins in November.

Threats to Democracies: A Transatlantic Workshop on Media and the 2024 Elections

Virtual

Join us for “Media Coverage of the 2024 U.S. Presidential Election: The View from Germany and the United States,” a Fireside Chat hosted by the UNC Center for Information, Technology, and Public Life, in partnership with UNC Global Affairs, Thomas Mann House Los Angeles, and the UNC Center for European Studies, and co-sponsored by the UNC School of Information and Library […]

The Future of Third-Party AI Evaluation | CRFM & CITP

Virtual

General-purpose AI systems are now deployed to billions of users, but they pose risks related to bias, fraud, privacy, copyright, CBRN (chemical, biological, radiological, and nuclear) threats, NCII (non-consensual intimate imagery), and more. Assessing these risks requires independent and community-driven evaluations, audits and red teaming, and responsible disclosure. Our workshop on the future of third-party AI evaluation dives into these topics with experts on: third-party evaluations, […]

Session 13: Robyn Caplan: Studying Networked Platform Governance: A Multi-Perspective Approach | PGMT

This talk will be a reflection on Robyn Caplan’s multi-perspective approach to studying platforms, highlighting the importance of triangulation when studying networked actors. It will give an overview of Caplan’s research, touching on platform personnel, media associations, online creators, and civil society actors, and will explore the theoretical […]