Expert Reflection

Two Questions on Fair Use: Interview with Nabiha Syed

February 25, 2022

For this year’s Fair Use/Fair Dealing Week, MediaWell is partnering with the Association of Research Libraries to interview experts reflecting on how fair use supports research, journalism, and truth. This is the fourth and final installment of MediaWell’s four-part series, entitled “Two Questions on Fair Use.” In this interview, we ask Nabiha Syed, president of The Markup, about how the problem of misinformation requires discretion from people looking to exercise fair use, and how fair use supports people who share information independently. The transcript has been lightly edited for clarity.

Can fair use help combat misinformation on social media?

One of the things I love about fair use is that it creates this carve-out from a copyright infrastructure that otherwise could pose a lot of downstream restrictions for a creator or a commentator or a person engaging in a critique. That’s the reason why Justice Ginsburg said that fair use doctrine was a built-in free speech safeguard. We want to engage with cultural conversation, with topics of great public interest, and we want to be able to point to them and say, “I’m not sure I agree with that,” or “there’s a detail that’s missing here,” or “let me tell you about how my research really supports this in a way that might be counterintuitive.”

What’s tricky about misinformation is that fair use permits you to say, “Let me take snippets of the misinformation and rebut it, or debunk it, or fact-check it.” But there’s an open question about whether the psychological research actually supports that as a good thing to do.

Fair use clears the way for us to be able to say, “Okay, here is a rumor that’s spreading about how JFK Jr. is going to come back and turn Donald Trump into the president,” which I think was circulating a couple months ago. We can take that snippet because fair use allows us to, and have that clip or take that quote and comment on it, which is all well and good and should happen. But in a journalistic environment, do we want to be recirculating it? Do we want to be amplifying it? These are really interesting questions. I’m not sure it’s the right tactic, but I’m glad that fair use at least gives us the chance to engage.

For researchers who are working on a different time horizon than journalists, the ability to really dig in, to create a historical record of this kind of misinformation, to document how it flows, that wouldn’t be possible without fair use. For that use case, I think it’s tremendously necessary to document this strange time that we’re in, and the many flavors of misinformation. But for the journalistic context that I operate in, I worry a little bit about “just because you can doesn’t mean you should.”

Can you give me an example of a situation where reporting on misinformation has amplified it?

A couple years back, I wrote a paper for The Yale Law Journal about misinformation, and the example I picked up on was the Seth Rich conspiracy. This was a conspiracy that had been percolating in small blogs that Hillary Clinton had something to do with the murder of a DNC staffer, and that there was a massive coverup. This was really a fringe phenomenon. Then we had the WikiLeaks main website and Julian Assange pick it up and re-amplify it. Then we saw local news refer to it as, “this is a thing that’s being said,” which of course is the heart of fair use: you want to be able to say, “that’s a thing that’s being said. Is it true? Is it not true? Let’s discuss.”

Then you saw bigger blogs picking it up, and then Breitbart picked it up, and then it was on Fox News, and all of a sudden it goes from being a fringe comment on a blog somewhere to something that’s in the discourse in quite a public way, and you watch that happen within a 36-hour period. Seth Rich’s family ended up filing a defamation lawsuit, and it went on to be quite ugly.

This has something to do with the networked nature of the internet and how seamlessly things move. Your ability to comment on something carries with it the same ability to amplify it, to pluck it from obscurity and bring it into the open in a way that mainstreams a conspiracy theory. This example is not the worst or the only one, but was a pretty terrible one for Seth Rich’s family, enough that they sought recourse over it.

It’s like the dark side of fair use, in a way.

Yes. We think a lot about fair use and its ability to bring things into the light, which is fantastic. But when speech isn’t scarce and we’re on systems that are totally networked, it’s worth asking, “just because we have the right to do it, should we?” Of course fair use gives us this right, but the discretion is ours about what format that takes and what media we pick. Writing about something in a research article, or maybe a long-form journalistic piece where you’re really debunking why this wasn’t possible, is very different than a retweet or a two-minute segment on local news, so the medium matters.

Bearing this in mind—and I understand there’s no silver bullet, and I certainly wouldn’t expect you to have all the answers—but what sort of remedy or recourse do you think would help to alleviate this situation?

Oh, what a big question. I put a lot of stock in solutions that play around with friction. When we’re talking about amplification, what we’re really talking about is frictionless sharing on platforms that have a financial incentive to allow things to move without friction, so they can monetize it. I think friction offers a really interesting playground to consider solutions. You already see platforms playing with that. When Twitter asks, “Are you sure you want to retweet this, or do you want to read it first?” that little notice is a useful introduction of friction into the process.

Now, I don’t have the data to show what the effect of that looks like at scale yet, because it’s a relatively new intervention. But I do think that’s a fascinating terrain, because the problem isn’t that we’re talking about something, right? The problem isn’t what fair use allows. The problem is the speed with which it happens, the lack of context that the format permits, the lack of depth that we have. There’s also a social predilection for these kinds of conspiracy theories, and that sociological question I don’t have answers for. Wish I did, wish I did.

Yeah, that would be useful for everybody if you did, but it’s a hard one.

I know, I’ll sleep on it and get back to the world.

How does fair use support people who share information independently?

Without fair use, imagining what the blogosphere would look like is a little terrifying, because it would be screens unconnected to one another. Fair use allows people with a variety of different skill sets to comment on something, which is the rich part of the internet.

It also allows a sort of escape valve from the mainstream news organizations that have traditionally capitalized on the attention economy, who have monopolized the ability to say, “we are the tastemakers of what you should know.” Allowing a lane for bloggers to participate, to say, “Actually, I don’t know if I agree with that,” or “Here’s the thing you haven’t seen,” is really important, and fair use allows that to happen. Without it, intellectual property law could be abused to shut that down.

One of the cases I really like is back in 2003, there were a variety of documents floating around that showed that the voting machine company Diebold was indeed aware of flaws in their electronic voting machines. Student activists decided to take those documents and post them right up on the web to say, “This company knew that there were vulnerabilities in these voting machines.” Fascinatingly, the company immediately invoked the Digital Millennium Copyright Act saying, “Nope, you got to take that down. We’re going to go straight to the ISP to have it taken down. We’re going to go after the nonprofit ISP that’s hosting this page.”

The Electronic Frontier Foundation actually ended up taking the case on behalf of these student activists, and the court went ahead and found not only that Diebold was in the wrong, but that they were actually abusing the DMCA. They were abusing the copyright regime knowing that there was no infringement there. It’s because we have a transformative use test that the court was able to find that: the purpose the documents were used for transformed them into social commentary and necessary public discourse, and that’s why we were going to protect it.

Having that escape lever really allows us to escape corporate clutches, because without it everyone would just be publishing press releases all day. This allows you to critique. It allows you to challenge power. It allows you to say, “Hey, you knew this then, and should have done something about it.” That form of oversight I think is invaluable.

If you think about it, traditionally, you may have powerful actors that would not pick a fight with a newspaper. They’re not going to file a copyright lawsuit against The New York Times, because it’s going to be ugly, it’s going to draw more attention to the fact, and it would end up causing a Streisand effect. But there’s no such limiting principle when it comes to a student activist or bloggers or other people. You might imagine a powerful entity saying, “You know what? I can crush them; I can bully them into taking this down.” Having an enumerated doctrine, something like the transformative use test, is so valuable to be able to point to, because otherwise I think there’s a lot that wouldn’t see the light of day.

One thing I was thinking of when you were talking about that is the case of Peter Thiel and Gawker. Was that fair use?

No. That one was a privacy case. I was at the firm that was defending Gawker in that case, so I got to live that and had a good front-row seat. That was not a copyright case at its heart; it was a privacy case. I would argue that it was honestly a privacy case being brought as a vehicle for a larger defamation sentiment, right? It was “you’re harming my reputation,” but because it was true, it would’ve been hard to carry a defamation claim, so they brought a privacy claim in Florida instead, and obviously they were successful.

The only other piece that I thought could be interesting was the Google vs. Oracle case about reusing software and reusing code.

Yes. Aren’t there cases where a platform might say that data on its platform is intellectual property in the same way that code is?

Yes. You have people who use copyright fairly flagrantly. You have people who will say, “Look, if you are scraping something from our website and you’re lifting all of this copy, there’s a copyright issue here. You are making an unauthorized copy of what’s happening on our website and you can’t just do that.”

What’s interesting about the Google vs. Oracle case, in terms of the facts: back in 2005, Google acquired Android, and in building the platform, their programmers copied between 11,000 and 12,000 lines of code pertaining to the Java API. Oracle sued Google saying, “You can’t just copy the API. You can’t do that. You’re violating our copyright.” Google’s position was, “It’s fair use.” People do this in programming all the time; people are constantly borrowing code everywhere else.
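To make the distinction concrete, here is a small hypothetical Java sketch (the class and method here are invented for illustration, not actual Java SE source). The copied lines were “declaring code,” the package, class, and method signatures that working Java programmers already knew how to call, while Google wrote its own “implementing code” behind those declarations.

```java
// Hypothetical illustration of the distinction at issue in Google vs. Oracle.
public class DeclaringVsImplementing {

    // Declaring code: the familiar signature. Reusing a declaration like this
    // lets existing programmers keep calling max(a, b) exactly as before.
    static int max(int a, int b) {
        // Implementing code: the logic behind the declaration, which Google
        // wrote independently rather than copying.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(max(3, 7)); // prints 7
    }
}
```

The case turned in part on this split: the declarations were copied so that programmers’ existing knowledge would carry over to Android, while the implementations underneath were new.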

This went all the way up to the Supreme Court, which is fairly notable for copyright cases. This was not resolved quickly. I guess you can imagine two juggernauts like Google and Oracle are going to want to fight it out. The court ran a straightforward fair use analysis and they concluded that the purpose of the work is transformative. Google is using the API to build the Android platform. They’re necessarily using it in a way that’s pretty different from Oracle because Oracle wasn’t building Android.

Notably, the amount of code that they used was 3% of the overall code in the Java API. They’re like, “Oh, yeah, they took some, they took about 11,000 lines, but that’s not all of it. It was a small amount.” The last piece that they analyzed was that the Android system is not a substitute for Java, right? This is a different thing; we are transforming this into something else. You have the court saying, “this is well within the provisions of fair use when you just do the straightforward analysis.” I think that was honestly really helpful, in a case where there was just a lot of noise being thrown back and forth, for the court to step in and say, “We have a really clear four-factor test. We’re going to apply it methodically, and here’s where we come out.”

I think that provides a lot of guidance for folks that are otherwise on the leading edge of their research or their data collection to feel like it’s not going to be totally arbitrary. There’s at least some infrastructure for how you think about it, and it is in fact the four fair use factors. While you still have to rely on a judge’s discretion and make sure they understand and apply those factors in a way that you want, knowing there are four things that people are considering provides a lot more certainty than you usually get with legal issues.
