As Part 1 of his contribution to MediaWell’s video essay series on transnational digital governance, law and technology expert Ivar Hartmann (Insper, Brazil) outlines some of the topics dominating the field’s attention – and those that aren’t, but should. Watch or read a transcript of the video essay below.
Introduction
My name is Ivar Hartmann and I’m an associate law professor at Insper in São Paulo, Brazil. I’m a co-founder of the Platform Governance Research Network. My research covers both the regulation of technology – especially online content moderation and challenges to online speech such as disinformation – and legal data science, which is to say technology applied to the production of legal scholarship. I also do a lot of research on judicial behavior.
How would you describe the current state of transnational digital governance? What trends do you see, and what do they imply for the field as a whole?
I see two main trends in transnational digital governance.
The first one is the polarization between the US and China. This has sadly affected other areas, not just digital governance, but its effect on digital governance is that we often overlook assessments of the problems of digital governance offered by other countries, in both the Global North and the Global South. Attempts to define what the problems are get overlooked if they aren’t about the US or China.
But the clash has also overshadowed the innovative regulatory frameworks being created in different countries – and, I would say, especially the ones we see in the Global South. All of this gets lost amidst the discussion of data sovereignty, the race for AI, and the concerns that governments in the US and China have regarding their competition. That is problematic, I would say; especially for people who are in the Global South and are not in China, this is a worrying trend.
The other trend you could spot currently is an excessive focus on generative AI and concerns about AGI. This is certainly a problem that people need to look into – the lack of regulation, risks that are not well understood, that governments and civil society have not even adequately begun to comprehend, much less address. But while it is certainly an important issue, it has taken over the agenda. If you look at academic research publications, conferences, or the op-eds that people from academia are publishing, it seems that everything else has taken a back seat to these discussions on the regulation of AI, especially GenAI and AGI.
Instead, we should be giving enough focus – enough attention, maybe parity of attention – to issues such as content recommendation. That is something that is not just overlooked; it is not even close to being solved in terms of the problematic effects it has on teenagers, for instance, but also on modern societies. Current attempts to regulate content recommendation on social media have fallen short, so we are far from a point where we can say it is either a problem that is solved or one we are very close to solving. In my opinion, it deserves as much attention now as the governance of generative AI has been given.
I would say we are also nowhere near giving enough attention to the data protection of platform users, as well as gig workers – the collection and use of their data, which more often than not is performed by companies in the Global North.
The other important issue that gets overlooked – because of that excessive focus [on AI] – is platform work, and especially how representative it is. Who are the people doing platform work, what are the consequences for communities? What are the consequences for civil society? And again, this is an area where, despite the very large number of people that are affected – not only in the Global South, but in the Global North as well – we are nowhere near reaching some kind of consensus on what the way forward is, in terms of governance. Not only in terms of protecting users, but also protecting gig workers from all sorts of threats.
So this, in terms of the geopolitics of transnational digital governance, is maybe the issue that should be priority number one. That might be a controversial take; most people would feel it should be AI or GenAI. I would say we should consider people’s lives first. And while AI already has an impact on a lot of people and their data, I don’t think it has so far had anywhere near the effect of the collection of personal data from these users and workers without any regulation – or scarce regulation, at best, in the Global South. Those low standards have been taken advantage of by companies – for training models, for instance. So while this obviously connects to AI, I think it is a problem in itself that is overlooked.
Lastly, another trend in transnational digital governance is that we don’t disseminate nearly as much as we should about the assessments being made in smaller countries and smaller economies: what the problem actually is with the impact or use of a certain technology, how they are assessing it, how they are digging into what the problem is and what its effects are. There is a lot of evidence out there that we don’t pay enough attention to. But we are also failing to disseminate the evidence that can be collected – that is being collected – from concrete attempts to regulate.
One obvious example, I think, is the ban on the use of social media by teenagers and children in Australia. Obviously, that has made the rounds in the news, but I would be very surprised if, a year or two from now, after it has gone into force, we see as much research on it – as many academic publications, or even just op-eds or news stories – on all of its effects, or on the actual data.
So while there has indeed been widely disseminated news about it, it’s one of those instances where it’s just one pop: the news goes out that the country did this, and then we never hear about it again. And we miss the opportunity to better understand and take lessons from these, like I said, outside-the-box attempts to regulate Big Tech.

