Newsletter
AI the Law and You - Episode #6 - Google Fined by the French Competition Authority
AI the LAW & YOU Newsletter

Release Date:

April 26, 2024

Episode Transcription:

 

[00:00:00] Mark Miller: The real issue here that I read in the French decision is that, and I’ll put it in American terms, Google is not negotiating in good faith.

Part of the negotiation is who you negotiate with and who pays whom when things are settled. And I think that’s Google’s case that they’re coming back with, to say, you haven’t defined the rules of the game, or else they keep switching, so we don’t even know who we’re dealing with anymore.

We have a really fascinating topic today, Joel. I’ve been following it for the last week. To get us started, give us from a legal aspect, what’s going on in France with Google?

[00:00:54] Joel MacMull: Yeah, sure. And in particular, of course, Google’s AI model, Bard. The French competition authority last week said the tech giant failed to negotiate fair licensing deals with media outlets and did not tell them it was using their articles to train its chatbot. As a consequence, it fined Google about 270 million US dollars. The fine was in euros, but that’s roughly what we’re dealing with in terms of a conversion rate.

So it’s not nothing, but for one of the largest tech companies in the world, it’s certainly not going to make a material difference to their bottom line. But it outlines, I think, some interesting issues, particularly when we contrast it with what’s going on now in the United States and some of the litigation we’re seeing against OpenAI.

Specifically, I’m thinking about the New York Times case that was filed, I think, in December against OpenAI and Microsoft, because there are some real parallels here. If we assume, and I think it’s a big assumption, the application of the French model that got Google in trouble, that does not put OpenAI and Microsoft in a good position. Again, we’re talking about separate copyright regimes: France, and most of Europe, has much broader copyright protection than the United States does, and considers all kinds of things that are not part of US copyright law.

Specifically, I’m thinking of things like moral rights. Moral rights is not a concept in US copyright law, and I’m probably doing it a disservice by describing it, but I’ll do so anyway. Moral rights basically mean that even when you assign rights in a copyright, the author retains what are known as moral rights.

Again, in the United States, when you have a work for hire, for example, and you completely surrender those rights, the author does not retain any residual rights. So that’s just one example of where there’s a difference between United States and French law.

[00:02:53] Mark Miller: When I was looking at this, I came up with two issues with what’s going on in France. The first, I think, overrides the other. Everybody’s jumping on the AI training one because AI training is kind of a buzz meme right now. But the real issue here that I read in the French decision is that, and I’ll put it in American terms, Google is not negotiating in good faith.

They are not following the regulations that were set up when they were fined 500 million euros in 2021. Part of the negotiation is who you negotiate with and who pays whom when things are settled. And I think that’s Google’s case that they’re coming back with, to say, you haven’t defined the rules of the game, or else they keep switching, so we don’t even know who we’re dealing with anymore.

[00:04:01] Joel MacMull: Yeah, I think that’s right. There are a couple of different things going on. On the one hand, they seem to say that you’re not negotiating these licensing deals in good faith. The other thing I picked up on in reading about this was that Google had at some point made certain promises, presumably negotiating in good faith, to compensate these media companies, and then it didn’t do so. It was using these various French publications as fodder for its language model and presumably wasn’t compensating anyone. But you’re right, in its response to this, Google said a couple of different things.

They said, first of all, the punishment doesn’t fit the crime; they felt that the fine was excessive in light of the allegations. But they also said, you know what, we’re going to accept it, we’re just going to pay it, because at the end of the day we want to be providing our users with good content and this is how we do it. But one of the things Google noted, and I think you just raised this, is that the framework for royalties and that sort of thing was never finalized. So to some extent, and it’s a legitimate argument, Google is saying, we’re being held to a standard that itself was not properly defined.

[00:05:23] Mark Miller: One of the things that Google professes not to understand is which publishers are in scope in this agreement. I mean, they actually use those words. What publishers are in scope according to the new and updated directives? That’s a tough one because how do you, in a legal sense, handle a moving target when it comes to honoring the regulations?

[00:05:52] Joel MacMull: I agree with you. If the goalposts are constantly changing, that makes it very difficult to comply. There is a concept in U.S. jurisprudence, which falls under a constitutional principle, that a law can be arbitrary and capricious. And part of that doctrine includes, I would think, something like this, where a regulator hasn’t clearly defined what the regulation is, but nevertheless has no trouble finding an entity that it then claims is not in compliance.

[00:06:31] Mark Miller: That’s what Google is claiming right now.

[00:06:34] Joel MacMull: Now, again, I’m not a French lawyer, but as I said, I have from time to time in my career litigated those sorts of constitutional issues, claiming that regulations are arbitrary and capricious.

Because what we’re talking about here, of course, are not private regulations but governmentally imposed regulations, which means that they necessarily have to comport with a variety of things, not the least of which is the U.S. Constitution.

[00:06:56] Mark Miller: The thing that Google has going in their favor, too, is that they have negotiated deals with 280 companies in France, which control 450 publications. It’s not like they haven’t been trying to negotiate. Is the French competition authority not accepting that there is negotiation in good faith, or are they saying you’re not negotiating according to the terms that we laid out?

[00:07:28] Joel MacMull: I’m not sure. I’m not sure I read anything that was that detailed, that distinguished between the two conditions you just mentioned.

[00:07:37] Mark Miller: When you look at it, and I’m thinking of France, the size of France and the number of publications, it seems to me that 280 companies controlling 450 publications is significant.

[00:07:52] Joel MacMull: I don’t doubt that. Is this where they said, and I’m just interpolating here, well, you negotiated with us, but you didn’t tell us how you were going to use this? And that’s where the AI training comes in.

I read it a little bit differently. I read it as you promised to negotiate and you didn’t. And you essentially took our content without compensating us.

See, but this is my problem. And maybe my brain isn’t big enough to think about these sorts of global events. But one of the problems, and this would be true of AI and any number of things, is that I worry about the sustainability of a system, and by system I mean the lawful use of AI, that’s going to be predicated on technology companies having independent rules with respect to each nation. It strikes me as being very fractured. In the case of France, as you said, you’re going to have 450 publications that, let’s say, Google is justified in training its model on. Great. Just move next door. What about Spain, or Andorra? It just seems like a mess to me. It almost seems like there may need to be something set up, and I don’t know if this is a reasonable analogy, but it comes to mind, in the way that ICANN was set up for the purposes of regulating domain names internationally.

Now, of course, ICANN really is an instrument of Congress, and I’m not saying that, at the end of the day, in the world of AI, this necessarily has to be administered by the U.S. But I do think at some level there’s going to have to be some uniformity afforded to all of this, because it just strikes me as a mess otherwise. And frankly, I think it’s unfair to tech companies.

If everything they’re going to do at the end of the day is going to have to comply with local, countrywide regulation, where, of course, there can be such variance.

[00:10:11] Mark Miller: It’s interesting because now we’re talking about jurisdiction, and there is no universal jurisdiction on this, or pretty much anything. Even when you talk about something like the Geneva Conventions, it’s supposed to be a universal agreement, but each country implements it the way that it wants to. So the way I’m looking at it for the future is, do we get the Pareto principle? Do we get 80 percent agreement around the major things, and then each country customizes the final 20 percent on what it wants to do?

It’s really necessary at this point, because Google is the whipping boy right now on this one. But it is applicable to all AI companies that are consuming publicly available data.

[00:11:09] Joel MacMull: And that immediately, at least here in the United States, implicates two obvious others.

One is Meta, of course, Facebook and Instagram. And I guess the most obvious one would be OpenAI.

[00:11:20] Mark Miller: Which is Microsoft, basically. It’s a tough one. So as we look at this from a legal aspect, and I know you are just going to give an opinion on this one, but in general, do you know of any laws that are universal across all countries?

[00:11:41] Joel MacMull: Oh, sure. Well, I’m sorry, there are not laws, but there are treaties that harmonize certain laws. I’ll give you an example: the Berne Convention, okay? Or the Paris Convention. The Paris Convention, I think it’s over a hundred years old, basically said, listen, by virtue of being a signatory to that treaty, we’re going to recognize those rights in a foreign jurisdiction.

In other words, depending on which convention we’re talking about, a U.S. copyright holder or a U.S. trademark holder can essentially transport its rights and have those same rights exist in France, for example. France and the United States are both signatories to both of those agreements, the Berne Convention and the Paris Convention. And while the substance of those rights may vary nationally, and I’ll give you an example.

In the United States, for example, to have a trademark right necessarily means you have to have use of the mark. You cannot warehouse trademark rights in this country as I believe you can in other countries, right? In other words, it’s got to come to fruition, it’s got to be in use. But if a U.S. trademark holder wants to enforce its mark in Europe, France in particular, let’s say, it can lean on the treaty for purposes of seeking to enforce that right.

Even though France may have different national rights as it relates to, for example, trademarks. It’s a sort of long-winded way of saying, yes, there are treaties in place that are intended to harmonize laws across borders.

 

[00:13:21] Mark Miller: When we’re looking at the, let’s say, medium term, three to five years, from your experience on what it takes to get regulations right and enforceable, it seems to me that we’re at the very beginning of the process, and it’s almost like a trial run. We’re gonna stick this regulation out here and see what happens and then start molding it from here. Does that sound right to you?

[00:13:51] Joel MacMull: I came across something a couple of weeks ago that talked about a bill being kicked around in Congress, and I can’t speak to what development stage it’s at, but again, confining ourselves to the U.S. model, there is, I think, an appreciation among those that care in Washington that, rather than an ad hoc approach to all of this, there needs to be a real comprehensive evaluation of the issues surrounding AI. I see stuff like this all the time pop into my email box that talks about AI and regulations.

Yes, I agree with you. I think we’re at the infancy of the development of any coherent regulation. I also think that this is super important and really demands a thorough analysis, presumably to be done by Congress in the first instance, that’s going to really address the use of AI. Knowing what we know now, from an ad hoc perspective, we know that AI is really a sharp burr in the side of copyright law in this country. And there are those that think that copyright law has a very real chance of destroying this burgeoning industry.

So we’ll see what happens. But yes, I agree with you, I think we’re at the infancy. But I also think how this is dealt with, and the comprehensiveness of how it’s dealt with, is really going to go a long way in cleaning this up, because right now it’s just a mess. It’s a mess.
