On the one hand, we are dealing with innovative start-ups in the AI industry, which the EU AI Act clearly wants to protect – especially if they come from Europe. On the other hand, it seems that only market giants such as Meta, Google or Microsoft can seriously exploit free training, because they have a chance to dominate the market. Why such a disproportion when Brussels is simultaneously trying to rein in the giants with the DMA and DSA?

Dornis: It is indeed an almost paradoxical situation. The EU is fundamentally committed to taking the lead on digital issues. The purpose of protecting individual rights and minimum standards is to create spaces and, above all, markets in which competition operates fairly and with respect for all interests involved. With consistent regulations in this area, one can hope that the next step will even be to influence other legal systems, in particular the USA.

Put simply, if global players adhere to EU standards because the EU is an attractive market, this also raises the level of protection in other countries and may even influence legislation there. While this is euphorically lauded as the "Brussels effect" in almost all areas – including digital markets – authors do not appear to be among the groups deemed worthy of protection. At the moment, it is their rights that fall by the wayside in AI training.

The creator of ChatGPT, OpenAI, will say that on the road to artificial general intelligence it already needs to raise huge sums of money – Sam Altman is currently raising billions. Is there any money left over for creators?

Stober: I think that’s an odd way of framing it. The level of compensation for creators whose data was used to train these commercial generative AI models should not depend on how much money the company raises. I think it makes more sense to base it on turnover. But I’m really not an expert on this; my focus here was on explaining the technical details.

Purely AI-generated content has a completely different problem: it can’t actually be copyrighted. Are we entering a golden era of the public domain created by media corporations?

Stober: I wouldn’t expect for-profit media groups to make the results of their generative AI models freely available. I can’t say anything about legal issues here.

Dornis: Even lawyers can say little about the value of AI-generated content, and certainly nothing about whether we think self-replicating AI creativity is desirable. However, human creativity should always be taken as a starting point. Even the most developed artificial intelligence cannot create anything from nothing. Therefore, it will never work completely without incentives for human creators. So much for the public domain dream.

In the future, could creators become mere assistants to artificial intelligence in order to retain protection for generative content after all? The profession of prompt engineering is currently in vogue, but generative artificial intelligence itself could quickly replace it.

Stober: I actually don’t think that prompt engineering is a job with good future prospects. And I find the image of the “AI assistant” somewhat troubling.

Artificial intelligence is nothing more than a tool, albeit a very powerful one. It’s up to people to decide how to use it. In that scenario, creators would end up as hired hands of media corporations, producing content on an assembly line.

What could a form of fair pricing for AI training look like? Buyout agreements are probably rather unfavorable for creators.

Dornis: As a lawyer, I’m not sure what a “fair” price is. The law must, however, ensure functioning compensation mechanisms and processes – and the current law cannot do this effectively. Moreover, the current debate is quite vague, especially when it is claimed that only low royalties could be achieved if creative works were licensed for AI training. That is simply wrong: where there is no market because everything is taken for free, as is the case at the moment, no market prices can develop. The prices observed today are in any case far lower than what a functioning market for training-data licenses would generate. Under no circumstances should answers to questions about future compensation start from the status quo.

Stober: In addition, a societal debate on the overall value of data certainly wouldn’t be a bad idea. The fact that data is currently collected on a massive scale by AI companies with a certain self-service mentality and monetized in the form of products and services built on it is perhaps not in the interest of society as a whole. This applies not only to creators, but to all Internet users.

How do you think the situation will develop now?

Dornis: Extensive legal proceedings are already underway in the US and UK. The same will happen in Germany and elsewhere on the European continent. We will then see how quickly national courts refer the relevant questions to the European Court of Justice. That will take some time, though. So it will remain exciting.

Stober: This topic will certainly occupy us for a long time. I hope that with our research we will contribute to an objective debate.

(aw)