Generative AI (or "GenAI") is becoming a major buzzword, with tools like ChatGPT gaining immense popularity in a short period. While exciting, the rise of these tools has also raised concerns, leading to actions like Italy's data protection authority temporarily blocking ChatGPT and discussions in the US Senate and EU institutions about the need for regulation.
The concentration of data and model ownership in the hands of large tech companies raises concerns about market power and potential anti-competitive practices such as tying, self-preferencing, and refusal to grant data access. Competition law and the Digital Markets Act (DMA) may address these issues, but their applicability to GenAI services needs clarification.
The EU's Copyright Directive establishes a text and data mining (TDM) exception, allowing GenAI tools to use copyrighted works for training unless the copyright holder has explicitly reserved that right. However, this opt-out system places the burden on content creators, and the AI Act's proposed transparency obligations for copyrighted training data need further clarification.
ChatGPT's failure to attribute content to trusted media sources undermines brand awareness and prevents media outlets from building direct audience relationships. The Platform-to-Business Regulation addresses brand attribution, but its applicability to GenAI services is unclear.
While ChatGPT and generative AI offer opportunities for the media sector, they also pose significant challenges related to copyright infringement, brand attribution, content moderation, transparency, and accountability. Existing and proposed EU regulations, including the GDPR, the Digital Services Act, the Digital Markets Act, and the AI Act, may address some of these issues, but further clarification and revision are needed to ensure these rules deliver for the media sector.