Summary of Google's GenAI facing privacy risk assessment scrutiny in Europe | TechCrunch


    Google's Generative AI Training Practices Under Scrutiny

    Ireland's Data Protection Commission (DPC) is investigating whether Google's training of PaLM 2, the foundational large language model (LLM) underpinning its Gemini (formerly Bard) family of generative AI tools, complies with the General Data Protection Regulation (GDPR).

    • The DPC is investigating whether Google conducted a data protection impact assessment (DPIA) before training its LLMs.
    • The investigation focuses on the potential risks that Google's AI technologies might pose to the rights and freedoms of individuals whose data was used in training.

    GDPR Concerns about Generative AI

    The use of personal data for training generative AI models raises significant privacy concerns. These concerns stem from the tendency of such models to generate plausible-sounding falsehoods and from the possibility that they reveal personal information on demand.

    • The DPC has the authority to impose fines of up to 4% of the global annual revenue of Alphabet, Google's parent company, for GDPR violations.
    • The training of generative AI models often involves vast amounts of data, raising questions about the legality of the data acquisition methods and the compliance with copyright and privacy laws.
    • The EU's data protection rules apply to personal information of EU residents used in AI training, regardless of whether it was scraped from the internet or directly acquired from users.

    Previous GDPR Enforcement Actions

    The DPC's investigation of Google is not the first regulatory action against tech companies for their AI training practices. Several companies have faced scrutiny and enforcement actions related to GDPR compliance.

    • OpenAI, the maker of ChatGPT, faced GDPR enforcement actions related to privacy compliance.
    • Meta, the developer of the Llama AI model, paused its plans to train AI using data from European users due to regulatory pressure.
    • Elon Musk's X (formerly Twitter) faced GDPR complaints and a legal battle with the DPC regarding the use of user data for training its Grok AI model.
    • X ultimately agreed to limit its data processing, avoiding immediate sanctions, but could still face penalties if the DPC determines that violations occurred.

    DPC's Focus on DPIA

    The DPC's investigation emphasizes the importance of data protection impact assessments (DPIAs) for companies using personal data for AI development.

    • The DPC emphasizes that DPIAs are essential for ensuring that individuals' fundamental rights and freedoms are adequately considered and protected during the processing of personal data that could pose a high risk.

    EU's Approach to GenAI Regulation

    The DPC's investigation is part of a broader effort by EU regulators to regulate the use of personal data in the development of AI models and systems.

    • The EU's GDPR enforcers are working to reach a consensus on how to best apply the privacy law to generative AI tools.

    Google's Response

    Google acknowledged the investigation, said it takes its GDPR obligations seriously, and stated that it will cooperate with the DPC.

    • Google has not disclosed the sources of data used in training its generative AI models.
