Summary of Lakera, which protects enterprises from LLM vulnerabilities, raises $20M | TechCrunch

  • techcrunch.com
  • Article
  • Summarized Content

    Lakera Secures Generative AI Applications with $20 Million

    Lakera, a Swiss startup, has secured $20 million in a Series A funding round led by Atomico. Lakera is focused on building technology to protect generative AI applications from malicious prompts and other threats. The company's innovative approach aims to address the rising concern of data privacy and security within the burgeoning generative AI landscape.

    The Challenges of Generative AI Security

    Generative AI, driven by powerful large language models (LLMs), has become a popular tool for various applications. However, the use of these models within enterprise settings raises concerns about data privacy and security.

    • LLMs are trained on vast datasets, potentially including sensitive information.
    • Malicious actors can exploit prompts to manipulate LLMs, forcing them to divulge confidential data or grant unauthorized access.
    • These “prompt injections” pose a growing security risk to organizations.

    Lakera's Solution: AI-First Security for Generative AI

    Lakera aims to address these security challenges by building a "low-latency AI application firewall" that safeguards generative AI applications. Lakera Guard, their core product, acts as a barrier between users and the AI models, analyzing prompts and outputs to detect and prevent malicious activity.

    How Lakera Guard Works

    Lakera Guard utilizes a comprehensive database built from diverse sources, including:

    • Publicly available datasets hosted on Hugging Face.
    • In-house machine learning research.
    • An interactive game called Gandalf, designed to test and learn about prompt injection techniques.

    By analyzing vast amounts of data, Lakera trains its AI models to recognize and block malicious prompts in real time, continuously adapting to emerging threats.

    Lakera's Content Moderation Capabilities

    Beyond prompt defense, Lakera also offers content moderation capabilities. Specialized models within Lakera Guard detect and filter toxic content such as hate speech, sexual content, violence, and profanities. Companies can customize their content moderation policies using a centralized dashboard, ensuring compliance with their specific needs.

    Lakera's Growth and Expansion

    Lakera's latest funding will fuel its global expansion, particularly in the US market. The company has already secured a number of high-profile customers in North America, demonstrating the growing demand for secure generative AI solutions.

    Lakera's Vision for a Secure Generative AI Future

    Lakera's mission is to empower organizations to leverage the power of generative AI while ensuring data privacy and security. By providing an AI-first approach to security, Lakera helps companies confidently adopt generative AI for their business processes, staying competitive in a rapidly evolving technological landscape.

    Key Takeaways

    • Generative AI is a powerful technology with significant security challenges.
    • Lakera's AI application firewall helps to protect against malicious prompts and content moderation threats.
    • The company's innovative approach leverages advanced AI models and a unique interactive game to identify and mitigate risks.
    • With its recent funding, Lakera is positioned to become a key player in the growing AI security market.

    Ask anything...

    Sign Up Free to ask questions about anything you want to learn.