Magic is an AI company aiming to transform software development with ultra-long context models. Its LTM (Long-Term Memory) models can process and reason over up to 100 million tokens of context during inference, a significant leap beyond the short context windows of traditional models.
Current evaluation methods for long-context models have limitations. They often rely on "semantic hints" that let a model spot the relevant information without genuinely using its full context. This can produce misleading results, in which traditional RNNs and SSMs appear to perform well despite the limited capacity of their fixed-size recurrent state.
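To make the failure mode concrete, Magic's alternative benchmark, HashHop, replaces meaningful text with random hashes, which are incompressible and offer no semantic hints. The sketch below shows the general shape of such an eval; the function name, prompt format, and parameters are illustrative assumptions, not Magic's actual implementation.

```python
import random
import secrets

def make_hashhop_example(n_pairs: int = 1000, n_hops: int = 3):
    """Build one HashHop-style recall example. Random hex strings are
    incompressible and semantically meaningless, so a model can only
    answer by actually storing and retrieving the pairs; there are no
    semantic hints to pattern-match on."""
    # A chain of linked hashes: h0 -> h1 -> ... -> h_{n_hops}
    chain = [secrets.token_hex(8) for _ in range(n_hops + 1)]
    pairs = list(zip(chain, chain[1:]))
    # Pad with unrelated distractor pairs, then shuffle so that
    # position in the prompt carries no signal either.
    pairs += [(secrets.token_hex(8), secrets.token_hex(8))
              for _ in range(n_pairs - n_hops)]
    random.shuffle(pairs)
    context = "\n".join(f"{a} = {b}" for a, b in pairs)
    question = f"Starting from {chain[0]}, follow the chain for {n_hops} hops."
    return context, question, chain[-1]  # expected answer: end of the chain
```

Because every hash is random, a model with a small recurrent state cannot compress the context and guess its way to the answer; solving multi-hop chains requires real storage and retrieval across the full context.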
Magic has trained its first 100-million-token context model, LTM-2-mini, a significant milestone in the development of ultra-long context models. Magic reports that the model's sequence-dimension algorithm is roughly 1,000 times cheaper than the attention mechanism in Llama 3.1 405B at a 100-million-token context window.
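For a sense of scale, a standard transformer KV cache at 100 million tokens is infeasible on its own. The back-of-the-envelope calculation below, assuming Llama 3.1 405B's published configuration (126 layers, 8 grouped-query KV heads, head dimension 128, bf16 activations), shows why; the numbers are illustrative estimates, not figures from Magic.

```python
def kv_cache_bytes(n_tokens: int,
                   n_layers: int = 126,        # Llama 3.1 405B layer count
                   n_kv_heads: int = 8,        # grouped-query KV heads
                   head_dim: int = 128,
                   bytes_per_value: int = 2):  # bf16
    """Bytes needed by a standard transformer KV cache: one key and
    one value vector per layer, per token."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value
    return n_tokens * per_token

total = kv_cache_bytes(100_000_000)
print(f"KV cache for 100M tokens: {total / 1e12:.1f} TB")         # ~51.6 TB
print(f"80 GB H100s needed just to hold it: {total / 80e9:.0f}")  # ~645
```

At roughly half a megabyte of cache per token, serving a single 100-million-token context this way would tie up hundreds of GPUs per user, which is the kind of cost a cheaper sequence-dimension algorithm is meant to avoid.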
Magic has partnered with Google Cloud and NVIDIA to build its next-generation AI supercomputers: Magic-G4, powered by NVIDIA H100 Tensor Core GPUs, and Magic-G5, powered by NVIDIA GB200 NVL72 rack-scale systems. These machines will accelerate the training and deployment of Magic's models.
Magic has raised a total of $465 million, including a recent $320 million investment from Eric Schmidt, Jane Street, Sequoia, Atlassian, and others. This funding will support Magic's continued development and expansion.
Magic is at the forefront of AI innovation, pushing the boundaries of what ultra-long context models can do. Its focus on software development, combined with its partnerships with Google Cloud and NVIDIA, positions it as a leading player in the next generation of AI.