AI在线

SiliconFlow Launches the World's First Open-Source Large-Scale Hybrid-Attention Reasoning Model MiniMax-M1-80k


SiliconCloud, the cloud platform of SiliconFlow, has officially launched the world's first open-source large-scale hybrid-attention reasoning model, MiniMax-M1-80k (456B). The model is designed to provide strong support for complex tasks such as software engineering, long-context understanding, and tool use, and its performance rivals leading models such as o3 and Claude 4 Opus.


According to reports, MiniMax-M1-80k supports a context length of up to 128K tokens, making it well suited to long-text workloads; for users with special needs, the platform also offers backend support for 1M-token contexts. Pricing is ¥4 per million input tokens and ¥16 per million output tokens, and new users receive a ¥14 trial credit on first use.
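At those listed rates, the cost of a single request is simple arithmetic. As a minimal sketch (the rates are taken from the article; the example token counts are illustrative):

```python
# Estimate the cost of one MiniMax-M1-80k request at the listed rates:
# ¥4 per million input tokens, ¥16 per million output tokens.
INPUT_PRICE_PER_M = 4.0    # yuan per 1M input tokens
OUTPUT_PRICE_PER_M = 16.0  # yuan per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in yuan for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 100K-token prompt with a 10K-token reply:
print(round(estimate_cost(100_000, 10_000), 2))  # 0.56
```

At these rates, even a prompt that fills most of the 128K context costs well under one yuan, which is roughly what the ¥14 trial credit lets a new user experiment with.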


MiniMax-M1-80k is built on MiniMax-Text-01, combining a Mixture-of-Experts (MoE) architecture with the Lightning Attention mechanism and an efficient reinforcement-learning scaling framework. This design lets the model handle not only traditional mathematical-reasoning tasks but also operate in real sandboxed software-development environments, making it a strong choice for tasks that require long-text processing and deep reasoning.

In terms of performance, MiniMax's benchmark tests show that the model consumes only 25% of the FLOPs DeepSeek R1 uses when generating 100,000 tokens, indicating highly efficient inference scaling. Meanwhile, MiniMax-M1-80k's evaluation results in mathematics, programming, tool use, and long-context understanding are on par with top-tier models such as o3.

As a one-stop large-model cloud service platform, SiliconCloud is committed to providing developers with high-quality services. In addition to MiniMax-M1-80k, the platform hosts models such as DeepSeek-R1-0528, Qwen3, and the GLM-4 series, greatly broadening developers' options. Several models, including distilled versions of DeepSeek-R1, are free to use, helping developers achieve "token freedom."

To try the newly launched MiniMax-M1-80k, users can experience it online or integrate it into their applications via SiliconCloud, further advancing the development of generative AI applications.

Online Experience

https://cloud.siliconflow.cn/models

Third-party Application Integration Tutorial

https://docs.siliconflow.cn/cn/usercases/
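For integration, a minimal sketch of a chat request is shown below. It assumes SiliconCloud exposes an OpenAI-compatible chat-completions endpoint at `https://api.siliconflow.cn/v1` and that the platform model identifier is `MiniMaxAI/MiniMax-M1-80k`; both are assumptions, so check the model page and the tutorial linked above for the exact values.

```python
# Sketch of calling MiniMax-M1-80k through SiliconCloud's assumed
# OpenAI-compatible chat-completions endpoint, using only the stdlib.
import json
import os
import urllib.request

API_URL = "https://api.siliconflow.cn/v1/chat/completions"  # assumed endpoint

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble the JSON payload for a single chat completion."""
    return {
        "model": "MiniMaxAI/MiniMax-M1-80k",  # assumed model ID
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str) -> str:
    """Send one prompt and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['SILICONFLOW_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires a valid SILICONFLOW_API_KEY in the environment.
    print(chat("Summarize the design trade-offs of hybrid attention."))
```

Because the endpoint follows the OpenAI request shape, the same payload also works with the official `openai` SDK by pointing its `base_url` at the platform.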

Related News

SiliconFlow Launches the World's First Open-Source Large-Scale Hybrid-Attention Reasoning Model MiniMax-M1-80k

SiliconCloud has officially launched the world's first open-source large-scale hybrid-attention reasoning model, MiniMax-M1-80k (456B). This innovative model is designed to provide strong support for complex tasks such as software engineering, long-context understanding, and tool use, and its performance rivals top models such as o3 and Claude 4 Opus. Reportedly, MiniMax-M1-80k supports a context length of up to 128K, making long-text processing far more convenient.
6/17/2025 5:01:30 PM
AI在线

MiniMax-M1 Goes Open Source! 1M Ultra-Long-Context Reasoning, a New AI Contender Built for Only $530,000!

Chinese AI company MiniMax has officially announced the open-sourcing of its latest large language model (LLM), MiniMax-M1, which has drawn global attention for its ultra-long-context reasoning ability and low training cost. AIbase has compiled the latest information for a full overview of MiniMax-M1. A record-setting context window: 1M input, 80K output. With an astonishing window of 1 million input tokens and 80,000 output tokens, MiniMax-M1 is currently the open-source model best suited to long-context reasoning.
6/17/2025 10:02:37 AM
AI在线

World-Leading! MiniMax Open-Sources Its First Reasoning Model, M1

I wonder how many people still remember the "six little tigers" of the AI industry. People in the industry keep saying they have gone quiet for a long while. Their last related release was the MiniMax speech model update, Speech-02, which I wrote about some time ago.
6/18/2025 7:42:52 AM
数字生命卡兹克