AI在线

Xiaohongshu makes a major move! The all-new open-source large model dots.llm1 makes a stunning debut with 142 billion parameters!


Recently, the hi lab team of Xiaohongshu officially released its first open-source text large model — dots.llm1. This new model has attracted extensive attention in the industry due to its outstanding performance and massive number of parameters.

dots.llm1 is a large-scale Mixture of Experts (MoE) language model with an impressive 142 billion parameters, of which 14 billion are activated during inference. Trained on 11.2 trillion tokens of high-quality data, the model's performance rivals Alibaba's Qwen2.5-72B. This means that dots.llm1 not only exhibits high accuracy and fluency in text generation but also supports more complex natural language processing tasks.
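To illustrate why only a fraction of an MoE model's parameters are active for any given token, here is a minimal, generic sketch of top-k expert routing in Python. The layer sizes, expert count, and top-k value are arbitrary toy numbers chosen for the example, not the actual dots.llm1 configuration.

```python
# Minimal, generic sketch of Mixture-of-Experts routing (NumPy).
# Illustrative only: dimensions and top_k are toy values, NOT dots.llm1's real setup.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2              # toy sizes
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                           # (tokens, n_experts) routing scores
    top = np.argsort(logits, axis=-1)[:, -top_k:] # indices of the k best experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top[t]
        weights = np.exp(logits[t, chosen])
        weights /= weights.sum()                  # softmax over the chosen experts only
        for w, e in zip(weights, chosen):
            out[t] += w * (x[t] @ experts[e])     # only k of n_experts run per token
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_forward(tokens).shape)                  # (4, 64)
```

Because each token passes through only its top-k experts, the compute per token scales with the activated parameters (the 14 billion figure) rather than the full 142 billion, which is the cost advantage the MoE design is meant to deliver.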


It is worth noting that the pre-training process for this model did not use synthetic data; all data came from real-world scenarios with high-quality text. This gives dots.llm1 a distinct advantage in understanding the subtlety and naturalness of human language, providing users with a more realistic interactive experience.

Xiaohongshu’s decision to open-source this model marks its further expansion in the field of artificial intelligence, demonstrating its ambition in technological innovation. Open-sourcing not only helps enhance community participation and contributions but also provides developers with more opportunities to explore and apply this powerful tool.

As a platform centered on content sharing and social interaction, Xiaohongshu has been striving to improve user experience and technical capabilities. By launching dots.llm1, Xiaohongshu hopes to provide users with more intelligent services while encouraging more developers to participate in research and practice in the field of artificial intelligence.

In the future, we look forward to seeing dots.llm1 showcasing its potential in more fields, such as content creation, intelligent customer service, and more complex conversational systems. Without a doubt, Xiaohongshu is pushing the boundaries of artificial intelligence in its own way.

Related News

Xiaohongshu Releases Its First Open-Source Large Model dots.llm1: 11.2 Trillion Tokens of Non-Synthetic Data Boost Chinese-Language Performance

Xiaohongshu recently announced the open-sourcing of its first large-scale model, dots.llm1, a Mixture of Experts (MoE) model with 142 billion parameters. A notable design feature is that only 14 billion parameters are activated during inference, a structure that maintains high performance while substantially reducing training and inference costs. dots.llm1 was trained on 11.2 trillion tokens of non-synthetic, high-quality data, which is rare among current open-source large models and reflects Xiaohongshu's strong resources in language processing.
6/9/2025 10:00:56 AM
AI在线

Xiaohongshu Makes a Major Move! The All-New Open-Source Large Model "dots.llm1" Makes a Stunning Debut with 142 Billion Parameters!

Recently, Xiaohongshu's hi lab team officially released its first open-source text large model, dots.llm1. The new model has drawn broad industry attention for its outstanding performance and massive parameter count. dots.llm1 is a large-scale Mixture of Experts (MoE) language model with an impressive 142 billion parameters, of which 14 billion are activated.
6/10/2025 6:00:56 PM
AI在线

New BeanPod Video Generation Model to Be Released Tomorrow with Support for Seamless Multi-Camera Narration and Other Functions

The 2025 FORCE Original Power Conference will be held tomorrow. During the conference, capability upgrades to the DouBao large model family will be unveiled, and the highly anticipated new DouBao · Video Generation Model will also be officially released. According to reports, the new model has several outstanding features.
6/16/2025 9:49:01 AM
AI在线