AI在线

New BeanPod Video Generation Model to Be Released Tomorrow with Support for Seamless Multi-Camera Narration and Other Functions

Tomorrow, the 2025 FORCE Original Power Conference will be held in grand style. During the conference, the capability upgrade of the DouBao large model family will be unveiled. At the same time, the highly anticipated new DouBao · Video Generation Model will also be officially released.

According to reports, the new DouBao · Video Generation Model has several outstanding features. For multi-camera narration, an efficient model architecture combined with multimodal positional encoding and unified multitask modeling allows the model to seamlessly support distinctive, stable multi-shot expression, making video narration smoother and more natural.


In terms of multi-action and free camera-movement functions, the model has been trained on a rich variety of scenes, subjects, and actions, enabling it to respond more accurately to users' fine-grained instructions and smoothly generate complex video content containing multiple subjects, multiple actions, and free camera movement, meeting users' diverse creative needs.

Moreover, the model also excels in motion stability and visual realism. The dynamics of the frame and its subjects are more natural, structurally more coherent, and less prone to distortion. Following their instructions, users can generate high-quality video content in styles such as realistic, anime, film and television, and advertising, opening up more possibilities for creative expression.

The capability upgrade of the DouBao large model family and the release of the new video generation model will bring new development opportunities to related fields.

Related News

Alibaba Open Sources All-in-one Video Foundation Model to Empower Video Generation and Editing

On the evening of May 14th, Alibaba officially launched Tongyi Wanxiang Wan2.1-VACE, currently the industry's most comprehensive video generation and editing model. Its highlight lies in its breadth of capabilities: it can simultaneously handle text-to-video generation, image-based video generation, video retouching, local editing, background extension, duration extension, and other foundational generation and editing functions. This innovation further lowers the threshold for video production, allowing more creators to get started easily.
5/15/2025 10:01:52 AM
AI在线

Alibaba Qianwen Wan2.1-VACE Open Sourced, Claimed to Be the First Open-Source Unified Video Editing Model

Wanxiang "Wan2.1-VACE" has been released as open source, marking a major technological shift in the video editing field. The 1.3B version of Wan2.1-VACE supports 480P resolution, while the 14B version supports both 480P and 720P. VACE gives users a one-stop video creation experience: they can complete tasks such as text-to-video generation, image-reference generation, local editing, and video extension without frequently switching between different models or tools, greatly improving creative efficiency and flexibility.
5/15/2025 10:01:53 AM
AI在线

Large Model Research Wins Best Paper as WWW 2024 Awards Are Announced

Several of the award-winning papers announced this time have Chinese authors. The Web Conference (formerly the International World Wide Web Conference, WWW) is the top conference in the World Wide Web field; it was founded by Turing Award winner Tim, is certified by the China Computer Federation as a CCF-A conference, and is held annually. The conference has announced the Best Student Paper Award, the Best Paper Award, and the Test of Time Award. Best Paper Award: the WWW 2024 Best Paper went to research related to large models. Title: Mechanism Design for Large Language Models. Paper address: :Paul Dü
5/18/2024 4:06:00 PM
机器之心