
          Chinese AI Model Emu3 Handles Text, Image, Video Seamlessly

          Source: Science and Technology Daily | 2024-12-17 15:44:35 | Author: Gong Qian

          On October 21, the Beijing Academy of Artificial Intelligence (BAAI), a Chinese non-profit organization engaged in AI R&D, released Emu3, a multimodal AI model that seamlessly integrates text, image, and video modalities into a single, unified framework.

          The BAAI research team said Emu3 is expected to be used in applications such as robot brains, autonomous driving, and multimodal dialogue and inference.

          Emu3 is based solely on next-token prediction, demonstrating that this approach can be a powerful paradigm for multimodal models.

          Existing multimodal AI models are mostly designed for specific tasks, each with its own architecture and methods. In video generation, for instance, many developers use the diffusion transformer (DiT) architecture adopted by Sora. Other models fill single niches: Stable Diffusion for text-to-image synthesis, Sora for text-to-video generation, and GPT-4V for image-to-text generation.

          In contrast to these models, which combine isolated skills rather than offering an inherently unified ability, Emu3 eliminates the need for diffusion or compositional approaches. By tokenizing images, text, and videos into a shared discrete space, BAAI was able to train a single transformer from scratch across all three modalities.
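The idea of a shared discrete token space can be sketched in a few lines. This is an illustrative toy, not Emu3's actual code: the vocabulary entries, marker names, and helper functions below are all assumptions made for demonstration. The point is that once text tokens and visual codes live in one ID space, next-token prediction treats them identically.

```python
# Toy sketch (assumed names, not Emu3's implementation): text tokens,
# discrete visual codes, and boundary markers share one vocabulary, so a
# single autoregressive model predicts the next token regardless of modality.
VOCAB = {
    "<boi>": 0, "<eoi>": 1,      # hypothetical image begin/end markers
    "a": 2, "cat": 3,            # text tokens
    "img_17": 4, "img_93": 5,    # discrete codes from a visual tokenizer
}

def build_sequence(text_tokens, image_tokens):
    """Interleave text and image tokens into one flat token-ID sequence."""
    return ([VOCAB[t] for t in text_tokens]
            + [VOCAB["<boi>"]]
            + [VOCAB[t] for t in image_tokens]
            + [VOCAB["<eoi>"]])

def next_token_pairs(sequence):
    """Next-token prediction targets: each position predicts its successor."""
    return list(zip(sequence[:-1], sequence[1:]))

seq = build_sequence(["a", "cat"], ["img_17", "img_93"])
pairs = next_token_pairs(seq)
```

Training then reduces to the familiar language-modeling objective over `pairs`; no diffusion process or separate vision-language bridge is needed.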

          Emu3 outperforms several well-established task-specific models in both generation and perception tasks, surpassing flagship models such as SDXL and LLaVA.

          In September, BAAI open-sourced the key technologies and models of Emu3, including the chat model and the generation model after supervised fine-tuning.

          Emu3 has been receiving rave reviews from overseas developers. "For researchers, a new opportunity has emerged to explore multimodality through a unified architecture, eliminating the need to combine complex diffusion models with large language models. This approach is akin to the transformative impact of transformers in vision-related tasks," AI consultant Muhammad Umair said on social media platform Meta.

          While next-token prediction is considered a promising path towards artificial general intelligence, it has struggled to excel in multimodal tasks, which have been dominated by diffusion models such as Stable Diffusion and compositional approaches such as CLIP combined with large language models.

          Raphael Mansuy, co-founder of QuantaLogic, an AI agent platform, thinks that Emu3 has significant implications for AI development. Mansuy wrote on X that Emu3's success suggests several key insights: next-token prediction as a viable path to general multimodal AI; the potential for simpler, more scalable model architectures; and a challenge to the dominance of diffusion and compositional approaches.

          Editor: GONG Qian
