
Meta strikes again: Sora-like video generation model released, stock price hits record high

2024-10-05


At its developer conference at the end of September, Meta unveiled a product it had been honing for ten years: the holographic AR glasses Orion, billed as "the most advanced glasses to date," which drove its stock price to a record high. On the evening of October 4, Beijing time, Meta dropped another bombshell, releasing Movie Gen, a Sora-like video generation model that officials called "the most advanced media foundation model to date."

As of the close on October 5, Meta shares had risen 2.26% to a record high of $595.94. Since the beginning of this year, Meta's stock price has climbed more than 70%, lifting its total market value to $1.51 trillion. As the stock continues to rise, CEO Mark Zuckerberg has for the first time surpassed Amazon founder Jeff Bezos to become the world's second-richest person, behind only Elon Musk.

In its official blog, Meta described the newly released Meta Movie Gen as an advanced suite of immersive storytelling models with four core capabilities: video generation, personalized video generation, precise video editing, and audio generation. Judging from Meta's demonstration videos, it achieves good results in visual quality, detail, smoothness of character motion, and adherence to physical laws.

In terms of specific features, users can upload a picture and have Meta Movie Gen generate a personalized video that preserves the subject's appearance and movements. Users can also supply a video file or text and have Movie Gen generate matching audio. Movie Gen supports high-definition videos of up to 16 seconds at 1080p and 16 frames per second, and can generate high-quality audio clips of up to 45 seconds.

However, like Sora, Movie Gen is for now a "futures" product: it is not yet open to the public, and there is no clear timetable. Officials say they are actively working with professionals and creators in the entertainment industry and expect to integrate it into Meta's own products and services sometime next year.

According to foreign media, Meta vice president Connor Hayes revealed a key reason for the delayed launch: generating a video from a text prompt with Meta Movie Gen can currently take tens of minutes, which greatly harms the user experience. Meta hopes to further improve generation efficiency and bring the video service to mobile devices as soon as possible to better serve consumers.

Meta says Movie Gen is trained on a combination of licensed and publicly available datasets. As for the technical details, the Meta AI research team also shared a 92-page paper on social media. According to the paper, the team relies mainly on two foundation models to cover this broad range of capabilities: the Movie Gen Video model and the Movie Gen Audio model.

Movie Gen Video is a 30B-parameter foundation model for text-to-video generation, capable of producing high-quality HD videos up to 16 seconds long. Movie Gen Audio is a 13B-parameter model for video- and text-to-audio generation, capable of producing up to 45 seconds of high-quality, high-fidelity audio, including sound effects and music synchronized with the video.
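To put the stated limits in perspective, a short back-of-the-envelope calculation (a sketch based only on the figures reported above, not on anything in Meta's paper) shows how much raw image data a maximum-length clip implies; the 3-bytes-per-pixel RGB assumption is ours:

```python
# Back-of-the-envelope sizing for a maximum-length Movie Gen clip,
# using the reported limits: 1080p, 16 seconds, 16 frames per second.
WIDTH, HEIGHT = 1920, 1080   # 1080p frame dimensions
FPS, SECONDS = 16, 16        # reported frame rate and maximum duration
BYTES_PER_PIXEL = 3          # uncompressed 8-bit RGB (an assumption)

frames = FPS * SECONDS                        # 256 frames per clip
bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
total_bytes = frames * bytes_per_frame        # raw, pre-compression size

print(frames)                                 # 256
print(round(total_bytes / 1e9, 2))            # 1.59 (GB, uncompressed)
```

In other words, a single maximum-length clip is roughly 1.6 GB of raw pixels before any video compression, which helps explain why generation currently takes tens of minutes per prompt.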

The pre-training stage reportedly uses large amounts of image and video data, allowing the model to understand a wide range of visual concepts, including object motion, interactions, geometry, camera movement, and physical laws. To improve generation quality, the model is then supervised fine-tuned (SFT) on a small, carefully curated set of high-quality videos with text captions. The paper notes that post-training is an important stage of Movie Gen Video training, further improving video quality, particularly for the personalization and editing of images and videos.

In the technical paper, the research team published comparisons between the Movie Gen Video model and mainstream video generation models. Because Sora is not publicly available, the researchers could only compare against its publicly released videos and prompts; for other models, such as Runway Gen3, LumaLabs, and Kling 1.5, they generated videos themselves through API interfaces.

Measured by win rate, Movie Gen Video is significantly better than Runway Gen3 and LumaLabs in overall quality, holds a slight edge over OpenAI's Sora, and is roughly on par with the Chinese model Kling 1.5.
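A win-rate comparison of this kind typically works by showing human raters paired outputs from two models for the same prompt and tallying preferences. A minimal sketch of the metric, where a positive net win rate favors the first model (the counts below are invented for illustration and are not the paper's numbers):

```python
def net_win_rate(wins: int, losses: int, ties: int) -> float:
    """Net win rate: share of pairwise ratings won minus share lost.

    Positive values favor the first model, negative values the second,
    and values near zero indicate the two models are roughly on par.
    """
    total = wins + losses + ties
    return (wins - losses) / total

# Hypothetical tallies for Movie Gen vs. a competitor on 100 shared prompts.
print(net_win_rate(wins=55, losses=30, ties=15))  # 0.25 -> clear advantage
print(net_win_rate(wins=36, losses=34, ties=30))  # 0.02 -> roughly on par
```

Counting ties in the denominator keeps the metric conservative: two models that tie on most prompts score near zero even if one wins slightly more of the remainder.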

Meta, which once struggled in the metaverse field, successfully turned its fortunes around in 2024 on the back of generative AI. In early August, JPMorgan issued a report raising Meta's target price from $480 to $610, noting the company's strong recent performance and arguing that it has invested appropriately in key long-term initiatives, especially AI. At the end of September, JPMorgan again expressed optimism about Meta, raising its target price from $610 to $640.

Meta's Q2 2024 financial report, released in August, showed revenue of US$39.071 billion, up 22% year-on-year, and net profit of US$13.465 billion, up 73% year-on-year, both exceeding Wall Street analysts' expectations. Meta said its heavy investment in artificial intelligence has helped improve the performance of its online advertising platform, a major driver of the revenue growth.

Meta's revenue has grown by more than 20% for four consecutive quarters. The company forecasts total revenue of between US$38.5 billion and US$41 billion for the third quarter of 2024, an outlook that also exceeds analyst expectations.