news

keling, luma, and runway are back! how far is ai video from making a blockbuster?

2024-09-21

in the past week, the ai video field has been thick with the smoke of battle, and a new arms race is unfolding.

first, two major players, runway and luma, launched api services at almost the same time. then keling launched version 1.5 with great fanfare, bringing a "brush" function aimed at professionals. for a while, its servers were overwhelmed by enthusiastic users.

the following is an advertising video made by keling user @希希叔叔 with version 1.5; it shows just how far ai video has evolved.

ai video production seems to have entered the fast lane. how far is it from producing blockbusters comparable to professional productions? which ai video application will become the leader in this field? what are the future development trends?

at 8 pm on thursday, wang yuquan answered these questions one by one in the qianshao live broadcast, interpreting the development path and endgame of the ai video industry with exclusive data and industry insights not available anywhere else.

today, let’s review the highlights of the live broadcast, watch the amazing ai videos, and see the future of this business war.

1. why has the competition in ai video suddenly accelerated?

before understanding the ai video business landscape, let’s first look at the leading players who have suddenly become popular in the past week.

overseas, runway and luma have extended the battle to the developer ecosystem and launched api services one after another.

runway first released the "video to video" function. with this function, ai becomes a special effects master that can convert the style of any video, add exaggerated special effects, and turn street scenes into alien planets or forests.

a few days later, runway brought another update: an api service based on the gen-3 alpha turbo model. the company says generation is 7 times faster and costs 50% less. it focuses on high-fidelity video generation and fine motion control, aiming to attract developers who chase the best possible results.

not to be outdone, luma released the dream machine api service almost at the same time, focusing on multimodal input and personalized customization, trying to attract developers with more flexible functions.
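
for developers, "launching an api service" means video generation can be called from code like any other cloud service. here is a minimal sketch of what such an integration might look like, assuming a hypothetical endpoint, field names, and an asynchronous polling flow rather than runway's or luma's actual api.

```python
# hypothetical sketch of calling a text-to-video api such as the ones
# runway and luma have opened up. the endpoint, field names, and polling
# flow are illustrative assumptions, not either vendor's real interface.
import time

import requests

API_BASE = "https://api.example-video-vendor.com/v1"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def generate_video(prompt: str, duration_s: int = 5) -> str:
    """submit a generation job and poll until the finished video is ready."""
    # 1. submit an asynchronous generation task.
    job = requests.post(
        f"{API_BASE}/generations",
        headers=HEADERS,
        json={"prompt": prompt, "duration": duration_s, "resolution": "1080p"},
        timeout=30,
    ).json()

    # 2. generation takes tens of seconds, so assume the api is asynchronous
    #    and poll the job until it succeeds or fails.
    while True:
        status = requests.get(
            f"{API_BASE}/generations/{job['id']}", headers=HEADERS, timeout=30
        ).json()
        if status["state"] == "succeeded":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)


if __name__ == "__main__":
    print(generate_video("a rainy street scene transformed into an alien planet"))
```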

in china, keling, which built its user base on strong performance and free access, released a version 1.5 update on september 18. the new model not only supports 1080p resolution but can also generate 4 videos at once, with clearer images and smoother motion.

this keling update also brings the motion brush function, which has long been a standard feature of runway and one of the reasons it was favored by film companies early on.

the brush function lets users mark specific areas for the ai to animate and precisely control the movement trajectory of on-screen elements, giving creators much more room to shape the result.
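
conceptually, a motion brush needs two pieces of information from the creator: which region of the frame should move, and the path it should follow. the sketch below illustrates that data shape with hypothetical python structures; it is not the real keling or runway interface.

```python
# illustrative data shape for a motion-brush request; these structures are
# hypothetical and are not keling's or runway's actual interface.
from dataclasses import dataclass, field


@dataclass
class BrushStroke:
    # polygon (pixel coordinates) marking the painted region on the first frame
    region: list[tuple[int, int]]
    # ordered waypoints describing where that region should move over time
    trajectory: list[tuple[int, int]]


@dataclass
class MotionBrushRequest:
    image_path: str                       # the still image to animate
    strokes: list[BrushStroke] = field(default_factory=list)
    keep_background_static: bool = True   # leave unpainted areas unchanged


# example: make a car in the lower-left corner drive toward the right edge
request = MotionBrushRequest(
    image_path="street_scene.png",
    strokes=[BrushStroke(
        region=[(120, 600), (360, 600), (360, 760), (120, 760)],
        trajectory=[(240, 680), (600, 670), (980, 660)],
    )],
)
```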

the ai video competition seems to be accelerating again. why? in qianshao's analytical framework, this is exactly a sign that technical performance tuning is not yet complete and that business models are still being explored.

all ai video companies are trying every possible way to integrate into the film and television industry and build their own application ecosystem by opening api interfaces, imitating competitors, and adding functions that users need.

why do you say that? let’s continue our conversation.

2. asml's ai promotional video gives a glimpse into the industry ceiling

every time an ai video application is updated, its impressive demos set viewers' pulses racing. however, if we return to the reality of the industry, we find that these eye-catching updates have not significantly changed the film and television production process. in the words of qianshao: the performance tuning of today's ai video generation tools is not yet complete, and they are still being used as "industry enhancements."

let’s take the asml technology promotional video “standing on the shoulders of giants” as an example. you can click to watch the video first, and then we will continue the discussion.

isn’t this film produced by a professional team amazing? you will be even more amazed when you learn about its production process.

the asml production team first used 1,963 midjourney prompts to generate 7,852 images, then used more than 900 computers to render and optimize them, and finally used runway to process them into the final short film.

during the production process, the scene of isaac newton and his apple traveling between the planets was the most difficult, requiring more than 20 attempts and over 9,800 frames of images before the final cut was achieved.

this is one of the most common workflows for integrating ai video tools into film and television production: combining ai image generation, ai video editing, and traditional rendering technology. even with ai, what really matters is still the human labor.

in the live broadcast, wang yuquan walked through the common workflow of ai video production. once you hear it, the biggest pain point of today's ai video tools becomes clear: no single tool can make a complete video for you. you have to combine it with other tools for ai images, music, and more.
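
to make the workflow concrete, here is a rough skeleton of that image-first pipeline, with placeholder functions standing in for the text-to-image, rendering, image-to-video, and editing steps; all names and signatures are illustrative assumptions, not real tool apis.

```python
# skeleton of the image-first workflow described above. each helper is a
# placeholder to be wired to a real tool (text-to-image, render farm,
# image-to-video model, editor); names and signatures are illustrative.
from pathlib import Path


def generate_images(prompts: list[str]) -> list[Path]:
    """stand-in for a midjourney-style text-to-image step."""
    raise NotImplementedError("wire up to your image-generation tool")


def upscale_and_render(images: list[Path]) -> list[Path]:
    """stand-in for the traditional render/optimization pass."""
    raise NotImplementedError("wire up to your render pipeline")


def image_to_video(keyframe: Path, motion_prompt: str) -> Path:
    """stand-in for an image-to-video model producing a short clip."""
    raise NotImplementedError("wire up to an ai video tool")


def edit_together(clips: list[Path], music: Path) -> Path:
    """stand-in for the human-driven cut, grade, and sound mix."""
    raise NotImplementedError("this part is still mostly manual labor")


def produce_short_film(storyboard: dict[str, str], music: Path) -> Path:
    # storyboard maps each image prompt to the motion we want for that shot
    stills = generate_images(list(storyboard.keys()))
    stills = upscale_and_render(stills)
    clips = [image_to_video(img, motion)
             for img, motion in zip(stills, storyboard.values())]
    return edit_together(clips, music)
```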

3. three paths for ai video business

ai video applications are still being optimized, and commercialization is still searching for its breakthrough.

during the live broadcast, we shared data on traffic growth and annual revenue of ai video applications, and also explained in detail the three paths of ai video:

1. runway leads the way in providing professional services to large b-side clients:

runway was one of the earliest entrants. the oscar-winning film "everything everywhere all at once" used runway extensively for its special effects. at the time, runway was still on its gen-1 model, yet a team of 8 achieved the effect of replacing a 200-person special effects team.

recently, runway announced a partnership with lionsgate, reportedly to train a dedicated ai video model for film production.

professional services for large b-side clients hold huge revenue potential, but the bar for technology and business partnerships is also high, and not every ai video platform can take part.

2. platform integration for small b-side customers:

the small b market is now becoming the focus of competition. it lacks the demanding requirements of large enterprise customers, yet its willingness to pay is stronger and more durable than that of c-end consumers, making it an attractive segment.

the first to be affected are naturally content producers such as video producers.

however, the competition in this field may be fiercer than you think.

canva, a design platform for small and medium-sized creators, has integrated runway's ai video generation service, giving users more convenient video production tools. youtube is likewise integrating ai functions into its editor, promising creators a smarter editing experience.

3. applications for the c-end, where keling and luma started:

the c-end market has always been a good place for rapid growth. relying on its first-mover advantage, luma achieved an astonishing 1010% quarter-on-quarter growth. similarly, keling quickly became a leading ai video player by offering free access.

however, c-end applications also carry greater uncertainty. viggle, which uses ai video technology to make photos of ordinary people dance, saw a 6-fold surge in c-end traffic, but its traffic declined just as quickly because of its single-purpose feature set.

the c-end market is changing rapidly. not only do you need to understand what technology can do, but you also need to understand what your users need!

4. insights into future trends from new applications

during the live broadcast, wang yuquan shared his judgment on the future trends of ai video applications, which can be summed up in one word: integration.

we can get a glimpse of this trend from the notebooklm+heygen case.

in this case, google's notebooklm can automatically digest a paper and generate a two-host conversational audio program. feed that audio into heygen to produce lip-synced virtual presenters, and you get a complete paper-explainer video.
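
the chain is simple enough to sketch. the functions below are hypothetical stand-ins for the two steps; neither tool exposes exactly this interface today, which is exactly the integration gap the trend points at.

```python
# sketch of chaining a paper-to-dialogue audio step with a talking-avatar
# video step. both functions are hypothetical stand-ins; neither notebooklm
# nor heygen exposes exactly this interface today.
from pathlib import Path


def paper_to_dialogue_audio(paper: Path) -> Path:
    """turn a paper into a two-host conversational audio file."""
    raise NotImplementedError("stand-in for a notebooklm-style audio overview")


def audio_to_avatar_video(audio: Path, host_a: str, host_b: str) -> Path:
    """render lip-synced virtual hosts speaking the audio."""
    raise NotImplementedError("stand-in for a heygen-style avatar renderer")


def paper_to_explainer_video(paper: Path) -> Path:
    audio = paper_to_dialogue_audio(paper)
    return audio_to_avatar_video(audio, host_a="avatar_01", host_b="avatar_02")
```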

if someone can integrate all of this together one day, does it mean that ai podcast conversations will become a new content category?

of course, ai podcast conversations may not necessarily be the best choice, but the same logic will definitely happen in the field of ai video!

the future of ai video is full of unlimited imagination. with the continuous advancement of technology, ai video production will become more convenient and efficient, and its application scenarios will become more extensive.

perhaps in the near future, everyone will be able to become a "director" and use ai tools to create their own "blockbuster". and ai video will also become a new way of expression, changing the way we record life and share stories.