
Challenging Apple? Google releases four AI phones late at night

2024-08-14


In the early morning of August 14th, Beijing time, at the ninth Made by Google event held in the United States, Google released the new Pixel 9 series of phones equipped with large AI models, along with the smarter Pixel Watch 3 and Pixel Buds Pro 2 earbuds.

"Android is redefining your phone with Gemini," Google said, completely reconstructing the Gemini assistant experience on mobile phones and other hardware, so users can talk to the assistant as naturally as talking to a person, and it can understand the intention, follow the user's thoughts, and complete complex tasks. Through deep integration with the operating system, Gemini on Android is more powerful.


Specifically, Google released a total of four AI phones: the Pixel 9, the Pro model in two sizes (Pixel 9 Pro and Pixel 9 Pro XL), and the foldable Pixel 9 Pro Fold, with starting prices of US$799, US$999, US$1,099 and US$1,799 respectively.

All of the new phones are equipped with Google's self-developed Tensor G4 chip, which Google calls its most efficient chip to date, opening apps and loading web pages faster. Tensor G4 was reportedly co-designed with Google DeepMind and is optimized to run the most advanced large AI models.

To support running large models, the three Pro devices come with 16GB of memory, and even the base Pixel 9 has 12GB. "This is important when you are trying to run artificial intelligence on the device, and the Pixel 9 series ships with an updated Gemini Nano model that adds multimodality, so it can analyze images, voice, and text," Google said.

In a blog post on its official website, Google detailed the new AI-driven features of the Pixel 9 series. Highlights include the built-in Gemini Live assistant, enhanced photo-editing tools, an image generator, customized weather forecasts, screenshot information recall, and the ability to save call records and details.

Gemini Live was demonstrated at the earlier Google I/O conference and is similar to GPT-4o's voice conversation mode. Users can chat with it naturally, ask questions, bounce ideas off it, and invoke apps on the system through the assistant. Users can interrupt mid-answer to dig into a particular point, or pause the conversation and come back to it later. It's like having a partner in your pocket that you can talk to at any time.


"For years, we've relied on digital assistants to set timers, play music, or control smart homes, and this technology has made it easier for us to get work done and save valuable time every day. Now with generative AI, we can provide a new type of assistance for complex tasks." Google introduced that Gemini has been fully integrated into the Android user experience and provides more context-aware features that only Android can achieve.

Just long-press the power button or say "Hey Google" and Gemini will appear. Users can tap the "Ask this screen" suggestion to get help with whatever is on screen, and if they are watching YouTube they can ask questions about the video. For example, if you are preparing to travel abroad and have just watched a travel video, tap "Ask Video" to request a list of all the restaurants mentioned in it, then ask Gemini to add them to Google Maps.

According to reports, Gemini Live will be available to all Gemini Advanced subscribers, and buyers of the Pro devices receive one year of Gemini Advanced. Starting today, Gemini Live is rolling out in English to Gemini Advanced users on Android phones, and will expand to more languages in the coming weeks.

Because Gemini is deeply integrated into Android, the phone can do more than just read the screen; it can interact with many of the apps you already use. For example, you can drag and drop images generated by Gemini directly into apps like Gmail and Google Messages.

"Let's say you're hosting a dinner party. You can ask Gemini to find the recipe Jenny sent you in your Gmail and ask it to add the ingredients to your shopping list. When you're hanging out with college friends, you can ask Gemini to 'make a playlist of songs that remind me of the late 90s.' Without too much detail, Gemini will know what you want." Google said it will launch new extensions in the coming weeks, including some extensions for tasks, programs and music.

Beyond the smart assistant, the Pixel 9 series also gains other features, including enhanced photo-editing tools. For example, getting the photographer into a group shot when there is no third person to hold the camera has always been a problem. Now, using real-time augmented reality, the "Add Me" feature can "generate" a complete group photo.


New AI features on the Pixel 9 series also include the Pixel Studio image generator, which uses generative AI to turn a user's text prompt into an illustration that can be shared with friends and family via messages. Pixel Studio combines an on-device diffusion model running on Tensor G4 with the Imagen 3 model in the cloud, providing fast text-to-image generation on the phone.
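Based on that description, a minimal sketch of such a hybrid setup might route quick drafts to the on-device model and send higher-quality requests to the cloud. The class and function names below are purely illustrative assumptions, not Google's actual Android or Pixel Studio APIs.

```kotlin
// Hypothetical sketch of the hybrid on-device/cloud routing described for
// Pixel Studio: a fast on-device diffusion model for quick drafts, and a
// larger cloud model (Imagen 3 in Google's description) for final renders.
// All type and function names here are illustrative, not real Google APIs.

interface ImageGenerator {
    fun generate(prompt: String): ByteArray
}

class OnDeviceDiffusion : ImageGenerator {
    // Stand-in for a quantized diffusion model running on the Tensor G4 NPU.
    override fun generate(prompt: String): ByteArray = ByteArray(0)
}

class CloudImagenClient : ImageGenerator {
    // Stand-in for a network call to a server-side model.
    override fun generate(prompt: String): ByteArray = ByteArray(0)
}

class PixelStudioSketch(
    private val local: ImageGenerator,
    private val cloud: ImageGenerator,
) {
    // Route the request: draft locally for low latency, go to the cloud
    // when the user asks for a high-quality image and the device is online.
    fun generate(prompt: String, highQuality: Boolean, online: Boolean): ByteArray =
        if (highQuality && online) cloud.generate(prompt) else local.generate(prompt)
}

fun main() {
    val studio = PixelStudioSketch(OnDeviceDiffusion(), CloudImagenClient())
    val draft = studio.generate(
        "a birthday card with a corgi in a party hat",
        highQuality = false, online = true,
    )
    println("draft bytes: ${draft.size}")
}
```

The design point the sketch illustrates is simply that latency-sensitive drafts stay on the phone while quality-sensitive requests can fall back to a larger model in the cloud.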

Google's AI phones also support a new feature similar to Microsoft's Recall, which can organize and retrieve information from screenshots. Unlike Microsoft's approach, it works only on screenshots the user captures manually.

Most users have had the experience of taking a screenshot of something they wanted to remember, only to be unable to find it when they needed it. For this reason, Google built a dedicated Pixel Screenshots app that helps users save, organize, and recall important information. For example, if you screenshotted the door code for an upcoming holiday rental but can't recall it when you arrive, you can simply ask Pixel Screenshots to find it for you quickly and easily.
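The idea can be pictured as extracting text from each captured screenshot, indexing it, and answering lookups against that index. The sketch below is an assumption about the general shape of such a feature; the class names and the simple keyword match are placeholders, not Google's implementation.

```kotlin
// Minimal sketch of the screenshot-recall idea behind Pixel Screenshots:
// extract text from each manually captured screenshot, index it, and answer
// natural-language lookups against that index. The OCR step and class names
// are placeholders, not Google's actual code or APIs.

data class Screenshot(val id: String, val extractedText: String)

class ScreenshotIndex {
    private val items = mutableListOf<Screenshot>()

    // In the real product the text would come from on-device OCR;
    // here the caller supplies it directly.
    fun add(id: String, extractedText: String) {
        items += Screenshot(id, extractedText)
    }

    // Very simple keyword match; a production system would more likely
    // use semantic search over embeddings.
    fun find(query: String): List<Screenshot> {
        val terms = query.lowercase().split(" ").filter { it.length > 2 }
        return items.filter { shot ->
            terms.any { shot.extractedText.lowercase().contains(it) }
        }
    }
}

fun main() {
    val index = ScreenshotIndex()
    index.add("IMG_0142", "Welcome! Your door code for the beach house is 4821.")
    index.add("IMG_0197", "Flight AB123 departs 09:40, gate 22.")

    // "What's the door code for the rental?" reduces to a keyword lookup here.
    index.find("door code rental").forEach { println("${it.id}: ${it.extractedText}") }
}
```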

AI has also reworked the weather forecast. Google uses artificial intelligence to make the experience more accurate and helpful, and Gemini Nano can generate a customized AI weather report summarizing the day's conditions for the user.
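One plausible way such a report could be produced is by compacting structured forecast data into a prompt for a small on-device model. The interface below is a stand-in assumption, not the real Gemini Nano or Android AICore API.

```kotlin
// Illustrative sketch of turning structured forecast data into a short
// customized weather report with an on-device model. The model interface
// is a placeholder, not Google's actual Gemini Nano API.

data class HourlyForecast(val hour: Int, val tempC: Int, val rainChance: Int)

fun interface OnDeviceModel {
    fun complete(prompt: String): String
}

fun weatherReport(model: OnDeviceModel, forecast: List<HourlyForecast>): String {
    // Compact the raw data into a prompt the small model can summarize.
    val data = forecast.joinToString("; ") {
        "${it.hour}:00 ${it.tempC}°C, ${it.rainChance}% rain"
    }
    return model.complete("Write a one-sentence weather report for today: $data")
}

fun main() {
    // A canned response stands in for the real on-device inference call.
    val fakeModel = OnDeviceModel { "Mild and mostly dry, with a brief shower around 15:00." }
    val forecast = listOf(
        HourlyForecast(9, 18, 10),
        HourlyForecast(12, 22, 20),
        HourlyForecast(15, 21, 60),
    )
    println(weatherReport(fakeModel, forecast))
}
```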

There is also a new feature called Call Notes, which lets users save a private summary of a conversation after a call. If you need details such as an appointment time, an address or a phone number, just open the call log and the summary and recording will be right there. To protect privacy, Call Notes runs entirely on the device, and when the feature is activated everyone on the call is notified.


Beyond Google's own Pixel phones, the Gemini large model can also be used on hundreds of phones from dozens of manufacturers; Google said it can be experienced on Samsung's Galaxy Z Fold6 and the Motorola razr+. Google has also previously mentioned cooperation with Xiaomi and OPPO phones in China.

"Today, we've reached an inflection point where we believe the help of AI assistants far outweighs their challenges." For now, Google believes that we are still in the early stages of exploring AI assistants, and the AI ​​experience will get better and better in the future.

The latest data from market research firm IDC shows that Pixel phones accounted for about 4.6% of the US market in 2023, up from 3.6% in 2022 and far above the roughly 1% share in 2021. Compared with the two giants Apple and Samsung, however, Google's market share remains low. How much of a difference AI can make for Google's Pixel phones, and whether they can challenge that dominance, remains unclear.