
The United States launches "Manhattan Project 2.0": has AI entered its Oppenheimer moment? $6 billion for drones and more than 800 active AI projects

2024-07-15



New Intelligence Report

Editors: Aeneas, 好困

【New Intelligence Introduction】Is Manhattan Project 2.0 coming? The US military already has more than 800 active AI projects and requested $1.8 billion in AI funding for fiscal year 2024 alone. Over the next five years, the United States will also spend roughly $6 billion on the research and development of unmanned collaborative fighter jets. AI now seems to have entered its Oppenheimer moment.

Artificial intelligence has entered the Oppenheimer moment.

Now, AI is increasingly being put to military use, and the related arms industry is booming.

The multibillion-dollar AI arms race has engulfed Silicon Valley giants and countries around the world.

Intensifying conflicts around the world are both an accelerator and a testing ground for AI warfare. Militaries everywhere are keenly interested in AI, yet the field remains largely unregulated.


The U.S. military already has more than 800 active AI projects and requested $1.8 billion in AI funding in its 2024 budget alone.

AI, taking root in militaries and governments around the world, is likely to fundamentally change society, technology, and warfare.

Drone advertising becomes reality

A team of soldiers is under rocket fire during close-range urban combat.

One of them calls in over the radio, and soon after, a fleet of small autonomous drones loaded with explosives flies in.

These suicide drones fly into buildings and begin scanning for enemies. Once they find their targets, they detonate on command.

The scene above comes from an advertisement by the weapons company Elbit Systems, which promotes how its AI drones can "maximize lethality and combat tempo."


Now, the technologies developed by Elbit are increasingly entering the real world.

“Over time, we’re likely to see humans cede more judgment to machines,” said Paul Scharre, executive vice president and research director at the Center for a New American Security think tank.

"If we look back in 15 or 20 years, we'll realize we've crossed a very important threshold."


In 2023, a drone with integrated AI detects explosive devices.

The United States spends $1 billion on the "Replicator Program"

Although AI development has seen a surge in investment in recent years, the development of autonomous weapons systems in warfare can be traced back decades.

Of course, these developments rarely appear in public discussion and are instead the subject of study by a small number of academics and military strategists.

But now, as public attention to AI continues to grow, whether weapons are truly "autonomous" has also become a hotly debated topic.

According to experts and researchers, we can understand "autonomy" as a spectrum rather than a simple binary concept.

But they generally agree that machines are now able to make more decisions without human input than ever before.


And money is pouring into companies and governments on the promise that AI can make warfare smarter, cheaper, and faster.

The Pentagon plans to spend $1 billion by 2025 on its "Replicator Program," which aims to develop large numbers of unmanned combat drones that use AI to hunt for threats.

Over the next five years, the U.S. Air Force plans to allocate approximately $6 billion for the research and development of unmanned collaborative fighter jets to build a fleet of 1,000 AI fighter jets that can fly autonomously.

The Pentagon has also poured hundreds of millions of dollars in recent years into a secretive artificial intelligence program called Project Maven, which focuses on technologies such as automatic target recognition and surveillance.


British soldiers use AI during a military exercise

Technology companies are signing huge contracts

At the same time, the growing military demand for AI and autonomy has helped technology companies and arms dealers win huge orders.

Anduril, a company developing autonomous attack drones, unmanned combat aircraft, and underwater vehicles, is raising a new round of venture capital at an estimated $12.5 billion valuation.


Anduril's founder, Palmer Luckey, is a 31-year-old billionaire who this year signed a contract with the Pentagon to develop unmanned fighter jets.

Silicon Valley billionaire Peter Thiel also founded Palantir, a technology and surveillance company that has participated in AI projects such as the US Army's "first AI-defined vehicle."


In May, the Pentagon announced it had awarded Palantir a $480 million contract for AI technology that helps identify enemy targets.

Palantir's technology is already being used in several military operations.


Palantir involved in US Army's 'first AI-defined vehicle'

Anduril and Palantir, named after the sword and the seeing-stone from The Lord of the Rings, are just a small part of the international AI war gold rush.


Helsing raised nearly $500 million in funding for its AI defense software and was valued at $5.4 billion this month.

Meanwhile, Elbit Systems disclosed in a financial filing in March that it had signed $760 million in munitions contracts in 2023, and that its revenue over the past year reached $6 billion.



Big tech companies have also been more open to the defense industry and its use of AI than in past years.

In 2018, Google employees protested the company's participation in the military's Project Maven, believing that it violated ethical and moral responsibilities. Under pressure at the time, Google severed its cooperation with the project.

However, Google has since reached a $1.2 billion agreement with a certain government to provide it with cloud computing services and AI capabilities.

This year, Google fired dozens of employees after some protested military contracts, and CEO Pichai told employees bluntly: "This is a business."


In 2022, similar employee protests occurred at Amazon, and again, the company did not change its policies.

Double Black Box

With so much money flowing into the defense technology sector, many companies and technologies are operating with little transparency and accountability, researchers warn.

When their products fail unexpectedly, the consequences can be fatal, but these arms dealers are usually not held responsible.

Moreover, the secrecy inherent in the U.S. national security establishment means that companies and the government are under no obligation to disclose details of how these systems work.

When the government adopts secret, proprietary AI technology and subjects it to the secretive world of national security, it creates what University of Virginia law professor Ashley Deeks calls a “double black box.”

In these cases, it is difficult for the public to know whether these systems are operating correctly or ethically, and they often leave a lot of room for error.

“I’ve seen a lot of hype about AI in the business world, and the word ‘AI’ gets abused everywhere,” said Scharre of the Center for a New American Security. “Once you get under the hood, it may not be as sophisticated as advertised.”


Activists protest in front of the Brandenburg Gate in Berlin, Germany, demanding "Stop Killer Robots"

Human in the loop

While companies and national militaries are reluctant to reveal specific details about how their systems work, they do participate in many debates about the ethical responsibilities and regulation of AI systems.

For example, diplomats and arms dealers generally agree that there should always be a "human in the loop" in the decision-making process, rather than leaving decisions entirely to machines.

Yet there is little consensus on how to implement human oversight.

“Everyone can identify with the concept, but at the same time everyone has a different idea of what it means in practice,” said Rebecca Crootof, a law professor and autonomous warfare expert at the University of Richmond and DARPA’s first visiting scholar.

“In terms of actually guiding technical design decisions, the concept is not that useful.”


Protesters gather outside the Elbit Systems factory in Leicester, UK

Furthermore, the complexities of human psychology and accountability add further complications to high-level discussions of “human in the loop.”

An example researchers often cite is self-driving cars, where the human must be ready to take back control of the vehicle when necessary in order to keep a "human in the loop."

But if a self-driving car makes a mistake, or causes a human to make a bad decision, is it fair to blame the driver?

To be more specific, if a self-driving car hands over control to a human a few seconds before a crash, who is responsible in this situation?

Scharre of the Center for a New American Security pointed out something interesting: we sometimes keep a human in the cockpit so that when something goes wrong there is someone to hold responsible, which is called a "moral buffer."

How to regulate? Opinions differ

At a conference in Vienna in late April, international organizations and diplomats from 143 countries gathered to discuss regulating the use of artificial intelligence and autonomous weapons in warfare.

For many years, the United Nations has failed to reach any comprehensive treaty on this issue.

Rather than a total ban on autonomous weapons, Austrian Foreign Minister Alexander Schallenberg called for something far more moderate: "At the very least, the most far-reaching and important decision, who lives and who dies, should remain in the hands of humans, not machines."


The International Committee of the Red Cross and the Stop Killer Robots campaign have been calling for bans on certain types of autonomous weapons systems for more than a decade.

“It’s very worrying that we’re seeing a lot of money being invested in technologies like autonomous weapons and AI targeting systems,” said Catherine Connolly, a manager at Stop Killer Robots.

Today, the situation is becoming more urgent.

Arms control advocates acknowledge that time is running out for regulation.

“We used to call for a preventive ban on fully autonomous weapons systems, but now we no longer use the word ‘preventive’ because we are so close to autonomous weapons,” said Mary Wareham, deputy director of the crisis, conflict and arms division at Human Rights Watch.

Calls for increased regulation have been opposed by the United States and other countries as well as arms dealers.

Luckey, Anduril’s founder, has made vague promises about keeping “humans involved” in his company’s technology but has publicly opposed regulation and bans on autonomous weapons.

Palantir CEO Alex Karp has said repeatedly that we have reached an Oppenheimer moment.


An AI-integrated drone is clearing mines

Experts say this lack of regulation is not unique to autonomous weapons, but a problem facing the international legal system in general.

But many worry that once these technologies are developed and integrated into the military, they will become long-lasting and more difficult to regulate.

“Once the military starts using a weapon, it’s harder to give it up, because the military has become dependent on it,” said Scharre of the Center for a New American Security. “It’s no longer just a financial investment.”

If the development of autonomous weapons and AI is anything like other military technologies, their use will likely trickle down to domestic law enforcement and border patrol agencies, further entrenching the technology.

“A lot of times, technology used in war ends up coming back home,” Connolly said.

References:

https://www.theguardian.com/technology/article/2024/jul/14/ais-oppenheimer-moment-autonomous-weapons-enter-the-battlefield