Elon Musk wants to pause ‘dangerous’ AI development. Bill Gates disagrees—and he’s not the only one


If you’ve heard a lot of pro-AI chatter in recent days, you’re probably not alone.

AI developers, prominent AI ethicists and even Microsoft co-founder Bill Gates have spent the past week defending their work. That’s in response to an open letter published last week by the Future of Life Institute, signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month halt to work on AI systems that can compete with human-level intelligence.

The letter, which now has more than 13,500 signatures, expressed fear that the “dangerous race” to develop programs like OpenAI’s ChatGPT, Microsoft’s Bing AI chatbot and Alphabet’s Bard could have negative consequences if left unchecked, from widespread disinformation to the ceding of human jobs to machines.

But large swaths of the tech industry, including at least one of its biggest luminaries, are pushing back.

“I don’t think asking one particular group to pause solves the challenges,” Gates told Reuters on Monday. A pause would be difficult to enforce across a global industry, Gates added — though he agreed that the industry needs more research to “identify the tricky areas.”

That’s what makes the debate interesting, experts say: The open letter may cite some legitimate concerns, but its proposed solution seems impossible to achieve.

Here’s why, and what could happen next — from government regulations to any potential robot uprising.

What are Musk and Wozniak concerned about?

The open letter’s concerns are relatively straightforward: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

AI systems often come with built-in biases and potential privacy issues. They can also spread misinformation widely, especially when used maliciously.

And it’s easy to imagine companies trying to save money by replacing human jobs — from personal assistants to customer service representatives — with AI language systems.

Italy has already temporarily banned ChatGPT over privacy issues stemming from an OpenAI data breach. The U.K. government published regulation recommendations last week, and the European Consumer Organisation called on lawmakers across Europe to ramp up regulations, too.

In the U.S., some members of Congress have called for new laws to regulate AI technology. Last month, the Federal Trade Commission issued guidance for businesses developing such chatbots, implying that the federal government is keeping a close eye on AI systems that can be used by fraudsters.

And multiple state privacy laws passed last year aim to force companies to disclose when and how their AI products work, and give customers a chance to opt out of providing personal data for AI-automated decisions.

Those laws are currently active in California, Connecticut, Colorado, Utah and Virginia.

What do AI developers say?

At least one AI safety and research company isn’t worried yet: Current technologies don’t “pose an imminent concern,” San Francisco-based Anthropic wrote in a blog post last month.

Anthropic, which received a $400 million investment from Alphabet in February, does have its own AI chatbot. It noted in its blog post that future AI systems could become “much more powerful” over the next decade, and building guardrails now could “help reduce risks” down the road.

The problem: Nobody’s quite sure what those guardrails could or should look like, Anthropic wrote.

The open letter’s ability to prompt conversation around the topic is useful, a company spokesperson tells CNBC Make It. The spokesperson didn’t specify whether Anthropic would support a six-month pause.

In a Wednesday tweet, OpenAI CEO Sam Altman acknowledged that “an effective global regulatory framework including democratic governance” and “sufficient coordination” among leading artificial general intelligence (AGI) companies could help.

But Altman, whose Microsoft-funded company makes ChatGPT and helped develop Bing’s AI chatbot, didn’t specify what those policies might entail, or respond to CNBC Make It’s request for comment on the open letter.

Some researchers raise another issue: Pausing research could stifle progress in a fast-moving industry, and allow authoritarian countries developing their own AI systems to get ahead.

Highlighting AI’s potential threats could encourage bad actors to embrace the technology for nefarious purposes, says Richard Socher, an AI researcher and CEO of the AI-powered search engine startup You.com.

Exaggerating the immediacy of those threats also feeds unnecessary hysteria around the topic, Socher says. The open letter’s proposed pause is “impossible to enforce, and it tackles the problem on the wrong level,” he adds.

What happens now?

AI developers’ muted response to the open letter suggests that tech giants and startups alike are unlikely to voluntarily halt their work.

The letter’s call for increased government regulation appears more likely to be heeded, especially since lawmakers in the U.S. and Europe are already pushing for transparency from AI developers.

In the U.S., the FTC could also establish rules requiring AI developers to only train new systems with data sets that don’t include misinformation or implicit bias, and to increase testing of those products before and after they’re released to the public, according to a December advisory from law firm Alston & Bird.

Such efforts need to be in place before the tech advances any further, says Stuart Russell, a computer scientist at the University of California, Berkeley, and a leading AI researcher who co-signed the open letter.

A pause could also give tech companies more time to prove that their advanced AI systems don’t “present an undue risk,” Russell told CNN on Saturday.

Both sides do seem to agree on one thing: The worst-case scenarios of rapid AI development are worth preventing. In the short term, that means providing AI product users with transparency, and protecting them from scammers.

In the long term, that could mean keeping AI systems from surpassing human-level intelligence, and maintaining the ability to control them effectively.

“Once you start to make machines that are rivalling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” Gates told the BBC back in 2015. “It’s just an inevitability.”
