Microsoft CEO Satya Nadella is optimistic about the future of artificial intelligence, as long as tech companies, politicians, law enforcement and society at large can find some common ground on its limits.

“It is about global, societal convergence on certain norms,” Nadella said in an interview with “NBC Nightly News” anchor Lester Holt. “Especially when you have law and law enforcement and tech platforms that can come together, I think we can govern a lot more than we give ourselves credit for.”


Speaking Friday in New York, Nadella touched on a wide variety of topics related to AI, including his own company’s challenges and the broader issue of AI-generated fake images. When asked about the recent spread of nonconsensual, sexually explicit deepfakes of Taylor Swift, Nadella called the situation “alarming and terrible.”

A lot is being done, and a lot remains to be done, he said, to place guardrails around this type of technology.

“We have to act,” Nadella said. “And quite frankly all of us in the tech platform, irrespective of what your standing on any particular issue is, I think we all benefit when the online world is a safe world.”

Nadella, who became CEO of the tech giant in 2014 and its chairman in 2021, has overseen a return to Microsoft’s former glory days. Once best known for selling software, it has transitioned into a cloud computing and business services giant, competing with Apple for the title of the world’s most valuable public company.

Microsoft has also invested heavily in AI. In January, it announced a “multiyear, multibillion dollar investment” in OpenAI, which created ChatGPT and has been a leader in generative AI that can take simple text prompts and create various types of media. That deal prompted an antitrust probe by the U.S. Federal Trade Commission and has drawn similar scrutiny from E.U. and U.K. officials.

Concerns about AI have also been raised in relation to the upcoming U.S. election, as well as the many elections around the world this year. A string of recent deepfakes has already fooled some internet users by using digitally replicated images and voices of celebrities, including Swift, Scarlett Johansson and the popular influencer MrBeast, to make it look as if they’re advertising certain products.

Efforts to spread disinformation online are not new. But as the country enters the first major election cycle with the potential to be heavily influenced by the capabilities of generative AI, some worry about how prepared tech platforms are to detect and properly label the anticipated influx of political deepfakes.

Nadella said the technology is there when it comes to tackling disinformation and misinformation.

“Then the question, again, comes back to: How do we build consensus between parties, candidates, and the norms around what is acceptable or not acceptable?” he said.

Nadella said social media platforms could approach the AI challenge the same way they’ve dealt with past problems. He pointed to the cooperation between tech companies and law enforcement in cracking down on botnets, which are networks of malware-infected computer devices used by malicious actors to execute scams and cyberattacks.

Like many of its fellow tech giants, Microsoft launched its own AI chatbot last year, originally known as Bing Chat before being rebranded as Copilot. Soon after its initial rollout in February 2023, users noticed the chatbot exhibited some belligerent tendencies — such as threatening an Associated Press reporter and disparaging their physical appearance.

To Nadella, generative AI is more a tool than a danger. He said he views the technology as an assistant to human workflows, which is why the chatbot is named “Copilot” and not “Autopilot.”

“This is not about replacing the human in the loop,” he said. “In fact, it’s about empowering the human.”

Still, workers across creative industries have expressed concerns about generative AI technologies borrowing from human talent without permission.

In November, a group of artists filed an amended copyright lawsuit against several companies that specialize in AI-generated art. Among them was Midjourney, which was alleged to have used the work of thousands of artists, living and dead, to train its AI program.

The New York Times also filed a federal lawsuit in December against Microsoft and OpenAI, alleging the companies used its copyrighted articles to train their chatbots and seeking billions of dollars in damages.

“I think one of the things that is going to be very, very important is both what is the copyright protection, as well as what is fair use, in a world where there is transformative new technology,” Nadella told Holt when asked about the lawsuit. “I think that that’s really where the copyright laws have to essentially now be interpreted for what is a new transformation technology.”

Source: NBCNews.com
