Microsoft urges AI rules to minimize risk

Microsoft endorsed a series of artificial intelligence regulations on Thursday as the company grapples with concerns from governments around the world about the risks of the rapidly evolving technology.

Microsoft, which has pledged to incorporate artificial intelligence into many of its products, has proposed regulations, including a requirement that systems used in critical infrastructure can be fully shut down or slowed down, similar to an emergency braking system on a train. The company has also called for laws to clarify when additional legal obligations apply to an AI system and for labels that make it clear when an image or video was produced by a computer.

“Companies need to step up,” Brad Smith, Microsoft’s president, said in an interview about the regulatory push. “The government needs to act faster.”

The regulatory push comes amid a boom in AI, with the launch of the ChatGPT chatbot in November generating a wave of interest. Since then, companies like Microsoft and Google’s parent company Alphabet have rushed to incorporate the technology into their products. This has fueled concern that companies are sacrificing safety in the race to reach the next big thing before their competitors.

Lawmakers have publicly expressed concerns that such AI products, which can generate text and images on their own, could create a flood of misinformation, be used by criminals and put people out of work. Regulators in Washington have pledged to be vigilant against scammers using AI and against instances in which the systems perpetuate discrimination or make decisions that violate the law.

In response to this scrutiny, AI developers have increasingly called for some of the responsibility for policing the technology to be transferred to the government. Sam Altman, chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government should regulate the technology.

The move echoes calls for new privacy or social media laws from internet companies like Google and Facebook parent Meta. In the United States, lawmakers have been slow to act on such calls, with few new federal rules on privacy or social media passed in recent years.

In the interview, Smith said that Microsoft wasn’t trying to shrug off responsibility for managing the new technology, because it was offering specific ideas and committing to carry out some of them, whether or not the government acted.

“There is not an iota of abdication of responsibility,” he said.

He endorsed the idea, supported by Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” AI models.

“That means you notify the government when you start testing,” Smith said. “You have to share the results with the government. Even when it is licensed for deployment, you have a duty to continue to monitor it and to report to the government if unexpected issues arise.”

Microsoft, which earned more than $22 billion from its cloud computing business in the first quarter, also said such high-risk systems should be allowed to operate only in “licensed AI data centers.” Smith acknowledged that the company would not be “badly positioned” to offer such services, but said that many US competitors could also provide them.

Microsoft added that governments should designate certain AI systems used in critical infrastructure as “high risk” and require that they have a “safety brake.” Smith compared this feature to “the braking systems engineers have long incorporated into other technologies such as elevators, school buses and high-speed trains.”

In some sensitive cases, Microsoft said, companies that provide AI systems should be required to know certain information about their customers. And to protect consumers from fraud, content created by AI should carry a special label, the company said.

Smith said companies should bear the legal “liability” for damages associated with AI. In some cases, he said, the responsible party might be the developer of an application like Microsoft’s Bing search engine, which uses someone else’s underlying AI technology. Cloud companies could be responsible for complying with security regulations and other rules, he added.

“We don’t necessarily have the best information or the best answer, or we may not be the most credible speaker,” Smith said. “But, you know, right now, especially in Washington DC, people are looking for ideas.”
