
BSA, a tech industry group backed in part by Microsoft, is calling for rules governing the use of artificial intelligence to be included in national privacy legislation, according to a document released on Monday.

BSA represents business software companies such as Adobe, IBM and Oracle. Microsoft is one of the leaders in AI thanks to its recent investment in OpenAI, the creator of the generative AI chatbot ChatGPT. But Google, the other key U.S. player in advanced AI at the moment, is not a member.

The push comes as many members of Congress, including Senate Majority Leader Chuck Schumer, D-N.Y., have expressed interest and urgency in making sure regulation keeps pace with the quick development of AI technology.

The group is advocating for four key protections:

  • Congress should make clear requirements for when companies must evaluate the designs or impact of AI.
  • Those requirements should kick in when AI is used to make “consequential decisions,” which Congress should also define.
  • Congress should designate an existing federal agency to review company certifications of compliance with the rules.
  • Companies should be required to develop risk-management programs for high-risk AI.

“We’re an industry group that wants Congress to pass this legislation,” said Craig Albright, vice president of U.S. government relations at BSA. “So we’re trying to bring more attention to this opportunity. We feel it just hasn’t gotten as much attention as it could or should.”

“It’s not meant to be the answer to every question about AI, but it’s an important answer to an important question about AI that Congress can get done,” Albright said.

The introduction of accessible advanced AI tools like ChatGPT has accelerated the push for guardrails on the technology. While the U.S. has created a voluntary risk management framework, many advocates have pushed for even stronger protections. In the meantime, Europe is working to finalize its AI Act, creating protections around high-risk AI.

Albright said as Europe and China push forward with frameworks to regulate and foster new technologies, U.S. policymakers need to ask themselves whether digital transformation is “an important part of an economic agenda.”

“If it is, we should have a national agenda for digital transformation,” he said, which would include rules around AI, national privacy standards and robust cybersecurity policy.

In messaging outlining suggestions for Congress, which BSA shared with CNBC, the group suggested that the American Data Privacy and Protection Act, the bipartisan privacy bill that passed out of the House Energy and Commerce Committee last Congress, is the right vehicle for new AI rules. Though the bill still faces a steep road ahead to becoming law, BSA said it already has the right framework for the sort of national AI guardrails the government should put in place.

BSA hopes that when the ADPPA is reintroduced, as many anticipate, it will contain new language to regulate AI. Albright said the group has been in contact with the House Energy and Commerce Committee about its suggestions and that the committee has had an "open door" to many different voices.

A representative for the House E&C did not immediately respond to a request for comment.

While ADPPA still faces obstacles to becoming law, Albright said that passing any piece of legislation involves a heavy lift.

“What we’re saying is, this is available. This is something that can reach agreement, that can be bipartisan,” Albright said. “And so our hope is that however they’re going to legislate, this will be a part of it.”
