On Wednesday, top AI firms announced plans to establish an industry-led group to set safety standards for the rapidly evolving technology, leapfrogging Washington lawmakers who are still debating whether the United States government needs its own AI regulator.
In response to rising calls for regulatory oversight, ChatGPT developer OpenAI, along with Microsoft, Google, and Anthropic, announced the Frontier Model Forum, a coalition that will draw on member companies’ expertise to develop technical evaluations and benchmarks and to promote best practices and standards.
“It is critical that AI companies — particularly those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” said Anna Makanju, OpenAI’s vice president of global affairs, in a statement.
On Friday, the firms had voluntarily pledged to the White House to submit their systems to independent testing and to build tools that notify the public when an image or video has been generated by artificial intelligence.
The forum will focus on what OpenAI has called “frontier AI”: highly capable AI and machine-learning models that could pose “severe risks to public safety.” The companies argue that such models present a distinct regulatory challenge because “dangerous capabilities can arise unexpectedly,” making it difficult to prevent the models from being hijacked.
Though it has only four members at the moment, the Frontier Model Forum says it is open to new members. To qualify, a company must be developing and deploying frontier AI models and demonstrate a “strong commitment to frontier model safety.”
While the Frontier Model Forum’s stated goal is to show that the AI industry takes safety concerns seriously, it also signals Big Tech’s aim to head off impending regulation through voluntary initiatives, and perhaps even to begin writing the rules itself.