The administration of US President Joe Biden, and the president himself, has done its homework and used the available material to prepare the first significant regulation addressing the reliability and safety of AI technologies.
The president is now expected to issue an executive order, a binding act for manufacturers, but also for government users, of AI technologies. It would take up the earlier warnings about the dark side of AI and set out ways to avoid those dangers.
The task is by no means simple, but considerable time pressure obliges President Biden to issue the executive order soon.
“This is not an area that you can take years to get your head around or regulate. You’ve got to measure time in weeks. Speed is really important here”, said Jeff Zients, President Biden's chief of staff, last June.
As one of the key members of the administration conducting extensive consultations on the future regulation, Mr. Zients is personally driving the pace he calls for.
Tech giants embrace the rules
President Biden's recent meeting with the executives of the seven top US AI companies, which resulted in a non-binding agreement on controlling the risks of AI production, was a significant step towards the still-missing executive decision.
Under the agreement, the AI giants commit to a set of standards, including external testing of systems before their release and watermarking AI-generated content from the start to the finish of its production so that users are not misled.
The manufacturers also undertake to investigate the potential risks of their products, such as bias, discrimination and threats to privacy, and to report regularly to the public on the capabilities and limitations of their AI systems.
The leaders in AI - Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI - have been willing to work with President Biden and his team on these safeguards.
They themselves asked the administration to act on regulating AI-generated content, aware that their development plans are met with rising user scepticism that weighs on their business plans.
Profit will matter more than the voluntary agreement
The White House agreement is voluntary and non-binding, but it is a significant step that has demonstrated the willingness of the government and the leaders of the AI industry to move in the same direction.
It came as the culmination of extensive consultations that the administration conducted not only with industry representatives, but perhaps even more so with actors outside the tech giants: experts, academics, and civil-society organisations dealing with privacy protection and technology development.
However, the agreement alone cannot regulate every aspect of how AI platforms are produced and deployed, nor avert the numerous risks they carry.
In designing AI platforms, companies will be guided by their commercial and development interests, and the promise given to the president will not always take precedence.
They accepted the agreement with the US president partly because they are already, voluntarily, implementing some of the obligations they undertook before him.
Cooperating with the government's efforts to regulate AI technology also contributes to their image as socially responsible companies.
And ultimately, the eight guidelines they agreed with the US president contribute to building the new architecture that will result in binding regulation.
It is therefore better for them to take part in making the rules they will have to follow than to be left out of that process.
The 2024 elections will accelerate rulemaking
Enacting such robust regulation is now truly a matter of time. Since it is improbable that AI regulation could pass the US Congress in the near future, expectations are focused on President Biden, whose administration is drafting an executive order: a binding document that could actually lower the danger of AI technology being abused.
The administration is not starting from scratch in drafting this act. The Blueprint for an AI Bill of Rights, which the administration published last October, continues to serve as its most important basis, and the meeting between the US President and the leaders of the AI industry is merely the latest stage in those preparations.
The Blueprint serves as a framework for the Biden administration's AI policy, and its most significant starting points - particularly those aimed at narrowing the space for misinformation and abuse of AI - are likely to be included in the future executive order.
“There are established ways to make sure that the technology we build and deploy can benefit all of us while reducing harms for those who are already buffeted by a deeply unequal society. The time for studying is over now: the White House needs to issue an executive order and take action”, wrote Suresh Venkatasubramanian of Brown University, a former adviser in the Biden administration.
The urgency pressed by those who helped prepare Biden's regulation is partly motivated by a desire to secure his legacy as the initiator of one of the most significant global processes: protection from the harmful side effects of AI technology.
But their urgent appeal for the adoption of regulations is also driven by the upcoming presidential elections, in which their boss sees a chance for a new mandate.
These elections might be the first in history in which the risk of manipulation through AI-generated content is high enough to seriously affect the outcome.
If the White House manages to establish rules limiting the negative impacts of AI technologies swiftly, and those rules are implemented in time, before the election campaign, it will be a significant catch-up for regulation that still lags far behind technological advancement in preventing the detrimental side effects of AI.