Lamenting the lack of regulatory action on social media before it could wreak widespread social havoc, the Biden administration has promised to move fast and get things right with the burgeoning AI industry. A new executive order shows promise in tackling the multiple facets of harm AI could cause, but many elements are kicked down the road, either for federal agencies to develop or for Congress to act on.
The federal government only has so much power to dictate terms via executive order, particularly where private industry is involved. But at least a few of the order’s terms can go into effect immediately under the Defense Production Act, which gives the government authority over private-sector production in the interest of national defense.
“Far-reaching” executive order addresses consumer rights
Many of the executive order’s terms direct a particular agency to develop tools or guidance related to AI issues. For example, the National Science Foundation will be heading up a network to develop and promote tools to improve cryptography in the face of AI advancements. Assorted federal agencies will also be looking at new restrictions on accessing the stashes of personal information that data brokers hoard, out of concern about how AI might be misapplied or manipulated in connection with these troves.
The Biden administration has made civil rights a central focus from the beginning, and that emphasis appears here in the form of new training for the Department of Justice (and other agencies) on handling AI issues. The real estate market may also see new guidelines for the use of AI in screening tenants and processing applications.
Other protective programs appear to be forthcoming from the Department of Labor and the Department of Health and Human Services, among others. But the administration has not lost sight of the technological leaps forward that AI promises (or the economic windfall for pioneers in the space). To that end, the executive order also calls for the establishment of a National AI Research Resource for developers and assistance for entrepreneurs, particularly smaller businesses.
AI security and threat concerns at the forefront
Something likely to see a quick rollout is a requirement for leaders in the AI field (such as OpenAI and Google) to share information about their training and security tests with the federal government. This should apply to (but not be limited to) AI systems that could impact critical infrastructure or play a role in biological or chemical development.
The ability of AI to supercharge deepfakes also appears to be a primary concern of the administration. Most deepfakes can still be spotted by an untrained eye, but the learning algorithms that underpin their creation are advancing, and more realistic fake video and audio will follow. A new program from the Department of Commerce will seek to develop watermarks to be applied to AI-generated content, currently planned to be voluntary for private companies but required for official government materials.
The executive order is timely: even the relatively limited ChatGPT and its competitors have been broadly adopted across all types of organizations and are already causing harm. Oversharing of sensitive company secrets and personal information with AI is incredibly common, and scammers have moved into the space in full force with potentially malicious tools (such as alleged “AI-generated content detectors” that do not actually work reliably).