Last week, the White House issued an update on President Biden’s October 30, 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “AI EO” or “EO”). The update detailed the progress made on the EO’s directives, including, among others, using the Defense Production Act to require AI companies to make specific reports on their AI systems to the government and proposing a rule that would require cloud companies to report foreign use of their services to train AI models and to verify the identities of foreign customers. As efforts to address AI heat up at the federal and state levels, it is a good time to take stock of what has happened and what is around the corner.
First, a refresh on what the AI EO covers. The EO espouses eight core principles: 1) ensuring that AI is safe and secure; 2) promoting responsible innovation, competition, and collaboration; 3) ensuring responsible development and use of AI; 4) ensuring equity and civil rights in AI policies; 5) protecting the interests of the American people; 6) protecting privacy and civil liberties; 7) internally regulating the federal government’s use of AI; and 8) working towards global societal, economic, and technological progress. These principles address a variety of concerns and opportunities surrounding cybersecurity risks, national security dangers, AI-generated content, investment in AI education, novel intellectual property questions, AI-related bias and discrimination, and data privacy, among many more.
The AI EO sets forth directives and action items to be implemented on a rolling schedule of deadlines. Several have already been completed:
- President Biden used the Defense Production Act to compel companies developing powerful AI systems to report specific information on the AI systems, their training, and safety test results to the Department of Commerce (DOC).
- The DOC proposed a draft rule, open for comment until April 2024, that would require U.S. companies that provide cloud services to alert the government when foreign companies or clients use their resources to train AI models. The rule would further require the companies to make sure that foreign resellers of their products verify the identity of any foreign customers and would give the DOC the power to restrict access to U.S. cloud services by certain foreign nationals or people in certain foreign jurisdictions.
- Nine agencies with authority over critical infrastructure, like the Departments of Defense (DOD), Transportation (DOT), Treasury, and Health and Human Services (HHS), conducted risk assessments on the use of AI in various infrastructure sectors, which can form the basis for further federal action.
- The National Science Foundation (NSF) launched a pilot program working towards providing a national infrastructure for delivering computing power, data, software, access to open and proprietary AI models and other AI training resources to researchers and students (the National AI Research Resource), launched an initiative to help fund AI education opportunities (EducateAI), and funded regional-scale innovation systems focused on AI (NSF Engines).
- An AI and Tech Talent Task Force set up under the AI EO launched an AI Talent Surge to accelerate hiring AI professionals across the federal government.
- The Office of Management and Budget (OMB) convened an interagency council to coordinate federal agencies’ use of AI.
- In December, the comment period closed on the OMB draft guidance that, when final, will direct federal agencies to each designate a Chief AI Officer and outline governance and risk management procedures for new or existing AI that is developed, used, or procured by or on behalf of covered agencies.
- On February 2, 2024, the period ended for submitting responses to the National Institute of Standards and Technology’s (NIST’s) Request for Information (RFI) related to AI red-teaming (controlled and structured efforts to find flaws and vulnerabilities in AI systems), generative AI risk management, and reducing the risk of synthetic content (text or media modified or generated by algorithms/AI), among others, to support its efforts to develop guidance pursuant to the EO.
- NIST also established an Artificial Intelligence Safety Institute and a related consortium for collaborative efforts in various, specific areas related to AI.
What’s Coming Up?
We expect to see a variety of guidance, initiatives, and recommendations from government agencies in the coming months. These will include best practices for financial institutions to manage cybersecurity risks and for healthcare providers to capture errors resulting from AI use, guidance for federal contractors on using AI in hiring, and changes to visa processes to facilitate recruiting and retaining AI talent, among many others related to AI safety, equity and nondiscrimination, innovation, education, privacy and more. The timeline proposed by the AI EO to meet its directives is set forth here.
Of particular note are directives to address the cybersecurity and discrimination risks posed by AI:
AI Safety and Security (including National Security)
- By March 2024, the Treasury Department will issue a report on best practices for financial institutions to manage AI-specific cybersecurity risks.
- The DOC will develop guidance on tools and practices to detect and authenticate synthetic content by April 2024.
- NIST will develop guidelines, best practices and initiatives for AI safety and security.
- The Department of Energy (DOE) will develop tools to evaluate AI capabilities that may present nuclear, nonproliferation, biological, chemical, critical infrastructure, or energy-security threats.
- The Department of Homeland Security (DHS) will
- develop pilot programs to use AI to improve cyber defenses, along with the DOD.
- report to the President on AI and Chemical, Biological, Radiological and Nuclear (CBRN) threats.
- develop a program to mitigate AI-related intellectual property risks.
Equity, Civil Rights, Nondiscrimination, and Public Benefits
- The Department of Labor (DOL) will, by October 2024, publish guidance for federal contractors on nondiscrimination in hiring involving AI.
- The HHS will publish a plan to assess whether AI systems can achieve equitable and just outcomes.
- The Department of Agriculture will issue guidance on the use of AI in implementing the Department’s public benefits programs and for providing customer support for those programs.
- The Department of Housing and Urban Development will issue guidance on using AI to screen tenants.
- The Attorney General will, by October 2024,
- submit a report to the President addressing the use of AI in the criminal justice system and
- reassess the government’s ability to investigate cases where law enforcement’s use of AI resulted in deprivation of people’s rights under the Constitution or US laws, including by improving and increasing training for federal law enforcement officers and prosecutors.
Other directives include:
Recruiting and Retaining AI Talent
- Various departments and agencies will take action to attract and retain talent in AI and other critical emerging technologies, including:
- the Department of State (DOS) and the DHS making changes to visa processes to make it easier for students and professionals in AI to come to and stay in the US;
- the Office of Personnel Management (OPM) issuing guidance for pay flexibilities or incentive pay programs for AI positions; and
- an interagency group sharing best practices for recruiting law enforcement professionals to train others on responsible use of AI.
Innovation and Education
- The DOE will establish a pilot program to enhance existing training programs for scientists and train new researchers.
- The United States Patent and Trademark Office will, by February 2024,
- publish guidance on the intersection of AI and IP, addressing the use of AI and inventorship along with other considerations, and
- recommend potential executive actions related to copyright and AI to the President.
- The Department of Education will, by October 2024, develop guidance on safe, responsible, and nondiscriminatory use of AI in education.
- The National Science Foundation will
- fund the creation of a Research Coordination Network to advance privacy research and develop, deploy and scale privacy enhancing technologies (technology used to mitigate privacy risks, like encryption or secure multiparty computation) and
- work with federal agencies to incorporate privacy enhancing technologies into their operations.
Internal Regulation of the Federal Government
- The OMB will
- issue guidance relating to AI innovation and risk management in the Federal Government.
- develop a method for agencies to track and assess their ability to adopt AI, manage AI-related risks, and comply with federal AI policy.
- issue guidance to agencies for labeling and authenticating official US government digital content.
- The OPM will
- establish guidance on hiring AI, data, and technology talent.
- develop guidance for the Federal Government’s use of generative AI.
- The AI and Tech Talent Task Force will track AI capacity in the Federal Government.
- The DOE will work towards the development of AI tools to mitigate climate change risks.
- The DOL will analyze how agencies can support workers being displaced by AI and publish best practices on how employers can mitigate the potential harm AI has on employees.
- The ARPA-I (Advanced Research Projects Agency–Infrastructure), an agency within the Department of Transportation, will explore AI opportunities and challenges in transportation, including autonomous vehicles.
- The DOC will seek input from the private sector, academia, civil society, and other stakeholders on dual-use foundation models, defined by the EO as AI models that are trained on broad data, generally use self-supervision, and contain tens of billions of parameters, among other characteristics.
- The HHS will develop strategies for future rulemaking and guidance on regulating AI in drug development and will develop recommendations, best practices and informal guidance on identifying and capturing clinical errors resulting from AI use for appropriate stakeholders, including healthcare providers.
These are only some of the various directives set forth in the EO meant to be implemented over the course of the next year and a half. However, in recent months, things have been happening outside the orbit of this EO too. The HHS, for example, established an AI office in early 2021, well before this EO, and has been publishing AI use cases as well. The district court in Thaler v. Perlmutter, 2023 U.S. Dist. LEXIS 145823 (D.D.C. 2023) considered the issue of AI authorship in a copyright case, determining that human authorship is central to copyright and that works created entirely autonomously by AI cannot be copyrighted. The AI EO, therefore, will likely serve to supplement actions and considerations already in play in the AI landscape.
While the EO deals with AI at a federal level, there has been executive action at the state level too, with the governors of Virginia, California, Maryland, Pennsylvania, Oregon, Wisconsin, Oklahoma, New Jersey, and Washington each signing their own executive orders on artificial intelligence. The Oregon, Wisconsin, Oklahoma, and New Jersey executive orders simply create a task force or council to make recommendations on AI policy. Maryland created an AI Subcabinet of the Governor’s Executive Council to make recommendations as well, but also tasked the subcabinet with finding and offering AI training programs for state workers and developing and implementing comprehensive action plans to operationalize the state’s AI principles. Washington, Pennsylvania, and California’s executive orders all focus on generative AI, with Pennsylvania’s also creating a governing board to make recommendations on generative AI policy. Virginia’s executive order differs slightly from the rest by setting out standards for the responsible, ethical and transparent use of AI by government agencies and acceptable technological standards for the use of AI by those agencies, as well as guidelines for the use of AI in education. The Virginia order also creates a task force to provide ongoing recommendations on the implementation of those standards.
If you have legal questions regarding artificial intelligence or the AI Executive Order, please reach out to your specific Moore & Van Allen (MVA) contact or to a member of the MVA Privacy & Data Security group.
About Data Points: Privacy & Data Security Blog
The technology and regulatory landscape is rapidly changing, thus impacting the manner in which companies across all industries operate, specifically in the ways they collect, use and secure confidential data. We provide transparent and cutting-edge insight on critical issues and dynamics. Our team informs business decision-makers about the information they must protect, and what to do if/when security is breached.