Summary of "White House orders federal agencies to implement AI safeguards, councils"

  • Source: therecord.media

    Biden Administration Unveils AI Requirements for Federal Agencies

    The Biden administration has released a new policy aimed at strengthening the governance, safety, and security of artificial intelligence (AI) systems used by federal agencies. The policy addresses a range of concerns associated with AI, including protecting Americans' privacy, advancing equity and civil rights, safeguarding consumers and workers, promoting innovation and competition, and maintaining American leadership in the field.

    Safeguarding Against Algorithmic Discrimination

    By December 1, 2024, federal agencies are required to implement safeguards when using AI systems. These safeguards include testing and monitoring the impact of AI on the public and actively checking for algorithmic discrimination; a minimal example of such a check is sketched after the list below. Algorithmic bias has been a significant issue across many sectors, including healthcare, housing, education, and criminal justice.

    • The Transportation Security Administration (TSA) must provide travelers the ability to opt out of facial recognition systems used at airports without any delays or inconveniences.
    • In the federal healthcare system, human oversight is mandatory for critical diagnostic decisions made with the aid of AI tools to avoid disparities in healthcare access.
    • When AI is used to detect fraud in government services, human oversight of impactful decisions is required, and affected individuals must have the opportunity to seek remedies for AI-related harms.
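
    The policy describes these safeguards at a high level and leaves the choice of specific metrics and thresholds to each agency. As one illustration only, the following Python sketch (with hypothetical data, function names, and threshold) shows a demographic-parity check that an agency could run over a log of AI-assisted decisions:

    ```python
    from collections import defaultdict

    def selection_rates(decisions):
        """Approval rate per demographic group.

        `decisions` is an iterable of (group, approved) pairs, e.g. drawn from
        a log of AI-assisted benefit or fraud determinations.
        """
        totals = defaultdict(int)
        approvals = defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            if approved:
                approvals[group] += 1
        return {group: approvals[group] / totals[group] for group in totals}

    def demographic_parity_gap(decisions):
        """Largest difference in approval rate between any two groups."""
        rates = selection_rates(decisions)
        return max(rates.values()) - min(rates.values())

    if __name__ == "__main__":
        # Hypothetical audit log of (demographic group, decision) pairs.
        log = [("A", True), ("A", True), ("A", False),
               ("B", True), ("B", False), ("B", False)]
        # A gap above an agency-chosen threshold would trigger human review.
        print(f"Demographic parity gap: {demographic_parity_gap(log):.2f}")
    ```

    A gap near zero means the system approves cases at similar rates across groups; a large gap is one signal, though not proof, of the kind of algorithmic discrimination the policy asks agencies to monitor.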

    Transparency and Accountability for AI Systems

    The new policy includes measures to promote transparency about the use of AI by federal agencies. Agencies must publicly release an annual inventory of the AI systems they employ, identifying any potential impacts on rights or safety. Additionally, the Biden administration plans to hire 100 AI professionals by the summer of 2024 and allocate $5 million in the fiscal year 2025 budget to expand the General Services Administration's government-wide AI training program.

    Establishing AI Governance Structures

    Federal agencies are required to designate chief AI officers and establish AI governance boards chaired by deputy secretaries. The Office of Management and Budget (OMB) has been convening a Chief AI Officer Council since December. So far, however, only a few agencies, including the Departments of Defense, Veterans Affairs, Housing and Urban Development, and State, have established these governance bodies. All agencies must comply with the requirement by May 27, 2024.

    Addressing the Lack of AI Regulation

    Experts have expressed concerns about the lack of comprehensive federal regulation for AI in the United States, particularly compared with the efforts of the European Union. In the absence of a federal AI law, several states have passed their own laws governing AI use, creating compliance challenges for smaller AI vendors and startups.

    • Ilia Kolochenko, CEO at ImmuniWeb, argues that overarching AI legislation would benefit sustainable innovation and the long-term competitiveness of US tech firms in the global AI market.
    • Clar Rosso, CEO of the cybersecurity non-profit ISC2, acknowledges the Biden administration's actions as a step in the right direction but emphasizes the need for collaboration, documentation of successes and failures, and high-quality AI talent.

    Addressing the AI Skills Gap

    A key element of the Biden administration's policy is the commitment to hire 100 AI professionals. According to Clar Rosso, ISC2 members have reported "extreme concern" over the past year about the AI skills gap. Closing that gap is crucial for the effective implementation and governance of AI systems within federal agencies.

    The Need for Continued Efforts and Collaboration

    While the Biden administration's policy is a positive step towards addressing various concerns surrounding AI, experts emphasize the need for continued efforts and collaboration. Key areas that require further attention include:

    • Developing and implementing robust tests or benchmarks to assess fairness and detect bias in AI/LLM models before they are released (see the sketch after this list).
    • Determining the appropriate frequency and methods for applying these tests or benchmarks.
    • Fostering collaboration among stakeholders, including policymakers, technology experts, civil rights organizations, and the public, to ensure the responsible and ethical development and deployment of AI systems.
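
    The policy does not name a specific fairness benchmark or metric, so the following is only a sketch of what such a pre-release gate might look like: a pytest-style test (with an illustrative stand-in for the model's benchmark results and an assumed 10-percentage-point threshold) that fails, and thereby blocks release, when true positive rates diverge too much between groups.

    ```python
    # A hypothetical pre-release fairness gate, runnable with pytest.
    # The benchmark results, group names, and 0.10 threshold below are
    # illustrative stand-ins; the policy does not prescribe any of them.

    def true_positive_rate(predictions, labels):
        """Fraction of genuinely positive cases the model flags correctly."""
        positives = [p for p, label in zip(predictions, labels) if label]
        if not positives:
            return 0.0
        return sum(1 for p in positives if p) / len(positives)

    def equal_opportunity_gap(results_by_group):
        """Largest difference in true positive rate between any two groups.

        `results_by_group` maps a group name to (predictions, labels) lists.
        """
        rates = [true_positive_rate(preds, labels)
                 for preds, labels in results_by_group.values()]
        return max(rates) - min(rates)

    def test_model_meets_fairness_threshold():
        # Stand-in for running the candidate model over a labeled benchmark.
        results = {
            "group_a": ([True, True, False, True], [True, True, True, True]),
            "group_b": ([True, True, True, False], [True, True, True, True]),
        }
        # Release gate: block deployment if the gap exceeds 10 percentage points.
        assert equal_opportunity_gap(results) <= 0.10
    ```

    Equal opportunity (comparing true positive rates) is only one candidate metric; which metrics, benchmarks, thresholds, and testing cadence should apply is exactly the open question the list above raises.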
