Highlights: The White House issued draft rules today that would require federal agencies to evaluate and constantly monitor algorithms used in health care, law enforcement, and housing for potential discrimination or other harmful effects on human rights.

Once in effect, the rules could force changes in US government activity dependent on AI, such as the FBI’s use of face recognition technology, which has been criticized for not taking steps called for by Congress to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.

  • PsychedSy@sh.itjust.works · 8 months ago

    This will entrench big tech in the federal government, but I’m not too worried about limits on the government.

    • bioemerl@kbin.social · 8 months ago
      Yeah, the only concern I have so far is the use of the Defense Production Act to require foundation model developers to send red-team results to the feds. That hints they could ban the release of models in the future.