
GAO-24-107292 (2024-03-11)

Science, Technology Assessment, and Analytics



SCIENCE & TECH SPOTLIGHT:
COMBATING DEEPFAKES
GAO-24-107292, March 2024


Malicious use of deepfakes could erode trust in elections, spread disinformation, undermine national security, and empower harassers.




- Current deepfake detection technologies have limited effectiveness in real-world scenarios.

- Watermarking and other authentication technologies may slow the spread of disinformation but present challenges.

- Identifying deepfakes is not by itself sufficient to prevent abuses. It may not stop the spread of disinformation, even after the media is identified as a deepfake.




What is it?

Deepfakes  are videos, audio, or images that have been
manipulated using artificial intelligence (AI), often to create,
replace, or alter faces or synthesize speech. They can seem
authentic to the human eye and ear. They have been
maliciously used, for example, to try to influence elections and to
create non-consensual pornography. To combat such abuses,
technologies can be used to detect deepfakes or enable
authentication of genuine media.

Detection technologies aim to identify fake media without needing to compare it to the original, unaltered media. These technologies typically use a form of AI known as machine learning. The models are trained on data from known real and fake media. Methods include looking for (1) facial or vocal inconsistencies, (2) evidence of the deepfake generation process, or (3) color abnormalities.
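The machine-learning step described above can be sketched in miniature. The following toy detector scores images by a single color-abnormality feature (red/blue channel imbalance) and learns a decision threshold from labeled examples; the feature, the threshold rule, and the tiny "dataset" are illustrative assumptions, not any specific published detection method.

```python
# Toy sketch of deepfake detection via a color-abnormality feature.
# Real detectors learn thousands of features; this uses just one.

def channel_imbalance(image):
    """Absolute difference between the average red and average blue
    channel values; an unusually large imbalance is treated as suspicious."""
    reds = [px[0] for px in image]
    blues = [px[2] for px in image]
    return abs(sum(reds) / len(reds) - sum(blues) / len(blues))

def train_threshold(real_images, fake_images):
    """Pick the midpoint between the two classes' mean scores --
    a stand-in for the training step described in the text."""
    real_avg = sum(channel_imbalance(i) for i in real_images) / len(real_images)
    fake_avg = sum(channel_imbalance(i) for i in fake_images) / len(fake_images)
    return (real_avg + fake_avg) / 2

def is_deepfake(image, threshold):
    return channel_imbalance(image) > threshold

# Tiny labeled "dataset": each image is a list of (R, G, B) pixels.
real = [[(100, 100, 100), (110, 108, 109)], [(90, 95, 92), (100, 99, 101)]]
fake = [[(200, 100, 40), (210, 105, 50)], [(180, 90, 30), (190, 95, 35)]]

t = train_threshold(real, fake)
print(is_deepfake([(205, 100, 45), (195, 98, 42)], t))  # True: flags the odd coloring
```

A single hand-picked feature like this generalizes poorly, which is one reason the report notes that current detectors have limited effectiveness in real-world scenarios.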


Authentication technologies are designed to be embedded during the creation of a piece of media. These technologies aim to either prove authenticity or prove that a specific original piece of media has been altered. They include:

- Digital watermarks can be embedded in a piece of media, which can help detect subsequent deepfakes. One form of watermarking adds pixel or audio patterns that are detectable by a computer but imperceptible to humans. The patterns disappear in any areas that are modified, enabling the owner to prove that the media is an altered version of the original. Another form of watermarking adds features that cause any deepfake made using the media to look or sound unrealistic.

- Metadata, which describe the characteristics of data in a piece of media, can be embedded in a way that is cryptographically secure. Missing or incomplete metadata may indicate that a piece of media has been altered.

- Blockchain. Uploading media and metadata to a public blockchain creates a relatively secure version that cannot be altered without the change being obvious to other users. Anyone could then compare a file and its metadata to the blockchain version to prove or disprove authenticity.
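The first watermarking form described above, an imperceptible pattern that disappears wherever the media is modified, can be illustrated with a deliberately simplified scheme. This sketch forces the least-significant bit of each grayscale pixel to 1 (a change of at most 1/255 in brightness) and then reports where that bit is gone; the 1-bit scheme and the sample pixel values are assumptions for illustration, far weaker than real watermarks.

```python
# Sketch of tamper localization via a least-significant-bit watermark.
WATERMARK_BIT = 1  # assumed pattern: every watermarked pixel has its low bit set

def embed_watermark(image):
    """Set the least-significant bit of each pixel value to 1 --
    imperceptible to humans, detectable by a computer."""
    return [value | WATERMARK_BIT for value in image]

def tampered_regions(image):
    """Return the pixel indices where the watermark bit is gone,
    i.e. where the media was altered after watermarking."""
    return [i for i, value in enumerate(image) if value & WATERMARK_BIT == 0]

original = [120, 64, 200, 33, 90, 181]  # grayscale pixel values
marked = embed_watermark(original)

# An editor (or a deepfake generator) overwrites two pixels.
altered = list(marked)
altered[2] = 40
altered[3] = 250

print(tampered_regions(marked))   # [] -- watermark intact
print(tampered_regions(altered))  # [2, 3] -- the modified pixels
```

A real edit could by chance leave odd pixel values behind, so production watermarks spread a cryptographic pattern across many bits and regions rather than relying on one bit per pixel, which is part of the challenge the report attributes to these technologies.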


Sources: GAO analysis (data). | GAO-24-107292




