Congressional Research Service
Informing the legislative debate since 1914


                                                                                          Updated April 2, 2025

Generative Artificial Intelligence: Overview, Issues, and Considerations for Congress


Generative artificial intelligence (GenAI) refers to AI
models, in particular those that use machine learning (ML)
and are trained on large volumes of data, that are able to
generate new content. In contrast, other AI models may
have a primary goal of classifying data, such as facial
recognition image data, or making decisions, such as those
used in automated vehicles. GenAI, when prompted (often
by a user inputting text), can create various outputs,
including text, images, videos, computer code, or music.
The public release of many GenAI tools, and the race by
companies to develop ever-more powerful AI models, have
generated widespread discussion of their capabilities,
potential concerns with their use, and debates about their
governance and regulation. This CRS In Focus describes
the development and uses of GenAI, concerns raised by the
use of GenAI tools, and considerations for Congress.

Background
AI can generally be thought of as computerized systems
that work and react in ways commonly considered to
require human intelligence, such as learning, solving
problems, and achieving goals under uncertain and varying
conditions, with varying levels of autonomy. AI can
encompass a range of technologies, methodologies, and
application areas, such as natural language processing,
robotics, and facial recognition.
The AI technologies underpinning many GenAI tools are
the result of decades of research. For example, recurrent
neural networks (RNNs), a type of ML loosely modeled
after the human brain that detects patterns in sequential
data, underwent much development and improvement in the
1980s-1990s. RNNs can generate text, but they have limited
ability to retain contextual information across large strings
of words, are slow to train, and are not easily scaled up by
increasing computational power or training data size.
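The sequential bottleneck described above can be sketched in a few lines of Python. This is an illustrative toy, not a trained model: the weights are arbitrary numbers, and a real RNN would use weight matrices learned from data.

```python
import math

def rnn_step(h_prev, x, w_h=0.5, w_x=1.0):
    # tanh squashes the combined signal; repeated squashing is one reason
    # information from early inputs fades over long sequences
    return math.tanh(w_h * h_prev + w_x * x)

def run_rnn(sequence):
    h = 0.0
    for x in sequence:  # strictly sequential: step t needs step t-1,
        h = rnn_step(h, x)  # so the loop cannot be parallelized
    return h
```

Because each hidden state depends on the previous one, the computation cannot be spread across processors, which is one reason RNNs are slow to train and hard to scale.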
More recent technical advances, notably the introduction of the Transformer architecture by Google researchers in 2017 and improvements in generative pre-trained transformer (GPT) models since around 2019, have contributed to dramatic improvements in GenAI performance. Transformer models process whole sequences at once rather than analyzing them word by word. They use mathematical techniques called attention or self-attention to detect how data elements, even when far apart in a sequence, influence and depend on each other. These methods make GPT models faster to train, more efficient at understanding context, and highly scalable.
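The self-attention idea can be shown in miniature. This sketch is deliberately simplified: the queries, keys, and values are the input vectors themselves, whereas real transformers derive them through learned projection matrices and use many attention heads in parallel.

```python
import math

def softmax(xs):
    # normalize scores into weights that sum to 1
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(vectors):
    """Scaled dot-product self-attention over a list of vectors.

    Every position attends to every other position, so distant elements
    can influence each other directly, and all positions can be computed
    in parallel (unlike an RNN's sequential loop)."""
    d = len(vectors[0])
    out = []
    for q in vectors:  # the position being updated
        # similarity of this position to every position in the sequence
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)
        # output is a weighted mix of all positions' vectors
        out.append([sum(w * v[j] for w, v in zip(weights, vectors))
                    for j in range(d)])
    return out
```

Each output vector blends information from the entire sequence at once, which is how attention captures long-range context that RNNs struggle to retain.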
Other critical components to recent GenAI advances have
been the availability of large amounts of data and the size
of their language models. Large language models (LLMs)


are AI systems that aim to model language, sometimes
using millions or billions of parameters (i.e., numbers in the
model that determine how inputs are converted to outputs).
Repeatedly tweaking these parameters, using mathematical
optimization techniques and large amounts of data and
computational power, increases model performance.
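The "repeated tweaking" of parameters can be illustrated with the simplest possible case, a one-parameter model fit by gradient descent. The data and learning rate here are invented for illustration; LLMs apply the same principle to billions of parameters at once.

```python
# Fit y = w * x to toy data by nudging w downhill on the error surface.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying relationship: y = 2x

def loss_gradient(w):
    # derivative of mean squared error with respect to the parameter w
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0
for _ in range(200):
    w -= 0.05 * loss_gradient(w)  # each tweak reduces the prediction error
```

After enough iterations, `w` converges to roughly 2.0, the value that best explains the data, which is the sense in which optimization "increases model performance."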
Notably, GenAI models work to match the style and appearance of the underlying training data. They have also demonstrated emergent abilities, meaning capabilities that their developers and users did not anticipate but that emerge as the models grow larger.
LLMs  have been characterized as foundation models (also
called general-purpose AI), meaning models trained on
large and diverse datasets that can be adapted to a wide
range of downstream tasks. As described by the Stanford
University Institute for Human-Centered AI, foundation
models may be built on or integrated into multiple AI
systems across various domains (e.g., text-based GPT
models that can perform arithmetic and computer
programming tasks, which were outside the scope of their
original training). This capability has the potential for both
benefits (e.g., concentrating efforts to reduce bias and
improve robustness) and drawbacks (e.g., security failures
or inequities that flow to downstream applications).

Capabilities and Advances
The increase in size of recent GenAI models (with hundreds
of billions or trillions of parameters) has led to improved
capabilities over previous systems (with millions or a few
billion parameters). According to the AI Index 2024 Annual
Report, LLMs have surpassed human performance on
traditional English-language benchmarks, and the report
argues that the rapid advancement has led to the need for
more comprehensive benchmarks.
Initial GenAI tools tended to excel at single input and output types, such as text to text for chatbots or text to image for image generators. More multimodal GenAI models are now available, meaning they can process and integrate information from multiple types of data, or modalities, simultaneously. For example, Google's Gemini models can accept text, image, audio, and video inputs and provide text and image outputs.
Beginning in late 2024, companies introduced what have been termed reasoning models: models that use a chain-of-thought technique in an attempt to refine their thinking process, try different strategies, and recognize their mistakes (e.g., OpenAI's o1 and o3-mini models, Anthropic's Claude 3.7 Sonnet, and DeepSeek's R1 model), though reasoning models still frequently make mistakes.
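The propose-check-retry pattern behind such models can be caricatured in a short sketch. This is purely illustrative: `toy_model` is a stub that cycles through hypothetical answers, and real reasoning models perform these steps internally over generated text rather than through an external loop.

```python
def toy_model(question, attempt):
    # stand-in for a model proposing an answer; cycles through candidates
    candidates = ["30", "45", "40"]
    return candidates[attempt % len(candidates)]

def checks_out(question, answer):
    # a verifier for one arithmetic question: 60 miles / 1.5 hours = 40 mph
    return answer == "40"

def answer_with_retries(question, max_attempts=3):
    # propose an answer, check it, and try a different strategy on failure
    for attempt in range(max_attempts):
        proposal = toy_model(question, attempt)
        if checks_out(question, proposal):
            return proposal
    return proposal  # out of attempts: return the last proposal anyway
```

The final fallback mirrors the caveat above: a model that checks its work still returns a wrong answer when every attempt fails the check.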
Along with the development of large-scale models,


https://crsreports.congress.gov

