Algorithmic Accountability from 10,000ft
Update! The Algorithmic Accountability Act was reintroduced on September 21, 2023, by Senators Wyden and Booker and Representative Clarke, along with 14 other co-sponsors.
In January of 2021, I moved to Washington, DC. It was a difficult week to come to this nation’s capital, but I am grateful I did. I spent the next year serving as a technology advisor to Senator Ron Wyden (D-OR) and learning about being a staffer in Congress through the TechCongress Congressional Innovation Fellowship. I had incredible mentors and champions both on and off the Hill, and I was able to work on many different important tech policy issues.
To apply to TechCongress, visit techcongress.io/apply. You can read about what motivated me to work in Congress and see my answers to the 2021 cohort application as a reference.
One of the things that I am most proud of having had the opportunity to work on is the Algorithmic Accountability Act of 2022, introduced in February of that year. The bill reflects the thinking and input from many, many brilliant people, but I’m glad to have been one of the key staffers crafting this text behind the scenes. The Algorithmic Accountability Act of 2022 had some really cool ideas articulated in it, but (as I experienced when I first started on the Hill back in January of 2021) legislation can be difficult to read for those who aren’t deeply familiar with it.
What follows is a cleaned up, edited, and expanded version of what was originally shared on Twitter (and later Mastodon).
Due to length, I’ve split it up into parts:
- Algorithmic Accountability from 10,000ft (that’s this!)
- Why I’m (still) hyped about the Algorithmic Accountability Act
- How to get into AI policy (coming soon)
Let’s get into it!
For starters: the Algorithmic Accountability Act of 2022 is a bill “to direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes, and for other purposes.”
But what does that mean?
Let’s back up a bit. The Algorithmic Accountability Act of 2022 is a piece of legislation that was introduced in the 117th Congress of the United States. It is a bill: a document full of legal-sounding language that was written and submitted for Congress to consider turning into a law. The 2022 Algorithmic Accountability Act is actually a revision (an update) and reintroduction (a re-submission for consideration) of an earlier bill originally introduced in 2019. That earlier bill was itself an independent introduction of text that was originally included as a piece of a different 2019 bill called the Mind Your Own Business Act, which was also revised and reintroduced earlier in 2021.
You might be noticing a pattern.
It’s pretty common for US federal bills to be revised, reintroduced, remixed, and otherwise Frankensteined into different versions as people make edits, incorporate feedback, and even change offices. The Algorithmic Accountability Act underwent some pretty significant updates from 2019 to 2022 and ultimately got quite a bit longer than its predecessor. This was necessary to clarify definitions, explain processes, reflect best practices, and decrease ambiguity. Personally, I now have a much greater understanding of why lawyers are Like That and a greater appreciation for specificity and for caring where the comma goes! There can be very good reasons for legal text to be really long and wordy.
So what does “to direct the Federal Trade Commission, etc etc” actually mean?
Here’s the tl;dr: The Algorithmic Accountability Act of 2022 says that the US Federal Trade Commission (FTC), one of the federal agencies that regulates how companies behave as part of its mission to protect consumers, needs to create and then enforce requirements for companies to assess the impacts of “augmented critical decision processes.” Here’s a one-pager summarizing it, as well.
That was already a lot, so we’re going to break it down further.
An “augmented critical decision process” is a process where an “automated decision system” is used to make a “critical decision.”
What are “automated decision systems,” you say? Here’s exactly what it says in Section 2(2) (or “§2(2)” if you wanna be fancy):
The term “automated decision system” means any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.
In essence, these are computational systems, and in this bill, they are pretty broadly defined. This reflects research from experts like Rashida Richardson recognizing both that:
- Technology evolves and definitions need to be robust against the rapid rate of change, AND
- Many harmful systems are… kinda boring!
While new innovations in AI and machine learning with deep neural nets are dazzling (and sometimes terrifying!), a lot of the automation that is taking place across society is not particularly technologically advanced. Even so, automated technologies have the power to scale benefits and harms to millions of people. (This is especially true when they are used to make “critical decisions” about people’s lives!) So—even though it’s often thought of as an AI bill—the Algorithmic Accountability Act of 2022 doesn’t specifically focus on AI or particular automation techniques.
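To make that concrete, here’s a deliberately boring, entirely hypothetical sketch (my own illustration, not language from the bill or any real product) of the kind of unglamorous automation that would still count as an automated decision system under this definition, because its output serves as the basis for a decision about housing:

```python
# Hypothetical example: a bare-bones tenant screening rule.
# No machine learning, no neural nets, just threshold checks,
# yet its output serves as a basis for a critical decision (housing).

def screen_rental_application(credit_score: int, monthly_income: float, rent: float) -> bool:
    """Return True if the application is approved, False if it is rejected."""
    # Arbitrary, made-up cutoffs for illustration only.
    meets_credit_bar = credit_score >= 650
    meets_income_bar = monthly_income >= 3 * rent
    return meets_credit_bar and meets_income_bar

# The result of this computation helps decide who gets housing, so a
# system built on it would fall within the bill's broad definition.
print(screen_rental_application(credit_score=620, monthly_income=4500.0, rent=1500.0))  # False
```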
Okay, so we have a definition for an automated decision system; now what is a “critical decision”? Critical decisions are decisions relating to consumers’ access to, or the cost, terms, or availability of, education & vocational training, employment, essential utilities, family planning, financial services, healthcare, housing or lodging, or legal services. (We will dig into this more in Why I’m (still) hyped about the Algorithmic Accountability Act, but you might notice that there are parallels between this language and the EU AI Act’s 2021 “Annex III: High-risk AI Systems Referred To In Article 6(2).”)
So that’s what the bill says it’s about. Putting it all together: it’s about telling the FTC to create and then enforce requirements for companies to assess the impacts of using computational systems whose results serve as (or are intended to serve as) a basis for a decision or judgment about access to, or the cost, terms, or availability of, a bunch of critical stuff in people’s lives like education, employment, healthcare, and housing.
That’s quite a mouthful, which is why legislative texts often define a bunch of terms to serve as a shorthand (kind of like creating variables in computer code).
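To stretch that analogy a bit (this is just an illustration of how the defined terms nest inside each other, paraphrased rather than the statutory language):

```python
# Loose paraphrases of the bill's defined terms, written like named
# constants to show how the shorthand composes. Not legal text!

AUTOMATED_DECISION_SYSTEM = (
    "any system, software, or process that uses computation, "
    "the result of which serves as a basis for a decision or judgment"
)

CRITICAL_DECISION = (
    "a decision relating to consumers' access to, or the cost, terms, or "
    "availability of, things like education, employment, essential utilities, "
    "financial services, healthcare, housing, or legal services"
)

# The shorthand term reuses the two definitions above, just like a
# variable built out of other variables.
AUGMENTED_CRITICAL_DECISION_PROCESS = (
    f"a process in which an automated decision system ({AUTOMATED_DECISION_SYSTEM}) "
    f"is used to make a critical decision ({CRITICAL_DECISION})"
)

print(AUGMENTED_CRITICAL_DECISION_PROCESS)
```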
In part 2 on why I’m (still) hyped, I’ll break down some of the things that I personally find most exciting, but if you want more context you can read the section-by-section summary of the bill (or even the full text if you’re into that), along with other resources linked at the bottom of this press release.
Read more:
- Why I’m (still) hyped about the Algorithmic Accountability Act
- How to get into AI policy (coming soon)