As artificial intelligence systems such as ChatGPT and Midjourney have become increasingly prominent, so have concerns about the effects such programs may have on the economy and society at large. With more businesses incorporating artificial intelligence ("AI") into their operations, these apprehensions become more salient every day. While the potential uses of AI for innovation, automation, and streamlining tasks are great, the algorithms powering AI are not free from the biases reflected in the data and content they are fed, creating risks of violating civil rights and consumer protection laws.
This week, officials from four federal agencies issued a joint statement regarding enforcement efforts aimed at addressing these concerns: the Consumer Financial Protection Bureau (CFPB), the Department of Justice (DOJ), the Equal Employment Opportunity Commission (EEOC), and the Chair of the Federal Trade Commission (FTC). The statement, available here, describes how these agencies' enforcement authorities extend to AI programs, particularly as these programs may "perpetuate unlawful bias" and "automate unlawful discrimination." In addition to explaining how bias in the datasets powering AI can result in discriminatory outcomes, the statement details how a lack of transparency in the design and inner workings of AI algorithms, as well as improper use, may also contribute to the risk of discrimination posed by the growth of such programs.
Beyond asserting the agencies’ powers to police AI and explaining the reasoning behind their concerns, the statement summarizes current enforcement steps each agency is taking:
- The CFPB and EEOC have each issued communications confirming the application of the laws they enforce to the use of AI. The CFPB has also clarified that use of AI in making credit decisions is not a defense to violations of those laws.
- With regard to discrimination in housing, earlier this year the DOJ filed a statement of interest in a Fair Housing Act lawsuit in federal court, explaining that a landlord's use of algorithms to screen potential tenants does not absolve the landlord of liability if that use results in the unlawful exclusion of protected individuals from housing opportunities.
- The FTC has warned businesses that their use of AI may violate the FTC Act if it has discriminatory impacts or if AI is deployed without precautions to reduce the risk of bias. Beyond warnings, the FTC has gone further by requiring the destruction of algorithms and apps created using data that should not have been collected, including data illegally collected from children. See https://www.ftc.gov/news-events/news/press-releases/2022/03/ftc-takes-action-against-company-formerly-known-weight-watchers-illegally-collecting-kids-sensitive.
The statement ends by reiterating the agencies' "resolve to monitor the development and use" of AI. Because the statement is for informational purposes only, it has no legally binding effect and creates no new obligations. Still, it's a clear indicator that these agencies will be keeping a close, scrutinizing eye on the use of AI, with guidance and regulations likely to follow.
About Data Points: Privacy & Data Security Blog
The technology and regulatory landscape is rapidly changing, impacting how companies across all industries operate, particularly in the ways they collect, use, and secure confidential data. We provide transparent, cutting-edge insight on critical issues and dynamics. Our team informs business decision-makers about the information they must protect and what to do if and when security is breached.