The FTC Is Closing in on Runaway AI

Teenagers deserve to grow, develop, and experiment, says Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), a nonprofit advocacy group. They should be able to test or abandon ideas “while being free from the chilling effects of being watched or having information from their youth used against them later when they apply to college or apply for a job.” She called for the Federal Trade Commission (FTC) to make rules to protect the digital privacy of teens.

Hye Jung Han, the author of a Human Rights Watch report about education companies selling personal information to data brokers, wants a ban on personal data-fueled advertising to children. “Commercial interests and surveillance should never override a child’s best interests or their fundamental rights, because children are priceless, not products,” she said.

Han and Fitzgerald were among about 80 people who spoke at the first public forum held by the FTC to discuss whether it should adopt new rules to regulate personal data collection and the AI fueled by that data.

The FTC is seeking the public’s help to answer questions about how to regulate commercial surveillance and AI. Among those questions is whether to extend the definition of discrimination beyond traditionally protected classes like race, gender, or disability to include teenagers, rural communities, homeless people, or people who speak English as a second language.

The FTC is also considering whether to ban or limit certain practices, restrict how long companies can retain consumer data, or adopt measures previously proposed by congressional lawmakers, like audits of automated decision-making systems to verify their accuracy, reliability, and error rates.

Tracking people’s activity on the web is the foundation of the online economy, dating back to the introduction of cookies in the 1990s. Data brokers, often obscure companies, collect intimate details about people’s online activity and can make predictions about individuals, like their menstrual cycles or how often they pray, as well as collect biometric data like facial scans.

Cookies underpin online advertising and the business models of major companies like Facebook and Google, but today it’s common knowledge that data brokerages can do far more than advertise goods and services. Online tracking can bolster attempts to commit fraud or trick people into buying products or disclosing personal information, and the resulting data can even be shared with law enforcement agencies or foreign governments.

As an FTC document that proposes new rules puts it, that business model is “creating new forms and mechanisms of discrimination.”

Last month, the FTC voted 3-2 along party lines to adopt an Advance Notice of Proposed Rulemaking (ANPR) to consider drafting new rules addressing unfair or deceptive forms of data collection and AI. What’s unclear is where the commission will draw the line.

The FTC is taking some data brokers to court, most recently Kochava, a company that sells location data tied to sensitive places like abortion clinics and domestic violence shelters and that is suing the FTC in turn. But those punishments come on a case-by-case basis. New rules can address systemic problems and tell businesses what kind of conduct can lead to fines or land them in court.

The beginning of the rulemaking process marks the commission’s first major step toward regulating AI since it hired staff dedicated to artificial intelligence a year ago. Any attempt to create new rules would require proving that an unfair or deceptive business practice is prevalent and meets the legal threshold for unfairness. The FTC will accept public comment on the commercial surveillance and AI ANPR until October 21. Should the commission decide new rules are necessary, it will release a notice of proposed rulemaking and again allow for a public comment period before making those rules final.
