Summary
Fable, a social media app focused on books, faced backlash for its AI-generated 2024 reading summaries containing offensive and biased commentary, like labeling a user a “diversity devotee” or urging another to “surface for the occasional white author.”
The feature, powered by OpenAI’s API, was intended to be playful and fun. However, some of the summaries took on an oddly combative tone, making inappropriate comments about users’ diversity and sexual orientation.
Fable apologized, disabled the feature, and removed other AI tools.
Critics argue the response was insufficient, highlighting broader issues of bias in generative AI and the need for better safeguards.
If these LLMs are trained on the garbage of the internet, why is everyone surprised they keep spitting out vitriol?
It’s like all the other terrible ideas we wrote about in sci-fi. The jokes about a general AI finding the internet and then deciding to nuke us all have been around for decades.
Then they fucking trained the LLMs on that very data.
We will deserve our fate. At least the assholes on the web who trained that shit will.
Garbage from the Internet for the Internet.
For gamers, by gamers.
GIGO: garbage in, garbage out.