Summary

Fable, a social media app focused on books, faced backlash after its AI-generated 2024 reading summaries contained offensive and biased commentary, such as labeling one user a “diversity devotee” and urging another to “surface for the occasional white author.”

The feature, powered by OpenAI’s API, was intended to be playful. Instead, some of the summaries took on an oddly combative tone, making inappropriate comments about users’ diversity and sexual orientation.

Fable apologized, disabled the feature, and removed other AI tools.

Critics argued the response was insufficient, pointing to broader issues of bias in generative AI and the need for stronger safeguards.