For me it’s because I’m not convinced LLMs are really a stepping stone to any actual AI. They don’t have educational applications imo because there isn’t any way they can separate truth from fiction. They don’t understand the words they output; they’re just predictive text generators on a huge scale. This isn’t something that can change with better tech either; it’s baked into the very concept of an LLM. And worse, when they’re wrong there’s no way to tell without already knowing the answer to the question you’re asking. They’re literally just monkeys with typewriters. This is an extremely good article about the kinds of problems I’m talking about.
Except they didn’t avoid it. They knew it was happening weeks in advance and didn’t do shit to stop it. Just because franchise owners have a bit of autonomy does NOT mean corporate won’t bring the hammer down if they think something will tarnish the brand.