- it’s not actually AI
- it’s just fancy autocomplete / glorified Markov chains
- it can’t reason it’s just a pLagIaRisM MaChiNe
Now if I want to win the annoying Lemmy bingo I just need to shill extra hard for more restrictive copyright law!
Doing the Lord’s work in the Devil’s basement
Reasoning has nothing to do with knowledge though.
You should have asked chatgpt to explain the comment to you cause that’s not what they say
Yeah it always strikes me how religious extremism is framed. You rarely hear about Christian extremists, who operate in the open on all social networks.
Yet, you could argue that Christian extremists have done more harm to Western societies in the last 20 years than any Islamic group.
That’s a nice hypothetical but the facts of this case are much simpler. Would you agree that a country is sovereign, and entitled to write its own laws? Would you agree that a company has to abide by a country’s laws if it wants to operate there? Even an American company? Even if it is owned by a billionaire celebrity?
Then you have to agree that piracy is theft and people pirating content should be sued.
Even if you were extremely generous and didn’t factor the scams into your analysis, the reality is that a blockchain solves problems 99.9% of people will never face. That breaks the whole imagined model: your product is ultra-niche, yet it relies on mass adoption for its security.
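To put a number on “relies on mass adoption for its security”, here’s a back-of-the-envelope sketch, assuming a proof-of-work chain; the hashrates and rental prices are completely made up for illustration, nothing here is real data:

```python
# Rough sketch: on a proof-of-work chain, a 51% attack means out-hashing the
# honest network, so the attack cost scales with how much hash power adoption
# brings in. All figures below are invented, purely illustrative.

def attack_cost(honest_hashrate_th: float, cost_per_th_hour: float, hours: float) -> float:
    """Cost to rent slightly more hash power than the honest network for `hours`."""
    attacker_hashrate = honest_hashrate_th * 1.01  # just over the honest total
    return attacker_hashrate * cost_per_th_hour * hours

# Widely adopted chain vs. ultra-niche chain (same rental price, same duration).
print(f"big chain:   ${attack_cost(600_000_000, 0.05, 6):,.0f}")
print(f"niche chain: ${attack_cost(2_000, 0.05, 6):,.0f}")
```

Same rules, wildly different price tags, because the “adoption” input differs by five orders of magnitude.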
Just yesterday I got an ad for actual shrooms! The website even had a “how is this legal” section, but the legal theory in there was… Not very convincing…
Then these models are stupid
Yup that is kind of the point. They are math functions designed to approximate human tasks.
These models should start out with basics of language, so they don’t have to learn it from the ground up. That’s the next step. Right now they’re just well-read idiots.
I’m not sure what you’re pointing at here. Simplified, how they do it right now is: a small model cuts text into tokens (“knowledge of syllables”), the tokens are fed into a larger model that turns them into semantic information (“knowledge of language”), and that is fed to a ridiculously fat model which “accomplishes the task” (“knowledge of things”).
The first two models are small enough that they can be trained on the kind of data you describe, classic books, movie scripts etc… A couple hundred billion words maybe. But the last one requires orders of magnitude more data, in the trillions.
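As a toy sketch of that three-stage split (tiny invented sizes, a fake hash “tokenizer”, and nothing resembling a production training setup):

```python
# Toy version of the pipeline described above: tokenizer -> embedding -> big model.
# Sizes are invented and the "tokenizer" is a stand-in, purely for illustration.
import torch
import torch.nn as nn

VOCAB_SIZE = 50_000   # assumption: roughly BPE-vocabulary sized
EMBED_DIM = 768       # assumption: small embedding width

def toy_tokenizer(text: str) -> torch.Tensor:
    """Stand-in for the small tokenizer model ("knowledge of syllables")."""
    ids = [hash(word) % VOCAB_SIZE for word in text.split()]
    return torch.tensor(ids).unsqueeze(0)                 # shape: (1, seq_len)

embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)           # "knowledge of language"
big_model = nn.TransformerEncoder(                        # "knowledge of things"
    nn.TransformerEncoderLayer(d_model=EMBED_DIM, nhead=12, batch_first=True),
    num_layers=2,                                         # the real one is vastly deeper
)

tokens = toy_tokenizer("the cat sat on the mat")
hidden = big_model(embedding(tokens))                     # (1, seq_len, EMBED_DIM)
print(hidden.shape)
```

The first two stages are cheap to train; it’s the last one that eats the trillions of words mentioned above.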
That’s what smaller models do, but it doesn’t yield great performance because there’s only so much stuff available. To get to gpt4 levels you need a lot more data, and to break the next glass ceiling you’ll need even more.
Pfff haha. Read the comment again. Slowly. You can do it!
I think we’re not talking about the same thing. Individual acts of terrorism are not significant in my view, the US gets a bunch of politically motivated shootings every year and it accomplishes absolutely nothing. They are horrible tragedies, but not political drivers.
I’m not Nostradamus but my guess is that if Trump were to suddenly up and die, his movement would fizzle out pretty quickly. His lackeys would fight for power Game of Thrones style, which would fragment the movement and make it essentially toothless. His fans would be agitated for a while, most wouldn’t do shit about it, a few would attempt shootings, even fewer would succeed and make headlines for a couple days. But nothing politically significant would happen. Just my $0.02!
The attempted coup made sense because they were united under their leader. Even then, while it was shocking, it didn’t accomplish any of their immediate goals. Without Trump, my guess is the high level individuals that effectively coordinated it would be too busy fighting each other to accomplish anything significant.
The other examples are individuals committing criminal acts, not significant actions. Maybe you’d see a flare up of those, but probably not that much as crazy individualists get bored quick.
You’d think that, but significant action requires significant coordination. Coordinating these people without their guru would be like herding cats. Possibly, the leaders and influencers would tear each other apart, leading to mixed messaging and, in turn, apathy in the ranks.
Yeah there’s nothing I like more than my kid’s first drawings of me. I look like a big lovable bear, it’s the greatest thing.
IMO it’s largely a consequence of the center-left and center-right (Hollande, Macron) completely abandoning the working class, and demonizing the left whilst cozying up to the far-right (mostly Macron, though Hollande definitely slid right over his term).
While I am no fan of Hollande and establishment socialism, I feel like he’s really the butt of the joke here. Whatever we do, we always seem to punch left.
He was president for 5 years, and yeah, it was limp-dicked as fuck and veered right mid-course, but if you remember, he was basically elected on a platform of not being Sarkozy. People were so KOed by Sarkozy’s term that Hollande’s whole angle was to be the “back to normal” president. And that’s a promise he kind of kept: if you look at his time in office, sandwiched between two hyper-mediatized hard-right presidencies, well yeah, it felt like the kind of politics our parents talked about. Not great politics, just normal, not-sadistic politics.
It seems a lot of people believe there is a magical bag full of candidates that can potentially win general elections, but the Dems for some reason decided not to open it…
In French we use the term “diagonale du vide” (the empty diagonal)
No, the article is badly worded. Earlier models already had reasoning skills with some rudimentary chain-of-thought (CoT), but they leaned more heavily into it for this model.
My guess is they didn’t train it on the 10-trillion-word corpus (which is expensive and has diminishing returns) but rather on a heavily curated RLHF dataset.
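For anyone who hasn’t seen the distinction, here’s a toy illustration of a plain prompt versus a chain-of-thought style answer, plus what a single curated preference record might look like; the wording and format are mine, not anything from the article or the actual training pipeline:

```python
# Toy illustration only: a plain prompt vs. a chain-of-thought style answer,
# and one RLHF-style preference record. Invented for this comment.

plain_prompt = "Q: A train leaves at 9:15 and arrives at 11:40. How long is the trip? A:"

cot_answer = (
    "Let's think step by step. From 9:15 to 11:15 is 2 hours, "
    "and from 11:15 to 11:40 is 25 minutes, so the trip is 2 h 25 min."
)

# A curated preference pair: human raters mark the reasoned answer as "chosen"
# and a terse wrong answer as "rejected"; that's the kind of heavily curated
# data the guess above refers to.
preference_record = {
    "prompt": plain_prompt,
    "chosen": cot_answer,
    "rejected": "2 hours.",
}
print(preference_record["chosen"])
```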