Why do I even bother using RSS, if the Lemmings post it just as quickly?
So you can race to be the first person to make the Lemmy post and rake in those sweet upvotes
Man is making trade deals similar to a Civilization NPC
I was hoping for a smaller model, something in the 14B range… My computer won’t run any of these.
Also, a 2TB model. Jesus.
Rather than read PCGamer talk about Anthropic’s article you can just read it directly here. It’s a good read.
If this is implemented right it should flag accounts so human reviewers can follow up on it, not take action on its own.
I’ve been using it and people are sleeping on it. It’s easily the best LLM on the market right now, even if you’re not using it for coding. Very good reasoning skills, and it doesn’t have the issue other reasoning models do, where they overthink or keep saying “but wait” and confuse their own output.
New sci-fi horror enemy just dropped
Don’t forget OpenAI constantly saying “We had an AI SO POWERFUL that it can be very dangerous!!! And no you can’t see it yet.”
There is: AI bots are scraping the entire internet and overloading servers by downloading everything. That’s literally why every website is adding captchas and hiding behind Cloudflare now.
Edit: just so you know, downvoting me doesn’t change actual facts
It’s a necessary evil sadly. The internet is so overrun with bots that if there wasn’t a captcha or those cloudflare “click here to pass” buttons, the internet would implode.
With this, OpenAI is officially starting to crack. They’ve been promising a lot and not delivering; the only reason they would push out GPT-4.5, even though it’s worse and more expensive than the competition, is that the investors are starting to get mad.
Finally, a data breach that doesn’t include me. Good to know I dodged it.
To be fair it starts with 32GB of RAM, which should be enough for most people. I know it’s a bit ironic that Framework have a non-upgradeable part, but I can’t see myself buying a 128GB machine and hoping to raise it any time in the future.
If you really need an upgradeable machine you wouldn’t be buying a mini-PC anyways, seems like they’re trying to capture a different market entirely.
If they can make a decent 2-in-1 in the $500 range, it would be massive. It doesn’t really need great specs; the major issues with these laptops are build quality and battery life.
in developing our reasoning models, we’ve optimized somewhat less for math and computer science competition problems, and instead shifted focus towards real-world tasks that better reflect how businesses actually use LLMs.
I was just about to say how useless these benchmarks are. Plenty of LLMs claim to score better than Claude and GPT-4, but in real-world use Claude and GPT-4 have always been more reliable, Claude especially. Good to hear they’re not just chasing scores.
Uh-huh. “Feedback”, read: the risk of class-action lawsuits from everybody they tried to stop from reaching the support they paid for.
So the answer, as always, is ban useless, power-sucking, unreliable, copyright-infringing AI.
That’s naive. It’s way too late for any of that. If some country decided to ban AI, all the engineers would just move somewhere else.
I understand it well. It’s still relevant to mention that you can run the distilled models on consumer hardware if you really care about privacy. 8GB+ of VRAM isn’t crazy, especially with the unified memory on MacBooks or some of the Windows laptops releasing this year with 64+GB of it. There are also sites re-hosting various versions of DeepSeek, like Hugging Face hosting the 32B model, which is good enough for most people.
Instead, the article is written as if there is literally no way to use DeepSeek privately, which is simply wrong.
Never use a pixelate/blur filter these days. If you want to hide something in a screenshot, use a black box.
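The difference can be sketched with Pillow (a minimal, made-up example, assuming Pillow is installed; the image and the “secret” text are invented for illustration). A blur is just a local average of the original pixels, so the blurred region still carries structure that deblurring tools can exploit, while a solid black box overwrites the pixels entirely:

```python
from PIL import Image, ImageDraw, ImageFilter

# Hypothetical screenshot containing something sensitive.
img = Image.new("RGB", (200, 100), "white")
ImageDraw.Draw(img).text((10, 40), "secret-token-12345", fill="black")

# Blurring averages nearby pixels: the region still varies with the
# underlying text, so information about it survives.
blurred = img.filter(ImageFilter.GaussianBlur(radius=3))

# A black box replaces every pixel in the region, destroying the data.
redacted = img.copy()
ImageDraw.Draw(redacted).rectangle((0, 30, 200, 70), fill="black")
```

After this runs, every pixel inside the boxed region of `redacted` is pure black, while the same region of `blurred` still contains varying gray values derived from the hidden text.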