

plus - fewer staffers, less software. less software, fewer attack surfaces. they should lay off everybody, then you don’t need servers, then nobody can hack you!
Pedometers, you mean?
Well, true, but tyres wouldn’t double the distance, it’s not that simple. The case isn’t clear, of course, but the claim says that the odometer tried to reduce the range after it got out of the warranty period.
Not saying anything about the merit of the case, just that the claim itself sounds interesting and that, if true, you can’t wave it away with “you changed tyres”.
Fucking hell, that site has a million partners who all have “legitimate interest”. I clicked on like a third of them and then gave up. I don’t need their shit.
Now that they have their own Putin, why not?
hunter2
it doesn’t look like *s to me
They have been making their own x86 knock-offs for a while now, but not at the same scale as the “regular” ones - i.e. they’d been doing it at 14nm or so, so less efficient.
I don’t know if they have a better fab process since then, or at what scale.
You could just block most of the internet services - gmail, youtube, facebook etc under these rules, and then wreak havoc. I bet they’d roll back these laws in record time if someone pushed them to the limits :/
I mean, often enough even that phone call won’t help.
But you’re right, as long as everything is working normally, running on premises slows you down - you have to do maintenance, updates, etc. Cloud (of all kinds) takes that work away and you can move faster. And in the VC-driven daily and eternal grind, moving faster is the only thing that matters.
I think not many people are aware of that. No matter how well you build systems with this type of AI, the AI still doesn’t actually know anything. Maybe they’re useful, maybe not, but the awareness that everything is actually just made up, by statistics and such, is lacking from people’s minds.
Yes, but if they do find a poor shmuck who wants the job, they can hope he’ll undervalue himself and ask for even less.
I will perhaps be nitpicking, but… not exactly, not always. People get their shit hacked all the time due to poor practices. And then those hacked things can send all the emails and texts and other spam they want, and the headers won’t be forged, so you still need spam filtering.
left-pad as a service.
It’s probably AI-supported slop.
(Not to be confused with our premium product, ParticleServices, which just shoots neutrinos around one by one.)
No, it’s just that it doesn’t know if it’s right or wrong.
How “AI” learns is it goes through a text - say, a blog post - and turns it all into numbers. E.g. the word “blog” is 5383825526283, the word “post” is 5611004646463. Over a huge amount of text, a pattern emerges: the second number almost always follows the first. Basically statistics. And it does that for all the words and word combinations it finds - an immense amount of text is needed to find all those patterns. (Fun fact: that’s why companies like e.g. OpenAI, which makes ChatGPT, need hundreds of millions of dollars to “train the model” - they need enough compute power, storage and memory to read the whole damn internet.)
So now, how do the LLMs “understand”? They don’t - it’s just a bunch of numbers and statistics about which word (turned into that number, or “token” to be more precise) follows which other word.
So now. Why do they hallucinate?
When they get your question, they turn all the words in your prompt into numbers again, and then go find in their huge databases which words are likely to follow your words.
They add in a tiny bit of randomness - they sometimes replace a “closer” match with a synonym or a less likely match - so they even seem real.
They add “weights” so that they would rather pick one phrase over another, or e.g. give some topics very, very small likelihoods - think pornography or something. That’s “tweaking the model”.
But there’s no knowledge as such, mostly it is statistics and dice rolling.
So the hallucination is not “wrong”, it’s just statistically likely that those words would follow, based on your words.
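If it helps, the whole idea can be sketched in a few lines of Python as a toy bigram model. To be clear, this is just an illustration of the “count which word follows which, then roll dice” intuition - real LLMs use neural networks, subword tokens and billions of parameters, and the tiny corpus here is made up:

```python
import random
from collections import Counter, defaultdict

# Made-up "training data" - real models read the whole internet.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word follows which (the statistics part).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, temperature=1.0):
    """Pick a likely next word; temperature is the dice-rolling part."""
    counts = follows[word]
    words = list(counts)
    # Higher temperature flattens the distribution -> more randomness.
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(words, weights=weights)[0]

# "Generation": start from a word and keep predicting the next one.
text = ["the"]
for _ in range(6):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

Whatever it prints is grammatical-looking but meaningless - the model has no idea what a cat or a rug is, it only knows which number tends to follow which. That’s the hallucination problem in miniature.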
Did that help?
You never review code when you have no time to do an actual review? Looks good to me :)
Gangaacrupt?
It’s 12, looks good to me.