You literally need physical access to the device to exploit it
You don’t need physical access. Read the article. The researcher used a physical USB connection to discover that the Bluetooth firmware has backdoors; exploiting them doesn’t require physical access.
It’s Bluetooth that’s vulnerable.
He demonstrated that he is senile when he blamed the current trade agreements on the president who signed the last agreement. Trump signed the last agreement. He can’t remember.
That’s why he flips from tariffs off to tariffs on every week. He has no plan. He has lost his memory.
Which is weird, because one of Rossmann’s sources claimed they were on the phone with Brother, asked how to do manual registration, and were told it couldn’t be done unless a genuine Brother toner cartridge was installed.
“Training a new model” is equivalent to “making a new game” with better graphics.
I’ve already explained that analogy several times.
If people pay you for the existing model you have no reason to immediately train a better one.
When you throw more hardware at them, they are supposed to develop new abilities.
I don’t think you understand how it works at all.
Data is collected. Training is done on the data. Training can also be done on the output of an already-trained model (DeepSeek). You now have a model. That model is a static piece of software. It takes roughly 700 GB of RAM to run (DeepSeek). Throwing more hardware at the model does nothing but give you a quicker response.
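To spell out that pipeline, here’s a toy sketch (PyTorch, made-up sizes, nothing DeepSeek-specific): training produces a static set of weights, and faster hardware only changes how quickly the frozen model answers.

```python
# Toy sketch: once training is done, the model is just a fixed set of weights.
# Running it on a bigger/faster machine changes how quickly you get an answer,
# not what the answer is.
import torch
import torch.nn as nn

# "Data is collected" -- random toy data standing in for a real corpus.
x = torch.randn(1024, 16)
y = torch.randn(1024, 1)

# "Training is done on the data."
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# "You now have a model" -- a static artifact you can save and ship.
torch.save(model.state_dict(), "model.pt")

# Inference: the weights are frozen. Moving this to a GPU (model.to("cuda"))
# makes the same forward pass faster; it does not make the model smarter.
model.eval()
with torch.no_grad():
    print(model(torch.randn(1, 16)))
```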
If everyone pays you to use your model, you have no reason to develop a new one. Like Skyrim.
I’m supposed to be able to take a model architecture from today, scale it up 100x and get an improvement.
You can make Crysis run at higher FPS. You can add polygons (remember ATI clown feet?). You can add detail to textures. https://research.nvidia.com/publication/2016-06_infinite-resolution-textures
But really the “game” is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.
Which part of diminishing returns not offering as much profit did you not understand?
Current models give MS an extra 30% revenue. If they spend billions on a new model, will customers pay even more? How much more would you pay for a marginally better AI?
Current games have a limit. Current models have a limit. New games could scale until people don’t see a quality improvement. New models can scale until people don’t see a quality improvement.
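To put a rough shape on those limits: published scaling-law papers model loss as a power law in parameter count, which is exactly a diminishing-returns curve. The constants below are only illustrative (roughly the order of magnitude reported in that literature), not measurements of any current model.

```python
# Rough illustration of diminishing returns from scaling alone (pure Python).
# The power-law form L(N) ~ (Nc / N)^alpha comes from published scaling-law
# papers; the constants are assumptions for illustration, not measurements.
ALPHA = 0.076          # assumed exponent, roughly the order reported in the literature
NC = 8.8e13            # assumed constant

def loss(params: float) -> float:
    """Approximate loss for a model with `params` parameters."""
    return (NC / params) ** ALPHA

for n in (1e9, 1e10, 1e11, 1e12):   # 1B -> 1T parameters
    print(f"{n:.0e} params: loss ~ {loss(n):.3f}")

# Each 10x in size (and far more than 10x in cost) shaves off a smaller and
# smaller slice of loss -- the point where people stop noticing the improvement.
```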
Sorry, I meant WDM. Multimode was for home installs.
Just as games have diminishing returns on better graphics (they’re already photorealistic; few pay $2k for a GPU to render a few more hairs), AI has a plateau where it gives good-enough answers that people will pay for the service.
If people are paying you money and the next level of performance is not appreciated by the general consumer, why spend billions that will take longer to recoup?
And again data centers aren’t just used for AI.
If buying a new video card made me money, yes
But the supposition here is that not buying a video card makes you the same money. You’re forecasting free performance upgrades, so there’s no need to spend money now when you can wait and upgrade the hardware once software improvements stop.
And that’s assuming it has anything to do with AI at all, rather than the long-term macroeconomics of Trump destroying the economy, with MS putting off spending because businesses will be slowing down due to the tariff war.
More efficient hardware use should be amazing for AI since it allows you to scale even further.
If you can achieve scaling with software, you can delay current plans for expensive hardware. If a new driver came out that gave GTX 1080-class hardware the performance of an Nvidia 5090, would you still buy a new video card this year?
When all the Telcos scaled back on building fiber in 2000, that was because they didn’t have a positive outlook for the Internet?
Or when video game companies went bankrupt in the 1980’s, it was because video games were over as entertainment?
There’s a huge leap between not spending billions on new data centers (which are used for more than just AI) and claiming that cancellation is proof AI is over.
Cancelling new data centers because DeepSeek has shown a more efficient path isn’t proof that AI is dead, as the author claims.
Fiber buildouts were cancelled back in 2000 because multimode made existing fiber more efficient. The Internet investment bubble popped. That didn’t mean the Internet was dead.
You missed the part where DeepSeek uses a separate inference engine to take the LLM output and reason through it to see if it makes sense.
No, it’s not perfect. But it isn’t just predicting text the way AI was a couple of years ago.
LLMs can now generate answers. Watch this:
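To make the “separate pass that reasons over the output” idea concrete, here’s a minimal control-flow sketch. This is not DeepSeek’s actual pipeline; generate() and verify() are placeholders standing in for the two passes.

```python
# Minimal sketch of a generate-then-check loop. NOT DeepSeek's real pipeline:
# generate() and verify() are stand-ins for whatever models do the two passes.
def generate(prompt: str) -> str:
    # placeholder for the base LLM producing a candidate answer
    return "candidate answer for: " + prompt

def verify(prompt: str, answer: str) -> bool:
    # placeholder for a second pass that checks whether the answer holds up
    return answer.endswith(prompt)

def answer_with_check(prompt: str, retries: int = 3) -> str:
    for _ in range(retries):
        candidate = generate(prompt)
        if verify(prompt, candidate):
            return candidate
    return "no answer passed the check"

print(answer_with_check("why is the sky blue?"))
```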
There’s CAMM2, the new standard for high-speed removable memory. Asus has already released a motherboard that uses it, and it matches the 8000 MT/s of the Framework, which won’t be out until Q3 this year.
Framework chose non-upgradeable memory because it was easier/cheaper. That’s fine, except Framework’s entire marketing has been built around upgradeable hardware.
The person above you confidently posted that it’s shipping now and got an upvote despite being wrong.
I never understand people who confidently post wrong things that are easily googleable.
Most miniPC vendors have already announced AI Max products:
https://www.gmktec.com/blog/gmktec-a-global-leader-in-ai-mini-pcs-unveils-the-amd-ryzen-ai-max-395
A PC lets you replace the CPU and RAM and plug in multiple PCIe cards.
This is less upgradable than the average laptop.
This is a standard a370 mini PC at a high price.
There are Beelink, Minisforum, Aoostar, and many others.
“Depending on how Bluetooth stacks handle HCI commands on the device, remote exploitation of the backdoor might be possible via malicious firmware or rogue Bluetooth connections.”
I of course don’t know the details, but I’m basing my post on that sentence: “remote exploitation of the backdoor might be possible via … rogue Bluetooth connections.”
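For context on what “HCI commands” means here, a sketch of how one is framed on the wire. The vendor-specific OGF 0x3F range is from the Bluetooth Core spec; the OCF and parameter bytes below are placeholders, not the actual ESP32 opcodes from the article.

```python
# How an HCI command is framed (UART transport per the Bluetooth Core spec).
# OGF 0x3F is the spec's vendor-specific space; the OCF and parameter bytes
# here are made up for illustration -- NOT the ESP32 opcodes from the article.
import struct

def hci_command(ogf: int, ocf: int, params: bytes = b"") -> bytes:
    opcode = (ogf << 10) | ocf                     # 16-bit opcode
    # 0x01 = HCI command packet, then opcode (little-endian) and parameter length
    return struct.pack("<BHB", 0x01, opcode, len(params)) + params

# Example: a hypothetical vendor-specific command with two parameter bytes.
pkt = hci_command(ogf=0x3F, ocf=0x0001, params=bytes([0x01, 0x02]))
print(pkt.hex())

# The quoted sentence's point: if a Bluetooth stack lets such commands be
# triggered by something other than the local host (malicious firmware, a
# rogue connection), the undocumented vendor commands become reachable
# without a USB cable.
```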