

Fair.
I’ve removed it, and I’m sorry.
Logic is logic. There is no “advanced” logic that somehow allows you to decipher aspects of reality you otherwise could not. Humanity has yet to encounter anything that cannot be consistently explained in more and more detail, as we investigate it further.
We can and do answer complex questions. That human society is too disorganized to disseminate the answers we do have, and act on them at scale, isn’t going to be changed by explaining the same thing slightly better.
Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability.
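(That “basic” proof really is airtight, for what it’s worth. In Lean 4 the whole thing is machine-checked in one line; shown purely as a toy illustration.)

```lean
-- For natural numbers, 1 + 1 reduces to 2 by definition,
-- so reflexivity alone closes the proof.
example : 1 + 1 = 2 := rfl
```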
Absolutely nothing about humans makes me think we are incapable of finding such answers on our own. And if we are genuinely incapable of developing a definitive answer on something, I’m more inclined to believe there isn’t one than to assume we are simply too “small-minded” to find an answer that is obvious to the hypothetical superintelligence.
But precision of thought orders of magnitude beyond our own.
This is just the “god doesn’t need to make sense to us, his thoughts are beyond our comprehension” argument, again.
Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.
They don’t know, because we don’t tell them. Children in adverse conditions are perfectly capable of understanding the realities of survival.
You are using the fact that there are things we don’t understand yet as if it were proof that there are things we can’t ever understand, or eventually figure out on our own.
That non-sentients cannot comprehend sentience (ants and humans) has absolutely no bearing on whether sentients are able to comprehend other sentients (humans and machine intelligences).
I think machine thinking, in contrast to the human mind, will just be a faster processor of logic.
There is absolutely nothing stopping the weakest modern CPU from running the exact same code as the fastest modern CPU. The only difference will be the rate at which the work is completed.
We’re incapable of even imagining how convincing of an argument a system like this could make.
Vaguely gestures at all of sci-fi, depicting the full spectrum of artificial sentience, from funny comedic-relief idiot to literal god.
What exactly do you mean by that?
This is the same logic people apply to God being incomprehensible.
Are you suggesting that if such a thing can be built, its word should be gospel, even if it is impossible for us to understand the logic behind it?
I don’t subscribe to this. Logic is logic. You don’t need a new paradigm of mind to explore all conclusions that exist. If something cannot be explained and comprehended, transmitted from one sentient mind to another, then it didn’t make sense in the first place.
And you might bring up some of the stuff AI has done in material science as an example of it doing things human thinking cannot. But that’s not some new kind of thinking. Once the molecular or material structure was found, humans have been perfectly capable of comprehending it.
All it’s doing is exploring the conclusions that exist, faster. And when it comes to societal challenges, I don’t think it’s going to find some win-win solution we just haven’t thought of. That’s a level of optimism I would consider insane.
Even if it is, I don’t see what it’s going to conclude that we haven’t already.
If we do build “the AI that will save us” it’s just going to tell us “in order to ensure your existence as a species, take care of the planet and each other” and I really, really, can’t picture a scenario where we actually listen.
OnePlus offloads heat to the charger
Some of it. They omit some circuitry that would have generated additional heat in the phone and put it in the charger instead, but that doesn’t magically mean the battery itself won’t generate the inevitable heat caused by being charged faster. The battery itself only accepts one voltage, so the only way to charge it faster is with more amps.
And my feeling is that they aren’t using the gains from this to make the batteries last, as SUPERVOOC is faster than pretty much every other standard. That makes me think they traded any and all gains in battery health for speed.
Most chargers send the additional energy down the cable as extra voltage, because that doesn’t require a special cable. Converting that voltage into amps in the phone produces a little extra heat, but eliminating that step doesn’t mean you get none from the battery itself as it charges.

You can technically charge at a higher voltage if you set a phone up with more than one lithium cell in series. Some phones do this, but it doesn’t require the OnePlus approach of a special charger that provides a higher current, since any fast charger that can do the usual higher-voltage method of providing extra power will work.
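To put rough numbers on it, here’s a minimal Python sketch. Every figure is assumed for illustration, not any manufacturer’s actual spec:

```python
# Back-of-envelope comparison of where the conversion heat ends up.
# Assumed numbers: a single-cell battery around 3.7 V nominal, a
# 100 W "SUPERVOOC-style" charger, and a 65 W USB-PD charger
# delivering 20 V that the phone must buck down itself.

BATTERY_V = 3.7              # nominal li-ion cell voltage (assumed)
CONVERTER_EFFICIENCY = 0.92  # typical buck converter (assumed)

def charge_current(power_w: float, cell_v: float = BATTERY_V) -> float:
    """Current into the cell for a given charging power."""
    return power_w / cell_v

# OnePlus-style: the charger outputs battery-level voltage at high
# current, so the conversion (and its heat) happens in the brick.
print(f"100 W at the cell: {charge_current(100):.1f} A into the battery")

# USB-PD-style: 20 V comes down the cable, the phone bucks it down,
# and the conversion loss is dissipated inside the phone.
pd_loss_w = 65 * (1 - CONVERTER_EFFICIENCY)
print(f"65 W USB-PD: ~{pd_loss_w:.1f} W of converter heat inside the phone")

# Either way, the cell itself still heats up from I^2 * R_internal --
# that part cannot be offloaded to the charger.
```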
Like you say, I’m curious how they test this. Even if one battery gets more cycles, it’ll degrade with time as well. iPhones fast charge too, but not with the chargers that used to come with the phones; you have to buy a fast charger separately to get faster-than-normal charging.
Also, a tip: you may want to use something like AccuBattery to actually measure the state of the battery. Batteries, being chemical devices, have different capacities straight off the production line, simply by virtue of not being chemically identical down to every molecule. (My Xperia 1 V unfortunately came with 93% of design capacity. Still within manufacturing tolerance, but the lowest I’ve seen on a new battery. It can be a bit of a lottery.)
The built-in battery health monitor will just say “all good” until it isn’t. AccuBattery has allowed me to monitor every percentage of degradation over the lives of my last few phones.
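For the curious, the trick behind that kind of measurement is basically coulomb counting. A minimal sketch of the idea, with made-up numbers:

```python
# Compare how many mAh flowed in during a partial charge against how
# many the battery-percentage change says *should* have flowed in.
# This is the general idea apps like AccuBattery use; numbers invented.

DESIGN_CAPACITY_MAH = 5000  # from the spec sheet (assumed)

def estimate_health(mah_measured: float, percent_gained: float) -> float:
    """Estimated real capacity as a fraction of design capacity."""
    implied_full_capacity = mah_measured / (percent_gained / 100)
    return implied_full_capacity / DESIGN_CAPACITY_MAH

# Example: charging from 20% to 80% only took 2790 mAh of charge.
health = estimate_health(mah_measured=2790, percent_gained=60)
print(f"Estimated battery health: {health:.0%}")  # -> 93%
```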
Isn’t OnePlus one of the brands that has its own fast-charging tech, that’s extra fast?
Makes total sense if they traded in longevity for speed.
Well, I’d start with physical buttons. Forget stuff like Face ID. A button that scans your fingerprint is a lot simpler to “get”. Same goes for volume keys.
Automatic screen brightness is pretty good, but if it weren’t a thing, buttons would work there. That’s how laptops do it.
I’d add a feature that makes certain settings reset to “default” after a configurable amount of time (or never). Airplane mode or mute could turn off overnight, so grandma can never “disable” her phone and become unreachable, or unable to reach anyone. (Except by turning it off, a concept almost no-one has to be taught.)
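A sketch of how that auto-reset could work (hypothetical structure, not any real phone API):

```python
# Each "dangerous" toggle carries an expiry, and a periodic job flips
# anything stale back to its default. All names here are invented.
import time

RESET_AFTER_S = {                 # None = never auto-reset
    "airplane_mode": 8 * 3600,    # turns itself off after 8 hours
    "mute": 8 * 3600,
    "wifi": None,
}
DEFAULTS = {"airplane_mode": False, "mute": False, "wifi": True}

state = dict(DEFAULTS)
changed_at: dict[str, float] = {}

def set_toggle(name: str, value: bool) -> None:
    state[name] = value
    changed_at[name] = time.time()

def reset_stale_toggles(now: float | None = None) -> None:
    now = now or time.time()
    for name, ttl in RESET_AFTER_S.items():
        if ttl is None or state[name] == DEFAULTS[name]:
            continue
        if now - changed_at.get(name, now) >= ttl:
            state[name] = DEFAULTS[name]  # grandma is reachable again

set_toggle("airplane_mode", True)
reset_stale_toggles(now=time.time() + 9 * 3600)  # simulate "overnight"
print(state["airplane_mode"])  # False -- reset while she slept
```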
Give me the ability to disable quick settings in the notification shade; grandma doesn’t need to toggle NFC, Wi-Fi, or her data connection, or start a screen recording (I literally tried to remove all the quick settings, but there’s a minimum!). Hell, get rid of the notification shade completely and make it a physical button that just opens your messages from WhatsApp, SMS and email, all in one list.
I don’t think we need to dumb down everything a phone can do. And I think we can assume an elderly person can get help with changing settings or setting the phone up to begin with. As such, what I wish for is for the simple stuff to be even simpler, and for the complicated stuff to be hidden away behind what are essentially configurable child locks, so it can’t be touched except by someone who knows what it does.
It should be possible to put a device in a mode where it is “senile-proof”. But it isn’t. My grandmother can, and has, put her devices in a state where they do not work, simply by turning on airplane mode without realizing. And our current solution is to use Life360, so we can check that her phone is still “online” and have someone visit her to fix it if it isn’t.
I’ve done it over phone many times. I have a system.
I have them read whatever is on screen until I figure out what they’re looking at.
Then I use one of my own devices to follow along, so I have an idea of what they’re seeing, so I can give extremely specific instructions.
Sounds ok.
But limiting. My grandma is still able to learn and think.
She currently uses a tablet and a phone. Android, set up by me, and locked down as much as possible.
One home screen, with the apps she wants on one half of the screen, and a widget that shows notifications on the other half. (Limited to notifications from apps like WhatsApp; she doesn’t need to see that the phone updated the OS during the night.)
This way, all I had to do was tell her how the home button works and how the back button works. No explaining quick settings or the notification shade.
From there, she’s slowly learned each app, always safe in knowing she can hit home/back if confused, and take it from the beginning.
The notification widget has been especially good, as it is always there showing her her messages, and she can tap them to go straight to replying.
It’s infuriating to me that all modern devices require extra steps, just to see messages you’ve received. The way a message would be shown on the lock screen and then be “gone” upon unlocking the screen was infinitely confusing to her.
I’ve never lost patience with my grandma like that. She’s old, a sweet person (most of the time) and perfectly intelligent if you let her be.
In fact when guiding her with tech, I hate the way she calls herself stupid and slow when she makes mistakes.
We just don’t make tech for old people the way we should. There are “accessible” phones but the ones I’ve had experience with are atrocious hackjobs with deal-breaking quirks, when the whole point is to be simple.
Nextcloud was my thought as well.
Calendar, file storage, and social features. All fairly integrated once you get it set up.
The hurdle would be that setup: getting the right apps installed to enable the calendar and social features you want.
And I’ve not really seen anything that can do calendars better than Nextcloud.
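And the calendar part is standard CalDAV, so pretty much any client can talk to it. A minimal sketch using the Python caldav library; the server URL and credentials are placeholders:

```python
# Connect to a Nextcloud instance's CalDAV endpoint and list calendars.
# "cloud.example.com" and the login details are placeholders.
import caldav

client = caldav.DAVClient(
    url="https://cloud.example.com/remote.php/dav",  # Nextcloud's DAV root
    username="grandma",
    password="app-password",  # use an app password, not the real one
)

principal = client.principal()
for calendar in principal.calendars():
    print(calendar.name)  # every calendar synced to the account
```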
I’d be happy to advise on that, OP.
Yeah… I’m not gonna be asking the stuff I already found answers to via an internet search.
Uuh. That is exactly how games work.
And that’s completely normal. Every modern game has multiple versions of the same asset at various detail levels, all of which are used. And when you choose between “low, medium, high”, that doesn’t mean there’s a giant pile of assets that go unused. The game will use them all, rendering a different version of an asset depending on how close to it you are. The settings often just change how far away the game will render at the highest quality before it starts to drop down to lower LODs (levels of detail).
That’s why games aren’t much smaller on console, for example. They’re not including a bunch of unnecessary assets for different PC graphics settings. The LODs are all part of how modern games work.
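The selection logic itself is dead simple. A toy Python sketch, with made-up distance thresholds:

```python
# Distance-based LOD selection. Real engines do this per-mesh every
# frame; the "graphics settings" mostly just scale the thresholds.

LODS = ["lod0_full_detail", "lod1_reduced", "lod2_low", "lod3_billboard"]
BASE_THRESHOLDS = [10.0, 40.0, 120.0]  # metres, made-up numbers

def pick_lod(distance: float, quality_scale: float = 1.0) -> str:
    """quality_scale > 1.0 keeps high detail visible further away,
    which is roughly what bumping 'low' to 'high' settings does."""
    for lod, threshold in zip(LODS, BASE_THRESHOLDS):
        if distance < threshold * quality_scale:
            return lod
    return LODS[-1]

print(pick_lod(15.0))                     # default  -> "lod1_reduced"
print(pick_lod(15.0, quality_scale=2.0))  # "high"   -> "lod0_full_detail"
```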
“Handling that in the code” would still involve storing it all somewhere after “generation”, the same way shaders are better generated in advance, lest you get a stuttery mess.
And it isn’t how most games do things, even today. Such code does not exist. Not yet, at least. Human artists produce better results, and hence games ship with every version of every asset.
Finally automating this is what Unreal’s Nanite system has only recently promised to do, but it has run into snags.
CSAM is against their terms of use. Afaik they remove it using both automated systems and manual review.
Games can’t really compress their assets much.
Stuff like textures generally uses a lossless bitmap format. The compression artefacts you get with lossy formats, while unnoticeable to the human eye, can cause much more visible rendering artefacts once the game engine goes to calculate how light should interact with the material.
That’s not to say devs couldn’t be more efficient, but it does explain why games don’t really compress that well.
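Here’s a toy example of why: lighting math amplifies small errors. A ~2% wobble in a surface normal, the kind a lossy codec might call invisible, produces a much larger relative change in shading (pure Python, made-up numbers):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Diffuse lighting term: dot(N, L), clamped at zero."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

light = normalize((1.0, 1.0, 0.2))  # a fairly glancing light angle

true_normal = normalize((0.0, 0.0, 1.0))
# ~2% per-channel error, the kind a lossy codec might introduce:
decoded_normal = normalize((0.02, 0.02, 0.98))

print(f"original shading: {lambert(true_normal, light):.3f}")   # ~0.140
print(f"decoded shading:  {lambert(decoded_normal, light):.3f}")  # ~0.169
# The ~2% input error became a ~20% shading error at this angle.
```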
Aren’t a lot of the 2.5" ones already mostly empty space inside?
How big, and how expensive, would a 3.5" SSD be, if it actually filled enough of the space with NAND chips for the form factor to be warranted?
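Back-of-envelope, with every figure assumed for illustration (package sizes, densities, and usable board area are guesses, not real product specs):

```python
# Rough estimate: how much NAND could a genuinely full enclosure hold?

# Usable board area (mm^2), very roughly, per drive bay:
AREA_2_5 = 100 * 70      # 2.5" drive footprint (assumed)
AREA_3_5 = 147 * 102     # 3.5" drive footprint (assumed)

PACKAGE_AREA = 14 * 18   # one BGA NAND package, mm^2 (assumed)
TB_PER_PACKAGE = 2       # high-density package (assumed)
SIDES = 2                # chips on both sides of the PCB

def max_capacity_tb(board_area: float) -> float:
    # Assume ~60% of board area usable after controller, DRAM, power.
    packages = (board_area * 0.6) // PACKAGE_AREA
    return packages * SIDES * TB_PER_PACKAGE

print(f'2.5": ~{max_capacity_tb(AREA_2_5):.0f} TB')   # ~64 TB
print(f'3.5": ~{max_capacity_tb(AREA_3_5):.0f} TB')   # ~140 TB
```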
Even in the most generous scenarios, cryopreservation of a person has to be done quickly, lest the brain decay.
Current methods can’t achieve it even under ideal conditions, let alone once the person has been dead for days.
What do you mean “but”?
This doesn’t produce anything. It removes jobs instead of creating them. And by the end there is one less company in the system.
I wrote that in response to you saying this is what they “should” be doing, and that it would either work or not.
But this is working, sustainable businesses being butchered for their value on the meat market rather than operated long term.
It most certainly isn’t what they “should” be doing.
If this is the best way to make money, the rich will continue to do it instead of starting new companies. That is not going to have pleasant long-term effects on the world.
Superpowered lying is already a thing, and all we needed was demographic data and context control.
Today, it is possible to get a population to believe almost anything. Show them the right argument, at the right time, in the right context, and they believe it. Facebook and Google have scaled exactly that up into their main sources of revenue.
Same goes for attention hacking. AI-generated content designed to hook viewers functions in entirely predictable and fairly well-understood ways. And the same goes for the algorithms that “recommend” additional content based on what someone is watching.
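The core of “watch this next” really isn’t mysterious. A toy sketch of the collaborative-filtering idea, with made-up data:

```python
# Recommend whatever similar users engaged with, weighted by how
# much their history overlaps with yours. Toy data, crude similarity.

WATCH_HISTORY = {
    "alice": {"outrage_clip", "cat_video", "conspiracy_doc"},
    "bob":   {"outrage_clip", "conspiracy_doc", "rant_stream"},
    "carol": {"cat_video", "cooking_show"},
}

def recommend(user: str, k: int = 2) -> list[str]:
    seen = WATCH_HISTORY[user]
    scores: dict[str, int] = {}
    for other, history in WATCH_HISTORY.items():
        if other == user:
            continue
        overlap = len(seen & history)      # crude user similarity
        for item in history - seen:        # things they saw, you didn't
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['rant_stream', 'cooking_show']
```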
As for why doctors can’t do things AIs are pulling off, I’d suggest that’s because current systems are using indicators we don’t know about, which they aren’t sentient enough to explain. If they could, I have no doubt a human doctor, given enough time, could learn about and detect such indicators.
There is no evidence that what these models are doing is “beyond our scale of thinking”.
But again, I do think the machine will be faster.
Current models display “emergent capabilities”, as in abilities we don’t know about before the model is created and tested. But once it is created, we can and have figured out what it is doing and how.
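That kind of probing is routine, too. A toy example: fit a tiny model on data where only one input actually matters (standing in for some subtle indicator humans hadn’t looked at), then read the learned weights to see which indicator it discovered. Numpy only, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Three measurements; only the third actually predicts the outcome.
X = rng.normal(size=(n, 3))
y = (X[:, 2] + 0.1 * rng.normal(size=n) > 0).astype(float)

# Logistic regression via plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

print(np.round(w, 2))  # the third weight dominates -> we can point at
                       # exactly which indicator the model relies on
```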