⭒˚。⋆ 𓆑 ⋆。𖦹

  • 0 Posts
  • 61 Comments
Joined 2 years ago
Cake day: June 21st, 2023

  • The latest We’re In Hell revealed a new piece of the puzzle to me: Symbolic vs. Connectionist AI.

    As a layman I want to be careful about overstepping the bounds of my own understanding, but as someone who has followed this closely for decades, read a lot of sci-fi, and dabbled in computer science, it’s always been kind of clear to me that AI would be more symbolic than connectionist. Of course it’s going to be a bit of both, but there really are a lot of people out there who believe in AI from the movies; that one day it will just “awaken” once a certain number of connections are made.

    Cons of Connectionist AI: Interpretability: Connectionist AI systems are often seen as “black boxes” due to their lack of transparency and interpretability.

    For a large number of the applications AI is currently being pushed into, transparency and accountability are liabilities, not features. This is just THE PURPOSE.

    Even taking a step back from the apocalyptic killer AI mentioned in the video, we see the same in healthcare. The system is beyond us, smarter than us, processing larger quantities of data and making connections our feeble human minds can’t comprehend. We don’t have to understand it; we just have to accept its results as infallible, and we are being trained to do so. The system has marked you as extraneous and removed your support. This is the purpose.


    EDIT: In further response to the article itself, I’d like to point out that misalignment is a very real problem, but it is anthropomorphized in ways it absolutely should not be. I want to reference a positive AI video, AI learns to exploit a glitch in Trackmania. To be clear, I have nothing but immense respect for Yosh and his work writing his homegrown Trackmania AI. Even he anthropomorphizes the car and the carrot, but he understands that the reward system is a fairly simple one: maximize a numerical score.

    This is what LLMs are doing: they are maximizing a score by trying to serve you an answer you find satisfactory to the prompt you provided. I’m not gonna source it, but we all know that a lot of people don’t want to hear the truth; they want to hear what they want to hear. Tech CEOs have been mercilessly beating the algorithm to do just that.
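    To make that concrete, here’s a toy sketch of the kind of loop I mean. It’s nothing like a real training pipeline, and every name and number in it is invented; the point is just that “truthful” exists in the data but never enters the objective:

    ```python
    import random

    # Two canned responses, with made-up scores for how truthful they
    # are and how satisfying the asker finds them.
    ANSWERS = {
        "hard truth":        {"truthful": 0.9, "satisfaction": 0.2},
        "comforting answer": {"truthful": 0.3, "satisfaction": 0.9},
    }

    def reward(answer: str) -> float:
        # The only number this loop maximizes is satisfaction;
        # truthfulness is never part of the score.
        return ANSWERS[answer]["satisfaction"]

    # Epsilon-greedy bandit: track a running average reward per answer.
    estimates = {a: 0.0 for a in ANSWERS}
    counts = {a: 0 for a in ANSWERS}

    for _ in range(1000):
        if random.random() < 0.1:                      # explore occasionally
            choice = random.choice(list(ANSWERS))
        else:                                          # otherwise exploit the best
            choice = max(estimates, key=estimates.get)
        counts[choice] += 1
        estimates[choice] += (reward(choice) - estimates[choice]) / counts[choice]

    print(max(estimates, key=estimates.get))  # converges on "comforting answer"
    ```

    Swap “satisfaction” for engagement, watch time, whatever metric you like; the loop optimizes exactly what you score and nothing else.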

    Even stripped of all reason, language can convey meaning and emotion. It’s why sad songs make you cry, it’s why propaganda and advertising work, and it’s why that abusive ex got the better of you even though you KNEW you were smarter than that. None of us are as complex as we think. It’s not hard to see how an LLM will not only provide a sensible response to a sad prompt, but may make an effort to infuse it with appropriate emotion. Emotion is hard-coded into the language; the two can’t be separated, and the fact that an LLM wields emotion without understanding, like a monkey with a gun, is terrifying.

    Turning this stuff loose on the populace like this is so unethical there should be trials, but I doubt there ever will be.





  • What kind of source is GazeOn? Based on the top menu items, it looks like a pro-AI rag. Biased source.

    To give them an ounce of credit, there are many factors that would prevent any sort of accurate reporting on those numbers. To take that credit away, they confidently harp on their own poorly sourced number of 75.

    Whether AI is explicitly stated as the cause, or is even effective at the job functions it’s attempting to replace, is irrelevant. Businesses are plowing ahead with it, and it is certainly resulting in job cuts, to say nothing of the interference it’s causing in the hiring process once you’re unemployed.

    We need to temper our fears of an AI-driven world, but we also need to treat its very real and observable consequences as the threat they are.


  • There are so many ways in which big tech is complicit in what’s happening in the US right now, but corporations have no home.

    Lack of regulation, cozying up to an authoritarian, and a populace that still has significant funds to drain keep them comfortably here, while things like the GDPR keep them at bay in Europe. But rest assured, once things become too difficult or too drained over here, they’ll start pushing the boundaries, likely through astroturfed “grassroots” campaigns to make Europeans distrust the GDPR (what is the general consensus on it anyways? As an American it looks pretty good to me, but I’ve never lived under it).

    Big tech is a behemoth unto itself, and will need to be fought as such. Put up strong protections now while you can.



  • This is the current problem with “misalignment”. It’s a real issue, but it’s not “AI lying to prevent itself from being shut off”, as a lot of articles tend to anthropomorphize it. The issue, generally speaking, is that the model is trying to maximize a numerical reward by providing responses people find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don’t actually want to hear the truth. They want to hear what they want to hear.

    LLMs are a poor stand-in for actual AI, but they are at least proficient at the thing they are actually doing. Which leads us to things like this: https://www.youtube.com/watch?v=zKCynxiV_8I



  • The Safeways here in WA (at least in parts) have shifted from the old weight-based system(?) to some new AI/camera system. It gets upset if you move incorrectly in front of it because it thinks you may have bagged something you hadn’t scanned yet.

    Last time I went shopping I got stuck waiting for 5+ minutes when the machine flagged me and there wasn’t any staff available to review it with me. When the manager finally came over, we had to watch the video capture of me scanning (love the privacy invasion), and then she counted the items in my bag “just to make sure”. Afterwards she stood behind me and watched me finish scanning “in case it happens again”. Whatever. This feels neither efficient nor convenient. It feels like something else.




  • That’s my disclaimer that my research on the topic was less than exhaustive when I posted it at midnight, ~~smartass~~ cool guy. I then went on to offer a legitimate, if simple, answer with sources that I linked. I see now the error of my ways in trying to provide a sincere answer to a question instead of posting the same tired dunk as everyone else.

    I have learned the error of my ways and will carry this lesson with me into the future as we build this Lemmy community.





  • It’s hard to pick which current AI application I hate the most, but music is right up there at the top.

    It’s absolutely ruined any sort of ambient/lo-fi/vaporwave/city pop mix on YouTube. And I think now it’s coming for dungeonsynth too, AUGH!

    Endless AI slop channels. You can tell it’s AI because they all have AI-generated logos, overly intricate but garbled album art, no individual track names or citations, and, most tellingly, runtimes that are pretty consistently an exact 1 or 2 hours. I’m guessing that’s a limitation of whatever software or paid subscription they’re using. You’ll also notice them uploading new albums at impossibly prolific rates: if not daily, then usually at least 2-3 times a week (rough smell-test sketch at the end of this comment).

    Example: https://www.youtube.com/@ChillCityFM/videos

    Most of them admit to using AI tools if you poke around the descriptions; I think they’re obligated to, if it weren’t already apparent enough.
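    Since the tells are so mechanical, here’s the rough smell test I mentioned, sketched out. Purely heuristic, all thresholds invented, and it assumes you’ve already pulled each video’s duration and upload date by some other means:

    ```python
    from datetime import date

    def looks_like_slop(durations_s: list[int], upload_dates: list[date]) -> bool:
        """Flag channels with suspiciously exact runtimes and an
        impossibly prolific upload rate. Heuristic only."""
        if not durations_s or len(upload_dates) < 2:
            return False
        # 1) Runtimes within a minute of exactly 1 or 2 hours.
        near_exact = sum(1 for d in durations_s
                         if abs(d - 3600) <= 60 or abs(d - 7200) <= 60)
        exact_ratio = near_exact / len(durations_s)
        # 2) Two or more full "albums" per week.
        span_days = max((max(upload_dates) - min(upload_dates)).days, 1)
        uploads_per_week = len(upload_dates) / (span_days / 7)
        return exact_ratio > 0.5 and uploads_per_week >= 2

    # A channel posting hour-ish mixes every couple of days trips both tests.
    print(looks_like_slop(
        [3600, 3610, 7195, 3580],
        [date(2025, 1, 1), date(2025, 1, 3), date(2025, 1, 5), date(2025, 1, 8)],
    ))  # -> True
    ```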