• 2 Posts
  • 33 Comments
Joined 1 year ago
Cake day: August 2nd, 2023

  • IMO that’s way off base. People want change. They know they’re getting screwed, and the grifter is promising change. He’s lying and I think most people know that, but the fact that they’d take a convicted felon over what the DNC offered up is a crushing repudiation.

    Bernie would’ve mopped the floor with Trump, because he also offers change. Someone like Obama would’ve too, even though there was a paucity of actual change during his terms.

    We need to drag the DNC kicking and screaming off of the corporate dick it’s sucking, and get it left enough to offer real change, and people will vote for it in droves.

  • The whole “it’s just autocomplete” line is just a comforting mantra. A sufficiently advanced autocomplete is indistinguishable from intelligence. LLMs provably have a world model, just like humans do. They build that model by experiencing the universe via the medium of human-generated text, which is much more limited than human sensory input but has already allowed for some very surprising behavior.

    We’re not seeing diminishing returns yet, and we’re going to see some interesting stuff happen as we start hooking up sensors and cameras as direct input, instead of having these models build their world model indirectly through text alone. Let’s see what happens in 5 years or so before declaring that there are diminishing returns.


  • Gary Marcus should be disregarded because he’s emotionally invested in The Bitter Lesson being wrong. He really wants LLMs to not be as good as they already are. He’ll find some interesting research along the lines of “here’s a limitation we found” and turn that into “LLMS BTFO IT’S SO OVER”.

    The research is interesting for helping improve LLMs, but that’s the extent of it. I would not be worried about the limitations the paper found for a number of reasons:

    • There doesn’t seem to be any reason to believe that there’s a ceiling on scaling up
    • LLMs’ reasoning abilities improve with scale (notice that for the kiwi example they included answers from o1-mini and llama3-8B, which are much smaller models with much more limited capabilities; GPT-4o got the problem correct when I tested it, without any special prompting techniques or anything)
    • Techniques such as RAG and Chain of Thought help immensely on many problems (rough sketches of both below, after this list)
    • Basic prompting techniques help, like “Make sure you evaluate the question to ignore extraneous information, and make sure it’s not a trick question” (folded into the chain-of-thought sketch below)
    • LLMs are smart enough to use tools. They can go “Hey, this looks like a math problem, I’ll use a calculator”, just like a human would (sketch below)
    • There’s a lot of research happening very quickly here. For example, LLMs improve at math when you use a different tokenization method, because it changes how the model “sees” the problem (see the tiktoken example below)

    Until we hit a wall and really can’t find a way around it for several years, this sort of research falls into “huh, interesting” territory for anybody who isn’t a researcher.
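
    To make the RAG point concrete, here’s a minimal sketch using the OpenAI Python client. The embedding model is real, but the document snippets and the query are placeholders I made up:

    ```python
    # Minimal RAG sketch: embed a few docs, retrieve the closest one by
    # cosine similarity, and stuff it into the prompt as context.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Toy corpus; in practice these would be chunks of your own documents.
    docs = [
        "Kagi is a paid search engine with fediverse search support.",
        "Chain-of-thought prompting asks the model to reason step by step.",
        "GSM-Symbolic perturbs math word problems to test LLM reasoning.",
    ]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    doc_vecs = embed(docs)
    query = "How does chain-of-thought prompting work?"
    q_vec = embed([query])[0]

    # Cosine similarity, then pick the best-matching doc as context.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = docs[int(np.argmax(sims))]

    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {query}"}],
    )
    print(answer.choices[0].message.content)
    ```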
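
    And here’s the chain-of-thought plus basic-prompting sketch. The kiwi wording is my paraphrase of the paper’s example, not a verbatim quote:

    ```python
    # Chain-of-thought prompting: the system message tells the model to reason
    # step by step and to ignore extraneous information (the trick in the
    # kiwi problem is that "smaller than average" changes nothing).
    from openai import OpenAI

    client = OpenAI()

    question = (
        "Oliver picks 44 kiwis on Friday and 58 on Saturday. On Sunday he "
        "picks double what he picked on Friday, but five of them were a bit "
        "smaller than average. How many kiwis does Oliver have?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Think step by step. Evaluate the question to ignore "
                    "extraneous information, and make sure it's not a trick "
                    "question."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)
    ```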
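
    For tool use, a toy sketch via OpenAI function calling. The “calculator” tool and its schema are made up for illustration; the point is that the model itself decides this is a math problem and asks for the tool:

    ```python
    # Toy tool-use loop: offer a calculator tool, let the model request it,
    # run it, and feed the result back for the final answer.
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "calculator",
            "description": "Evaluate a basic arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What is 44 + 58 + 44 * 2 - 5?"}]
    first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = first.choices[0].message

    if msg.tool_calls:  # the model chose to use the calculator
        call = msg.tool_calls[0]
        expr = json.loads(call.function.arguments)["expression"]
        result = eval(expr, {"__builtins__": {}})  # toy evaluator; never eval untrusted input for real
        messages.append(msg)  # echo the assistant's tool request back
        messages.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})
        final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        print(final.choices[0].message.content)
    ```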
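
    And a quick look at why tokenization matters for math, using tiktoken with cl100k_base (the GPT-4 tokenizer):

    ```python
    # Digit strings split into uneven chunks under cl100k_base, so the model
    # doesn't "see" aligned digits the way column arithmetic assumes.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for s in ["1234567", "1,234,567", "12345 + 67890"]:
        tokens = enc.encode(s)
        print(s, "->", [enc.decode([t]) for t in tokens])
    # e.g. "1234567" may come out as ["123", "456", "7"]: chunks that are
    # misaligned with the place values a human lines up when adding.
    ```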

  • m_f@midwest.social (OP) to Mildly Infuriating@lemmy.world · Google now requires JavaScript
    1 month ago

    Sometimes, yeah. My default is DDG, and I also use Kagi, but Google is still good at some stuff. Guess I’ll take the hit and just stop using it completely, though. Kagi has been good enough, and it also lets me search the fediverse to find that dank meme I saw last week. Google used to be able to do that too, but I assume it couldn’t shove as many ads into those queries, so they dropped the feature.