• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 14th, 2023




  • rho50@lemmy.nz to Technology@beehaw.org · But Claude said tumor!

    I don’t think it’s necessarily a bad thing that an AI got it wrong.

    I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model and is fundamentally not fit for use as a diagnostic tool. Not even a screening/aid tool for physicians.

    There are AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.



  • Exactly. So the organisations creating and serving these models need to be clearer that they’re not general-purpose intelligence; they’re contextual language generators.

    I’ve seen demos of the models used as actual diagnostic aids, and they’re not LLMs (plus require a doctor to verify the result).


  • There are some very impressive AI/ML technologies that are already in use as part of existing medical software systems (think: a model that highlights suspicious areas on an MRI, or even suggests differential diagnoses). Further, other models have been built and demonstrated to perform extremely well on sample datasets.

    Funnily enough, those systems aren’t using language models 🙄

    (There is Google’s Med-PaLM, but I suspect it wasn’t very useful in practice, which is why we haven’t heard anything since the original announcement.)



  • I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone’s large bowel as “likely to be an aggressive malignancy,” leading the person to fully expect they’d be dead by July, when in fact they were perfectly healthy.

    These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.

    The misinformation is causing real harm.


  • I saw a job posting for a Senior Software Engineer position at a large tech company (not Big Tech, but high profile and widely known) that required candidates to have “an excellent academic track record, including in high school.” A lot of these requirements feel deliberately arbitrary, more like an effort to thin the herd than a filter for good candidates.


  • Looks like a very cool project, thanks for building it and sharing!

    Based on the formula you mentioned here, it sounds like an instance with one user who has posted at least one comment will have a maximum score of 1. Presumably the threshold would usually be set to greater than 1, to catch instances with lots of accounts that have never commented. (I’ve sketched my reading of this at the end of this comment.)

    This has given me another thought though: could spammers not just create one instance per spam account? If you own something like blah.xyz, you could in theory create ephemeral spam instances on subdomains and blast content out using those (e.g. [email protected], [email protected], etc.)

    Spam management on the Fediverse is sure to become an interesting issue. I wonder how practical the instance blocking approach will be - I think eventually we’ll need some kind of portable “user trustedness” score.
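
    Just to make my reading of the score concrete, here’s a rough sketch (Python, with a guessed formula of accounts divided by comments and made-up numbers, so treat it as an illustration of my interpretation rather than of how your tool actually works):

    ```python
    # Hypothetical sketch only: assumes the instance score is simply
    # (registered accounts) / (total comments), which may not be the project's real formula.

    def instance_spam_score(num_accounts: int, num_comments: int) -> float:
        """Accounts-per-comment ratio; higher means more suspicious."""
        if num_comments == 0:
            # No comments at all: treat as maximally suspicious.
            return float("inf")
        return num_accounts / num_comments

    def flag_suspicious(instances: dict[str, tuple[int, int]], threshold: float = 20.0) -> list[str]:
        """instances maps hostname -> (accounts, comments); returns hostnames above the threshold."""
        return [
            host
            for host, (accounts, comments) in instances.items()
            if instance_spam_score(accounts, comments) > threshold
        ]

    # A single-user instance with at least one comment can never score above 1,
    # which is the "maximum score of 1" behaviour described above.
    example = {
        "small.example": (1, 37),      # one active user, many comments -> well below 1
        "bots.example": (50_000, 12),  # lots of accounts, almost no comments -> flagged
    }
    print(flag_suspicious(example))    # ['bots.example']
    ```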