• trolololol@lemmy.world
    7 months ago

    If you have too many “slow” nodes in a supercomputer, you hit a performance ceiling where everything is bottlenecked by the speed of things that are not the CPU: memory, disk for swap, and the network carrying partial results between nodes for further computation.
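    A toy sketch of that ceiling (all numbers invented for illustration): take Amdahl-style scaling and add a fixed communication cost per node. Past some node count, the non-CPU overhead dominates and adding nodes makes things slower.

```python
# Hypothetical model: fraction `parallel_frac` of the work splits across
# nodes, the rest is serial, and each node adds `comm_cost` of network
# overhead per unit of work. All constants are made up.

def speedup(n_nodes, parallel_frac=0.95, comm_cost=0.002):
    """Speedup over one node, with per-node communication overhead."""
    serial = 1.0 - parallel_frac
    total = serial + parallel_frac / n_nodes + comm_cost * n_nodes
    return 1.0 / total

for n in (1, 10, 100, 1000):
    print(f"{n:5d} nodes -> {speedup(n):.1f}x")
```

    With these made-up constants the speedup peaks somewhere in the tens of nodes and then degrades: at 1000 nodes the communication term alone exceeds the entire single-node runtime.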

    Source: I’ve hung around too much with people doing PhD theses on these kinds of problems.

    • BilboBargains@lemmy.world
      7 months ago

      I would imagine it’s very difficult to make a universal architecture, but if I have learnt anything about computers it’s that the manufacturers of software and hardware deliberately create opaque and monolithic systems, e.g. phones. They cynically insert barriers to reuse and redeployment. There’s no profit motive for corporations to make infinitely scalable computers. Short-sighted greed is a much more plausible explanation.

      • trolololol@lemmy.world
        7 months ago

        When you get to write and benchmark your own code, you’ll see that technology has limits and how they impact you.

        You can have as many Raspberry Pis as you want, but you’ll get faster computation by spending the same budget on Xeons with dozens of MB of cache, hundreds of GB of RAM, and gigabit network cards.
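        Rough back-of-the-envelope version of that argument (every price and throughput figure here is invented, just to show the shape of the comparison):

```python
# Made-up budget comparison: many cheap boards vs a few big servers.
# Prices and GFLOPS figures are invented placeholders, not real specs.

BUDGET = 50_000  # dollars

pi   = {"price": 80,     "gflops": 10}
xeon = {"price": 10_000, "gflops": 2_000}

def cluster_gflops(node, budget=BUDGET):
    """Raw aggregate compute the budget buys, ignoring all overheads."""
    return (budget // node["price"]) * node["gflops"]

print("rpi  cluster:", BUDGET // pi["price"], "nodes,",
      cluster_gflops(pi), "GFLOPS raw")
print("xeon cluster:", BUDGET // xeon["price"], "nodes,",
      cluster_gflops(xeon), "GFLOPS raw")
# Even before counting network costs, the few big nodes win here; and
# the Pi cluster must also push every partial result across hundreds
# of slow links, while the Xeons keep data in large caches and RAM.
```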

        10 years from now those Xeons will look like RPis compared to the best your money can buy.

        All of those things have to fit in a building, not on a desk. The best supercomputers look like Google’s data centers, but their specific needs dictate several tweaks done by very smart people. Supercomputers are meant to solve 1 problem with 1 data set at a time, not 100 problems with 1,000,000 data sets/people profiles at a time, which are much easier to partition and assign to just a thousandth of your data center each.
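        A toy contrast of those two workload shapes (all timings invented): independent jobs split cleanly, so doubling nodes halves the time indefinitely; a tightly coupled solve must exchange boundary data every step, so no amount of nodes gets it below the communication floor.

```python
# Hypothetical timing model. t_job, t_step and t_exchange are made-up
# constants chosen only to illustrate the scaling behaviour.

def independent_jobs_time(n_jobs, n_nodes, t_job=1.0):
    """Embarrassingly parallel: nodes just take an equal share of jobs."""
    return t_job * n_jobs / n_nodes

def coupled_solve_time(n_steps, n_nodes, t_step=1.0, t_exchange=0.1):
    """Tightly coupled: every step computes, then all nodes exchange
    neighbour data, so each step pays t_exchange no matter what."""
    return n_steps * (t_step / n_nodes + t_exchange)

print(independent_jobs_time(1_000_000, 1000))  # scales with nodes
print(coupled_solve_time(1000, 1000))          # floored by exchanges
```

        With these constants the coupled solve can never finish faster than n_steps * t_exchange, however many nodes you throw at it, which is why supercomputer networks get so much engineering attention.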