While LLMs have been used for… a lot, this seems like one use where they’re not only reliable but actually appear to outperform existing methods of image compression. Being able to cram more data into less space tends to lead to interesting developments, so I’ll be keeping my eye on this.

What do you guys think? Seem like it’s deserving of less hype than I’m giving it? What kind of security holes do you think this could open?

  • EdgeOfToday@lemm.ee · 29 points · 1 year ago

    With a neural network, you wouldn’t be able to mathematically prove that the signal is perfectly recovered 100% of the time for all possible inputs, which you can do with PNG and FLAC. If you’re just listening to music and need a good compression ratio, then sure, it won’t be a big deal if a couple of bits are wrong. But that’s also why we have lossy compression: if the goal is to make signal degradation imperceptible to a human, then you could get a much better compression ratio using neural networks. If it’s truly critical that the signal isn’t corrupted, it would probably be better to just use the existing methods.

    • ZickZack@kbin.social · 27 points · 1 year ago

      That’s not what lossless data compression schemes do:
      In lossless compression, the general idea is to create a codebook of commonly occurring patterns and use those as shorthand.
      For example, one of the simplest (and now ancient) algorithms, LZW, does the following:

      • Initialize the dictionary to contain all strings of length one.
      • Find the longest string W in the dictionary that matches the current input.
      • Emit the dictionary index for W to output and remove W from the input.
      • Add W followed by the next symbol in the input to the dictionary.
      • Repeat.
        Basically, instead of rewriting long sequences, it just writes down the index into an existing dictionary of already-seen sequences (see the sketch below).
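      To make those steps concrete, here is a minimal toy LZW encoder (my own sketch, not from the article; it uses the 256 byte values as the initial dictionary and plain integers as output codes):

      ```python
      def lzw_compress(data: str) -> list[int]:
          """Toy LZW: returns a list of dictionary indices instead of a packed bitstream."""
          dictionary = {chr(i): i for i in range(256)}  # all strings of length one
          w, output = "", []
          for c in data:
              if w + c in dictionary:
                  w += c                                # keep extending the current match W
              else:
                  output.append(dictionary[w])          # emit the index for W
                  dictionary[w + c] = len(dictionary)   # add W + next symbol as a new entry
                  w = c
          if w:
              output.append(dictionary[w])
          return output

      text = "TOBEORNOTTOBEORTOBEORNOT"
      print(len(lzw_compress(text)), "codes for", len(text), "characters")
      ```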

      However, once this is done, you now need to find an encoding that takes your character set (the original characters + the new dictionary references) and turns it into bits.
      It turns out that we can do this optimally: using an algorithm called arithmetic coding, we can align the length of a bitstring with the amount of information it contains.
      “Information” here means the statistical concept of information, which depends on the inverse of the likelihood that a certain character is observed.
      Logically this makes sense:
      Let’s say you have a system that measures earthquakes. As one would expect, most of the time, say 99% of the time, it will report “no earthquake”, while in 1% of cases it will report “earthquake”.
      Since “no earthquake” is a lot more common, the information gain is relatively small (if I told you “the system said no earthquake”, you could have guessed that with 99% confidence: not very surprising).
      However, if I tell you “there is an earthquake”, that is much more surprising and therefore carries more information.

      From information theory (a branch of mathematics), we know that if we want to maximize the efficiency of our codec, we have to match the encoded length of every character to its information content. Arithmetic coding gives us a general way of doing this (see the worked numbers below).
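      As a quick sanity check on the earthquake example (my own back-of-the-envelope numbers, using the 99%/1% split from above):

      ```python
      import math

      # Hypothetical sensor: 99% "no earthquake", 1% "earthquake".
      probabilities = {"no earthquake": 0.99, "earthquake": 0.01}

      for symbol, p in probabilities.items():
          # Self-information in bits: -log2(p). Rare events carry more bits.
          print(f"{symbol!r}: {-math.log2(p):.2f} bits")
      # 'no earthquake': 0.01 bits, 'earthquake': 6.64 bits

      # Average bits per reading (the entropy) -- the best any lossless code can do:
      entropy = sum(-p * math.log2(p) for p in probabilities.values())
      print(f"entropy ≈ {entropy:.3f} bits per reading")  # ≈ 0.081
      ```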

      However, we can do even better:
      Instead of just considering individual characters, we can also add in character pairs!
      Of course, it doesn’t make sense to add in every possible character pair, but for some of them it makes a ton of sense:
      For example, if we want to compress English text, we could give a separate codebook entry to the entire sequence “the” and save a ton of bits!
      To do this for pairs of characters in the English alphabet, we have to consider 26*26 = 676 combinations.
      We can still do that: just scan the text 676 times.
      With 3-character combinations it becomes a lot harder: 26*26*26 = 17,576 combinations.
      But with 4 characters it’s practically impossible: you already have almost half a million combinations!
      In reality, this is even worse, since you have way more than 26 characters: you have things like ", . ? ! and your codebook IDs, which blow up the size even more! (A toy count of which pairs are actually worth keeping is sketched below.)
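      As a toy illustration of why only a few of those combinations matter (again my own sketch, not the article’s method):

      ```python
      from collections import Counter

      text = "the quick brown fox jumps over the lazy dog " * 100

      # Count every character pair (bigram) and triple (trigram) that actually occurs.
      bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
      trigrams = Counter(text[i:i + 3] for i in range(len(text) - 2))

      # Out of 676 possible letter pairs and 17,576 possible triples, only a handful
      # show up often -- those are the ones worth their own codebook entry.
      print(bigrams.most_common(5))
      print(trigrams.most_common(5))
      ```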

      So, how are we supposed to figure out which character pairs to combine and how many bits we should give them?
      We can try to predict it!
      This technique, called PPM (prediction by partial matching), is already very old (~1980s) but is still used in many compression algorithms.
      The important trick is that with deep learning, we can now train even more efficient estimators without losing the lossless property:
      remember, we only predict which things we want to combine and how many bits we want to assign to them!
      The worst-case scenario is that your compression gets worse because the model predicts nonsensical character combinations to store, but that never changes the actual information you store, just how close you get to the optimal compression. (A toy predictor-driven estimate is sketched below.)
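      Here is a toy version of that idea (my own sketch): an order-1 context model that predicts the next character from the previous one, plus the ideal code length an arithmetic coder would achieve with those predictions. A real codec would update the model adaptively so the decoder can rebuild the exact same predictions.

      ```python
      from collections import Counter, defaultdict
      import math

      def build_model(text: str) -> dict:
          """Order-1 model: for each character, count which characters follow it."""
          counts = defaultdict(Counter)
          for prev, nxt in zip(text, text[1:]):
              counts[prev][nxt] += 1
          return counts

      def ideal_bits(text: str, counts: dict) -> float:
          """Bits an arithmetic coder would need if driven by these predictions."""
          total = 0.0
          for prev, nxt in zip(text, text[1:]):
              ctx = counts[prev]
              total += -math.log2(ctx[nxt] / sum(ctx.values()))
          return total

      text = "the quick brown fox jumps over the lazy dog " * 50
      model = build_model(text)
      # A bad predictor only inflates this number; decoding stays exact either way.
      print(f"~{ideal_bits(text, model) / 8:.0f} bytes predicted vs {len(text)} bytes raw")
      ```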

      The state of the art in text compression has already used this for a long time (see the Hutter Prize); it’s just now getting to a stage where systems are fast and accurate enough to make this kind of compression useful for other domains / general-purpose compression.

      • firenzeleon@beehaw.org · 5 points · 1 year ago

        > That’s not what lossless data compression schemes do:

        Yes it is.

        You went into a lot of detail about how they do it, but it’s still what they do.

        • BFrizzleFoShizzle@lemmy.nz · 2 points · 1 year ago

          I think the main point they’re disagreeing with is this:

          > you wouldn’t be able to mathematically prove that the signal is perfectly recovered 100% of the time for all possible inputs

          They explain why you don’t need 100% accuracy - most compression codecs would only use the network for a prediction, which doesn’t actually have to be correct. It just has to be “more likely to be correct” than existing algorithms.

          If you want to read up more on the context of these prediction functions, the general class of compression algorithms you’d use for this is prediction-based (predictive) codecs. FLAC and arguably PNG are both predictive codecs (a minimal sketch of the idea is below).
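          For anyone curious, here’s a minimal sketch of that prediction-plus-residual idea (my own toy code, not the actual FLAC or PNG implementation): the predictor can be anything, even a neural network, and decoding is still bit-exact as long as the decoder runs the identical predictor.

          ```python
          from typing import Callable, List

          def encode(samples: List[int], predict: Callable[[List[int]], int]) -> List[int]:
              """Store only the difference between each sample and its prediction."""
              residuals, history = [], []
              for x in samples:
                  residuals.append(x - predict(history))  # good predictions -> small residuals
                  history.append(x)
              return residuals

          def decode(residuals: List[int], predict: Callable[[List[int]], int]) -> List[int]:
              """Exact inverse of encode: re-run the same predictor, add the residual back."""
              samples: List[int] = []
              for r in residuals:
                  samples.append(r + predict(samples))
              return samples

          def predict_prev(history: List[int]) -> int:
              return history[-1] if history else 0  # trivial "previous sample" predictor

          signal = [10, 12, 13, 13, 15, 20, 19]
          assert decode(encode(signal, predict_prev), predict_prev) == signal  # lossless
          ```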

    • astraeus@programming.dev · 16 points · 1 year ago

      Seems like another “hey, what if we used LLMs for this” scenario. It might be more effective, but exactly how many more resources are being used to make it do the same work as current compression algorithms? Effective doesn’t mean efficient, and I think for lossless applications efficiency is truly more important.

      • Butterbee (She/Her)@beehaw.org · 11 points · 1 year ago

        A LOT. You can barely run 13B-parameter models on a 24 GB graphics card, and the outputs are like a page or so of text. Translate that over to audio and it would have to be broken down into discrete chunks that the model could use as “prompts” to output a section of audio that fits into the model’s available output. It might compress better, but it would be exceedingly painful and slow to extract even on AI-focused cards. And it would use OODLES of watts to get just a little bit better than FLAC.

        • abhibeckert@beehaw.org · 1 point · edited · 1 year ago

          13B parameters works out to about 9 GB (rough numbers below). You need a bit more than that since it needs more than just the model in memory, but at 24 GB I’d expect at least half of it to go unused. And memory doesn’t use much power at all, by the way: LPDDR4 uses something like 0.3 watts while actively reading/writing.
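          For context, rough numbers for a 13B-parameter model at a few common precisions (my own back-of-the-envelope math; the ~9 GB figure corresponds to roughly 5–6 bits per parameter, i.e. a quantized model):

          ```python
          # Rough memory footprint; real files add overhead (metadata, KV cache, activations).
          params = 13e9
          for name, bits_per_param in [("fp16", 16), ("int8", 8), ("~5.5-bit quant", 5.5), ("4-bit quant", 4)]:
              gigabytes = params * bits_per_param / 8 / 1e9
              print(f"{name:>15}: ~{gigabytes:.1f} GB")
          # fp16 ~26 GB, int8 ~13 GB, ~5.5-bit ~8.9 GB, 4-bit ~6.5 GB
          ```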

          The actual computations use more, obviously, but graphics cards are not designed for this task, and while they’re fast, most of them are also horribly inefficient.

          I run 13B-parameter models on my ultra-portable laptop (which has a small battery, no active cooling (fanless), and no discrete GPU). It has 16 GB of RAM (not GPU memory, regular RAM), and I’m running a full operating system, web browsers, etc. at the same time. Models like Llama 2, Stable Diffusion, etc. get perfectly usable performance without using much battery at all (at a guess, single-digit watts while performing the calculations).

          There is efficient hardware now and there will be even more efficient hardware in the future. My laptop definitely isn’t designed to run these models and on top of that the models aren’t designed to run on a laptop either. There’s plenty more optimisation work to be done in the years to come.

          • Butterbee (She/Her)@beehaw.org · 1 point · 1 year ago

            Ok, it’s been a while since I tried running a language model, so I might have been thinking of the 30B models that were showing up at the time. The point remains, though, that the thing they were running would be well beyond generally available hardware and completely impractical for realtime use. Like… why would you do all that when FLAC and PNG are good enough? It is far cheaper and uses less power to accommodate the slightly less compressed files.

    • person594@feddit.de · 4 points · 1 year ago

      That isn’t really the case; while many neural network implementations make nondeterministic optimizations, floating-point arithmetic is in principle entirely deterministic, and it isn’t too hard to get a neural network to run deterministically if needed (one way to do that is sketched below). They are perfectly applicable for lossless compression, which is what is done in this article.
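      For example, in PyTorch (my assumption; the paper may use a different framework) you can pin down run-to-run nondeterminism with a few switches. Bit-exact results across different GPUs or library versions are a harder, separate problem:

      ```python
      import os
      os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # must be set before CUDA kernels run

      import torch

      torch.manual_seed(0)
      torch.use_deterministic_algorithms(True)  # error out instead of silently using nondeterministic ops
      torch.backends.cudnn.benchmark = False    # don't auto-pick different conv kernels per run
      ```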