trevor (he/they)

Hello, tone-policing genocide-defender and/or carnist 👋

Instead of being mad about words, maybe you should think about why the words bother you more than the injustice they describe.

Have a day!

  • 0 Posts
  • 118 Comments
Joined 2 years ago
Cake day: June 10th, 2023

  • That isn’t how this works. Herd immunity is extremely important for people who actually can’t safely be vaccinated, but more importantly, having large populations of anti-vaxxers will create ample opportunity for the viruses to mutate into forms that we don’t necessarily have vaccines for.

    We are all truly fucked unless we deal with these scum. It’s not just going to hurt the people causing the problem; it will spread to the rest of the world.

  • Yes, because private property is theft. But unequal enforcement of copyright law is worse. Right now, LLMs are just lying machines trained on pirated data, and the companies that run them are acting with impunity for doing something a normal person would be put in jail for.

    Copyright is immoral, but as long as it exists, the laws should be extra strict on companies that steal others’ works.

  • The relevant parts of the comment thread were about the claim that the model is open source. Below, you will find the subject of the comments bolded, for your better understanding of the conversation at hand:

    Deepseek is a Chinese AI company that released Deepseek R1, a direct competitor to ChatGPT.

    You forgot to mention that it’s open source.

    Is it actually open source, or are we using the fake definition of “open source AI” that the OSI has massaged into being so corpo-friendly that the training data itself can be kept a secret?

    many more inane comments…

    And your most recent inane comment…

    That’s something they included, just like open source games include content. I would not say that the model itself (DeepSeek-V3) is open source, but the tech is. It is such an obvious point that I should not have to state it.

    Well, cool. No one ever claimed that “the tech” was not included or that parts of their process were not open sourced. You answered a question that no one asked. The question was whether the model itself is actually open source. No one has been able to substantiate the claim that the model is open source, which has made talking to you a giant waste of time.

  • They did not release the final model without the data

    They literally did exactly that. Show me the training data. If it has been provided under an open source license, then I’ll revise my statement.

    You literally cannot create a useful LLM without the training data. That is a part of the framework used to create the model, and they kept that proprietary. It is a part of the source. This is such an obvious point that I should not have to state it.

  • You can also fork proprietary code that is source available (depending on the specific terms of that particular proprietary license), but that doesn’t make it open source.

    Fair point about llama not having open weights, though. So it’s not as proprietary as llama. It still shouldn’t be called open source if the training data it needs to function is proprietary.