• 1 Post
  • 19 Comments
Joined 8 months ago
Cake day: February 4th, 2024


  • Yeah. I’m thinking more along the lines of research and open models than anything to do with OpenAI. Fair use, above all else, generally requires that the derivative work not threaten the economic viability of the original, and that’s categorically untrue of ChatGPT/Copilot, which are marketed and sold as products meant to replace human workers.

    The clean-room development analogy is definitely one I can get behind, but it raises further questions since LLMs are multi-stage. Technically, only the tokenization stage will “see” the source code, which is a bit like a “clean room” from the perspective of subsequent stages. When does something stop being just a list of technical requirements and veer into infringement? I’m not sure that line is so clear.
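
    To make the staging concrete, here’s a deliberately toy sketch (nothing like a real BPE tokenizer) of why everything downstream of tokenization only ever sees integer IDs rather than the original source text:

    ```python
    # Toy tokenizer: maps whitespace-separated tokens to integer IDs.
    # Downstream training stages would only ever see these IDs, never
    # the original (possibly GPL'd) source text itself.
    def toy_tokenize(source, vocab):
        ids = []
        for token in source.split():
            if token not in vocab:
                vocab[token] = len(vocab)  # assign the next free ID
            ids.append(vocab[token])
        return ids

    vocab = {}
    print(toy_tokenize("def add(a, b): return a + b", vocab))
    # [0, 1, 2, 3, 4, 5, 6] -- the source text is gone from here on
    ```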

    I don’t think the generative copyright question is so straightforward, since the model requires a human agent to generate the input even if the output is deterministic. I know, for example, Microsoft’s Image Generator says that the images fall under Creative Commons, which is distinct from public domain given that some rights are withheld. Maybe that won’t hold up in court forever, but Microsoft’s lawyers seem to think it’s a bit more nuanced than “this output can’t be copyrighted”. If it’s not subject to copyright, then what product are they selling? Maybe the court agrees that LLMs and monkeys are the same (i.e., that neither’s output is copyrightable, as with the monkey-selfie case), but I’m skeptical that will happen considering how much money these tech companies have poured into it and how much the United States seems to bend over backwards to accommodate tech monopolies and their human rights violations.

    Again, I think commercial entities using their market position to eliminate the need for artists and writers is clearly against the spirit of copyright and intellectual property, but I also think there are genuinely interesting questions when it comes to models that are themselves open source or non-commercial.


  • For example, if I ask it to produce Python code for addition, which GPL’d library is it drawing from?
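
    To make that concrete, the kind of completion you’d typically get is something this generic, far too trivial to trace back to any one licensed source:

    ```python
    # Typical assistant output for "add two numbers in Python":
    # an expression of a basic idea, not a copy of any particular library.
    def add(a, b):
        """Return the sum of two numbers."""
        return a + b

    print(add(2, 3))  # 5
    ```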

    I think it’s clear that the fair use doctrine no longer applies when OpenAI turns it into a commercial code assistant, but then it gets a bit trickier when used for research or education purposes, right?

    I’m not trying to be obtuse-- I’m an AI researcher who is highly skeptical of AI. I just think the imperfect compression that neural networks use to “store” data is a much less clear-cut case than copy/pasting code wholesale.

    Would you agree that somebody reading source code and then reimplementing it (assuming no reverse engineering or copying of proprietary source code) would not violate the GPL?

    If so, then the argument that these models infringe on rights holders seems to hinge on the claim that their exact work was reproduced verbatim without attribution or license compliance. This surely happens sometimes, but is not, in general, a thing these models are capable of, since they’re using lossy compression to “learn” the model parameters. As an additional point, it would then be straightforward to comply with DMCA requests using any number of published “forced forgetting” methods.
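
    As a hypothetical illustration of the “forced forgetting” idea (gradient-ascent unlearning is one published family of methods; real approaches are more careful), here’s a toy sketch on a linear model:

    ```python
    import numpy as np

    # Toy "forced forgetting" sketch: take gradient *ascent* steps on the
    # loss over the forget set, nudging the weights away from that data.
    rng = np.random.default_rng(0)
    w = rng.normal(size=3)  # our "model": a linear regressor's weights

    def grad(w, X, y):
        # Gradient of mean squared error of X @ w against y
        return X.T @ (X @ w - y) / len(y)

    # Hypothetical samples subject to a takedown request
    X_forget = rng.normal(size=(8, 3))
    y_forget = X_forget @ np.array([1.0, -2.0, 0.5])

    lr = 0.1
    for _ in range(10):
        w += lr * grad(w, X_forget, y_forget)  # ascent, not descent
    ```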

    Then, that raises a further question.

    If I as an academic researcher wanted to make a model that writes code using GPL’d training data, would I be in compliance if I listed the training data and licensed my resulting model under the GPL?

    I work for a university and hate big tech as much as anyone on Lemmy. I am just not entirely sure the GPL makes sense here. GPLv3 was written because GPLv2 had loopholes that Microsoft exploited, and I suspect their lawyers are pretty informed on the topic.


  • I hate big tech too, but I’m not really sure how the GPL or MIT licenses (for example) would apply. LLMs don’t really memorize stuff like a database would, and there are certain (academic/research) domains that would almost certainly fall under fair use. LLMs aren’t capable of storing the entire training set, though I admit there are surely edge cases where stuff is reproduced verbatim.
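
    A quick back-of-the-envelope calculation shows why verbatim storage can’t be the norm (the figures here are assumptions for illustration, not any specific model):

    ```python
    # Assumed, illustrative numbers: a ~7B-parameter model trained on ~2T tokens.
    params = 7e9            # parameter count
    bits_per_param = 16     # fp16 weights
    tokens = 2e12           # training tokens
    bits_per_token = 15     # rough information content per token

    model_bits = params * bits_per_param
    data_bits = tokens * bits_per_token
    print(f"data is ~{data_bits / model_bits:.0f}x larger than the weights")
    # ~268x: the training set simply doesn't fit, so "storage" must be lossy
    ```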

    I’m not advocating for OpenAI by any means, but I’m genuinely skeptical that most copyleft licenses have any stake in this. There’s no static linking or source code distribution happening. Many basic algorithms don’t fall under copyright, and, in practice, Stack Overflow code is copy/pasted all the time without being released under any special license.

    If your code is on GitHub, it really doesn’t matter what license you provide in the repository – you’ve already agreed to allow any user to “fork” it for any reason whatsoever.


  • My Lord. Type “Halliburton oil Ukraine” into Google Maps and look at the god-damned oil field that’s owned by a US company. Or look at how the US has had record natural gas exports every year since 2014.

    Or look up how US weapons weren’t given to Ukraine; they were sold, and those loans must be paid back. Britain and Russia didn’t pay back their WW2 lend-lease debts until 2006.

    The US is making a killing on this conflict. Ukrainians are also dying for their nation, but two things can be true at the same time. Anyone who isn’t a moron can see that.



  • lol. nah, homey. it’s a 7-month-old account from the Reddit exodus. Ukraine discovered natural gas in Crimea in 2014, threatening to box Russia out of the EU energy market, which is why it’s an existential threat for Russia if Ukraine pivots west. Consequently, Russia invaded. Here is a Wikipedia article that discusses the Shell-backed exploration of the gas field off the coast of Crimea.

    https://en.m.wikipedia.org/wiki/Skifska_gas_field

    Who the fuck said anything about supporting Russia? Genocidal regimes focused on the extraction of fossil fuels are fucked-- whether you’re talking Saudi Arabia, the US, or Russia. For the love of god, please educate yourself about the hypocrisy of US foreign policy. I’m really sorry that you can’t see beyond the binary worldview you’ve hinted at here.

    If I were a pro-Russia bot, why would I post this on an article I posted mocking Iranian hackers?

    Do you not realize that Iran is selling drones to Russia to bomb Ukraine?

    Do you not realize that Iran only has said drone technology because they captured a US drone and reverse engineered it? Seems like if the ill-informed war on terror never happened, Iran wouldn’t have drones and Russia wouldn’t be using them in Ukraine to murder civilians. The world is way more complicated than the black and white worldview you’ve alluded to.

    Fuck Putin, fuck the US Department of Defense, and fuck the police (just for good measure). Is that clear enough for you?


  • Yeah. Agreed. I just thought the Iraq war would be a great example because it is now pretty universally recognized as a major fuck up in US policy that led to the deaths of around a million people and the eventual rise of the Islamic State. I mean, if you look up civilian casualty numbers in Ukraine vs Iraq and account for the shorter timeline (at least since the 2022 invasion of Ukraine), the US is worse than Russia by a wide margin. And let’s not talk about Gaza, or how the US funnelled arms and funds to right-wing nationalist groups in the former Yugoslavia before bombing civilian infrastructure in Belgrade in the name of preventing the genocide that the US helped to inflame and escalate. Hell, the shells that were dropped on Sarajevo in 1996 by Serbian militias were given/sold to them by the US during the Cold War, and the US also sold arms to nationalist separatist groups that are themselves accused of acts of genocide.


  • Oh. Yeah. I work as a systems guy on AI-related tasks, but I have a strong mathematical background (my degrees are all in math), so I understand the AI stuff well. I just find the management of clusters and reliability engineering far more interesting than getting a computer to hallucinate nonsense. Anyway, the systems people are always dunking on the AI people for not knowing the basics of software, like using ssh, setting up a firewall, or using version control. We say things like, “yeah, but remember that the AI guys are the users, so we have to make it idiot proof” and “what’s the difference between malware and a neural network? Not much, but one only runs on Nvidia.”

    Previously, I was the platform engineer for a self-driving car project owned by a very large vehicle OEM. I would never get in a Tesla using “Full Self-Driving” and neither would any of my colleagues, then or now. By the way, that self-driving project collapsed because a very capable car manufacturer that produces more vehicles in a day than Tesla produces in a year realized it would never work at the consumer level, and we had the benefit of half a million dollars’ worth of military-grade localization equipment, while Tesla is trying to get by on webcams and magic.

    Hell, you can pull up countless peer-reviewed papers in the AI field that don’t have error bars or any statistical hypothesis testing at all, authored by labs at MIT or Facebook. It’s terrifyingly bad.
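
    For reference, the missing rigor amounts to a few lines. Here’s a sketch with made-up accuracy numbers: report mean +/- standard deviation across seeds, then run a Welch t-test before claiming one model beats another:

    ```python
    import numpy as np
    from scipy import stats

    # Made-up accuracies from five random seeds for two models
    model_a = np.array([0.81, 0.79, 0.83, 0.80, 0.82])
    model_b = np.array([0.78, 0.80, 0.77, 0.79, 0.78])

    # Error bars: mean +/- sample standard deviation
    print(f"A: {model_a.mean():.3f} +/- {model_a.std(ddof=1):.3f}")
    print(f"B: {model_b.mean():.3f} +/- {model_b.std(ddof=1):.3f}")

    # Hypothesis test before claiming A beats B; here p < 0.05,
    # so the difference is unlikely to be seed-to-seed noise.
    t, p = stats.ttest_ind(model_a, model_b, equal_var=False)
    print(f"Welch t-test: t = {t:.2f}, p = {p:.4f}")
    ```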

    tl;dr AI people are generally the least competent people I’ve met in my field-- which is AI.