Shoulda let Sherman continue his march and torch the whole thing down. Reconstruction was a mistake.
No, just a sociopath decided to fuck with this one person’s life.
It produces somewhat passable mediocrity, very quickly, when used directly for such things. The stories it writes from the simplest of prompts are always shallow and full of clichés (and over-represented words like “delve”). Getting it to write good prose basically requires breaking writing, the activity, down into a stream of tiny constituent tasks and then treating the model like the machine it is. And this hack generalizes to other tasks, too, including writing code. It isn’t alive. It isn’t even thinking. But if you treat these things as rigid robots getting specific work done, you can make them do real things. The problem is that this asks experts to do all the labor of hyper-segmenting the work and micromanaging the robot. Doing that is actually more work than just asking the expert to do the task themselves. It is still a very rough tool. It will definitely not replace the intern just yet. At least my interns submit code changes that compile.
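To make that concrete, here is a rough sketch of what “micromanaging the robot” looks like for code. This is just an illustration, assuming any OpenAI-compatible API; the model name, prompts, and example task are all placeholders:

```python
# Sketch: one big "write this module" prompt replaced by three tiny,
# mechanical steps, each a separate model call with pinned context.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

task = "a function that parses ISO-8601 durations into seconds"

# Step 1: pin down a spec before any code is written.
spec = ask(f"Write a terse spec (inputs, outputs, edge cases) for {task}.")

# Step 2: implement against that spec only, nothing else.
code = ask(f"Implement this spec in Python. Output only code.\n\n{spec}")

# Step 3: a separate, narrow review pass instead of trusting step 2.
review = ask(
    f"List concrete bugs in this code relative to the spec.\n\n"
    f"Spec:\n{spec}\n\nCode:\n{code}"
)
print(review)
```

Note the expert still has to design the decomposition and check every output, which is exactly the labor cost being described.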
Don’t worry, human toil isn’t going anywhere. All of this stuff is super new and still comparatively useless. Right now, the early adopters are mostly remixing what has worked reliably; we have yet to see truly novel applications. What you will see in the near future is lots of “enhanced” products that you can talk to, whether you want to or not. The human jobs lost to the first wave of AI automation will likely be in the call center. Important industries such as agriculture are already so hyper-automated that it will take an enormous investment to close the remaining 2%. Many, many industries will be that way, even after AI. And for a slightly more cynical take: human labor will never go away, because having power over machines isn’t the same as having power over other humans. We won’t let computers make us all useless.
You’re aware Linux basically runs the world, right?
Billions of devices run Linux. It is an amazing feat!
You should stop. The Wikimedia Foundation has all the money it needs to fund Wikipedia in perpetuity; the endowment target was met years and years ago. Your money is being spent on parasitic non-profit management-class nonsense.
https://en.m.wikipedia.org/wiki/User:Guy_Macon/Wikipedia_has_Cancer
This is a solvable problem. Just train a LoRA of the Alice character. For modifications to the character you might also need to train more LoRAs, but again, totally doable. Then at runtime you are just swapping LoRAs in and out when you need to generate.
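A minimal sketch of that runtime shuffle using the LoRA adapter API in diffusers (requires peft installed). The base model here is real, but the LoRA paths and adapter names are hypothetical stand-ins for whatever you train:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load each trained LoRA once, under its own adapter name.
pipe.load_lora_weights("path/to/alice_lora", adapter_name="alice")          # hypothetical path
pipe.load_lora_weights("path/to/alice_armor_lora", adapter_name="armor")    # hypothetical path

# Base character only.
pipe.set_adapters(["alice"], adapter_weights=[1.0])
img1 = pipe("alice walking through a foggy forest").images[0]

# Character plus a modification: stack adapters and blend their weights.
pipe.set_adapters(["alice", "armor"], adapter_weights=[0.8, 0.6])
img2 = pipe("alice in battle armor, dramatic lighting").images[0]
```

Swapping adapters is cheap compared to reloading the whole pipeline, which is what makes the per-generation shuffle practical.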
You’re correct that it will struggle to give you exactly what you want, because you need to have some “machine sympathy.” If you think in smaller steps and get the machine to do those smaller, more doable steps, you can eventually accomplish the overall goal. It is the difference between asking a model to write a story and asking it to first generate characters, a scenario, and a plot, then using that as context to write just a small part of the story. The first story will be bland and incoherent after a while. The second, through better context control, will weave you a pretty consistent story.
These models are not magic (even though it feels like it). That they follow instructions at all is amazing, but they simply will not grasp the nuance of the overall picture and accomplish it unaided. If you think of them as natural language processors capable of simple, mechanical tasks and drive them mechanistically, you’ll get much better results.
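Here is a sketch of that staged approach, assuming any OpenAI-compatible client; the model name, premise, and prompt wording are all made up for illustration:

```python
# Sketch: each stage's output becomes pinned context for the next call,
# so the model never has to hold the whole story in its head at once.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable instruction-following model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

premise = "a lighthouse keeper who hears a voice in the fog"

characters = ask(f"Create two characters for a short story about {premise}.")
scenario = ask(f"Given these characters, sketch the setting and central conflict:\n{characters}")
plot = ask(f"Outline a five-beat plot.\nCharacters:\n{characters}\nScenario:\n{scenario}")

# Only now ask for prose, and only a small piece of it.
scene = ask(
    f"Write only the opening scene (~300 words), following beat 1.\n"
    f"Characters:\n{characters}\nScenario:\n{scenario}\nPlot:\n{plot}"
)
print(scene)
```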
Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW” having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.
I, too, work in fintech, and I agree with this analysis. That said, we currently have a large mishmash of regexes doing classification, and they aren’t bulletproof. It would be useful to try something like a fine-tuned BERT model to classify the transactions that pass through the regex net without getting classified. And the PoC would be just context-stuffing some examples into a few-shot prompt for an LLM with a constrained grammar (just the classification, plz). Our finance generalists basically have to do this same process by hand, and it would be nice to augment their productivity with a hint: “The computer thinks it might be this kinda transaction.”
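A rough cut of that PoC. The labels, few-shot examples, and model name are invented for illustration, and the “constrained grammar” is approximated here by rejecting any output outside the label set; a real version would constrain decoding directly:

```python
# Sketch: few-shot LLM fallback classifier for transactions the regex net missed.
from openai import OpenAI

LABELS = ["payroll", "vendor_payment", "refund", "chargeback", "unknown"]  # hypothetical taxonomy

FEW_SHOT = """Classify the transaction into exactly one label from:
payroll, vendor_payment, refund, chargeback, unknown

Transaction: "ACH CREDIT GUSTO PAY 8842" -> payroll
Transaction: "STRIPE REFUND ord_19xA" -> refund
Transaction: "{desc}" ->"""

client = OpenAI()

def classify(desc: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any instruction-following model
        messages=[{"role": "user", "content": FEW_SHOT.format(desc=desc)}],
        max_tokens=5,
        temperature=0,
    )
    label = resp.choices[0].message.content.strip().lower()
    # Poor man's constrained grammar: anything outside the label set is rejected.
    return label if label in LABELS else "unknown"

# Only unclassified leftovers get this hint for the finance generalists:
print(classify("WIRE OUT ACME STAFFING LLC"))
```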
Leading to either having to carefully double-check what it suggests, or having to fix bugs in code that I wrote but didn’t actually write.
100% this. A recent update from JetBrains turned on the AI shitcomplete (I guess my org decided to pay for it). Not only is it slow af, but in trying it, I discovered that I have to fight the suggestions because they are just wrong. And what is terrible is that I know my coworkers will definitely use it, and I’ll be stuck fixing their low-skill shit that is now riddled with subtle AI shitcomplete. The tools are simply not ready, and anyone who tells you they are does not have the skill or experience to back up that assertion.
This take is so naive. You really think advertisers will give up their current, rich sources of data for Mozilla’s watered-down crap? Given the current market share, no one is going to pay a premium for this little data. Or do you think the people who came up with everything creep.js does to track you will suddenly grow some ethics and stop, just because Mozilla is selling my data in aggregate? Not only is this a dumb idea that won’t even work (like just about every other non-browser thing they have tried), but they also felt selling my data was within their rights.
Mozilla Corp was never entitled to my data to sell in aggregate, nor to staying in business as a for-profit.
Computers to the rescue. AI succinctification:
Here’s a distilled version of the article:
Russian Psyops: Poisoning Online Communities
Russia has developed an effective set of online tactics, using cheap and widely available methods to manipulate public opinion and sow discord. Their goal is to create an environment where no online space feels safe or trustworthy.
Tactics:
Consequences:
The Threat:
Russia’s online psyops campaign is a real and significant threat to global democracy and community cohesion. By recognizing this threat and taking steps to mitigate its effects, we can work towards preserving the integrity and safety of online spaces.