Lee Duna@lemmy.nz to Technology@lemmy.world · English · 10 months ago
Reddit's licensing deal means Google's AI can soon be trained on the best humanity has to offer — completely unhinged posts
www.businessinsider.com
FaceDeer@kbin.social · 10 months ago: Negative examples are just as useful to train on as positive ones.
MelodiousFunk@startrek.website · 10 months ago: That’s what she said.
Rustmilian@lemmy.world · 10 months ago (edited): The AI is either going to be a horny, redpilled, schizophrenic, and sociopathic egomaniac that wants to kill everyone and everything, or a devout, highly empathetic nun that believes in world peace and diversity.
wise_pancake@lemmy.ca · 10 months ago: They’ll tell it to be polite, helpful, and always racially diverse, so there’s no way it can be any of those things.
Rustmilian@lemmy.world · 10 months ago (edited): That heavily depends on how well they train it and whether they make any mistakes. Consider the true story of ChatGPT2.0.
wise_pancake@lemmy.ca · 10 months ago: I’ll have to look at that later; that video sounds promising! I was just joking because the default prompts don’t magically remove bias or offensive content from the models.
OpenStars@startrek.website · 10 months ago: Why not both? Life… ah, finds a way.