• 0 Posts
  • 82 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • Even after Automattic acquired it, the site continued to lose money at a rate of $30 million each year, the company’s CEO Matt Mullenweg had said.

    I still wanna know what they’re spending all that money on, because I’m sure it’s not developers or even servers. The idea that they can only be profitable if they’re constantly growing their user numbers is an investor mindset that’s doomed to fail eventually, and it’s why so many social media sites are crashing right now.

  • Being entitled to equal rights doesn’t mean they actually get them. It also doesn’t account for the fact that many Palestinians are denied citizenship or remain in occupied territories that are controlled by Israel and explicitly not guaranteed equal rights.

    The comprehensive report, Israel’s Apartheid against Palestinians: Cruel System of Domination and Crime against Humanity, sets out how massive seizures of Palestinian land and property, unlawful killings, forcible transfer, drastic movement restrictions, and the denial of nationality and citizenship to Palestinians are all components of a system which amounts to apartheid under international law. This system is maintained by violations which Amnesty International found to constitute apartheid as a crime against humanity, as defined in the Rome Statute and Apartheid Convention.

    source



  • But simply knowing the right words to say in response to a moral conundrum isn’t the same as having an innate understanding of what makes something moral. The researchers also reference a previous study showing that criminal psychopaths can distinguish between different types of social and moral transgressions, even as they don’t respect those differences in their lives. The researchers extend the psychopath analogy by noting that the AI was judged as more rational and intelligent than humans but not more emotional or compassionate.

    This brings about worries that an AI might just be “convincingly bullshitting” about morality in the same way it can about many other topics without any signs of real understanding or moral judgment. That could lead to situations where humans trust an LLM’s moral evaluations even if and when that AI hallucinates “inaccurate or unhelpful moral explanations and advice.”

    Despite the results, or maybe because of them, the researchers urge more study and caution in how LLMs might be used for judging moral situations. “If people regard these AIs as more virtuous and more trustworthy, as they did in our study, they might uncritically accept and act upon questionable advice,” they write.

    Great, so the headline of the article directly feeds into the very issue the scientists are warning about when it comes to public perception of AI morality.