https://mediabiasfactcheck.com/orinoco-tribune-bias-and-credibility/
Overall, we rate Orinoco Tribune as extreme left biased and questionable due to its consistent promotion of anti-imperialist, socialist, and Chavista viewpoints. We rate them low factually due to their strong ideological stance, selective sourcing, the promotion of propaganda, and conspiracy theories related to the West.
The Orinoco Tribune has a clear left-leaning bias. It consistently supports anti-imperialist and Chavista perspectives (those who supported Hugo Chavez). The publication critiques U.S. policies and mainstream media narratives about countries opposing U.S. influence. Articles frequently defend the Venezuelan government and criticize opposition movements and foreign intervention.
Articles and headlines often contain emotionally charged language opposed to the so-called far-right of Venezuela, as in this example: “Far Right Plots to Sabotage Venezuela’s Electrical System in Attempt to Disrupt the Electoral Process.” The story is translated from another source and lacks hyperlinked sourcing to support its claims.
Maybe don’t consider a pro-Maduro propaganda rag as a legitimate source for a conflict he’s directly involved in.
Maduro is a man who ordered his country to block Signal and social media, and who arrests, imprisons, and bans his political opposition. He has also expressed strong support for Russia’s invasion of Ukraine, while the citizens of his country have been starving for years under what is literally known as the “Maduro Diet” and the middle class has vanished. He long ago forfeited his right to the benefit of the doubt. He is a despot who has repeatedly falsified election results after mismanaging the country for years, and who calls his opposition fascists while being a fascist himself. That the people overwhelmingly want him gone is not some hegemonic plot by the evil West; it’s the natural consequence of his actions.
You’re entirely correct, but in theory they can give it a pretty good go; it just requires far more computation, developer time, and non-LLM data structures than these companies are willing to pay for. For any single query, they’d have to collect dozens if not hundreds of separate responses from additional LLM instances spun up on the side, many of them customized for specific subjects, plus specialty engines such as Wolfram Alpha for anything directly requiring math.
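A minimal sketch of that routing idea: queries that are pure arithmetic go to an exact math engine (standing in for something like Wolfram Alpha), and everything else falls through to a model call. Everything here is hypothetical illustration — `stub_llm` is a placeholder for what would be a network call to a real model.

```python
import ast
import operator

# Supported arithmetic operators for the exact engine.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def math_engine(expr: str):
    """Evaluate a plain arithmetic expression exactly via the AST —
    the model is never asked to do the math itself."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval"))

def stub_llm(query: str) -> str:
    # Hypothetical stand-in for an actual LLM API call.
    return f"LLM response to: {query}"

def answer(query: str) -> str:
    # Route: try the exact engine first; anything it rejects goes to the model.
    try:
        return str(math_engine(query))
    except (ValueError, SyntaxError):
        return stub_llm(query)
```

The point is that the dispatcher, not the model, decides which specialist handles each query.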
LLMs in such a system would be used only as modules in a handcrafted algorithm, modules that do exactly what they’re good at in a way that is useful. For example, if you pass a specific context to an LLM with the right format of instructions and then ask it a yes-or-no question, even very small and lightweight models often give the same answer a human would. In this way, human-readable text can be converted into binary switches for an algorithmic state machine with thousands of branches of pre-written logic.
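A toy version of that “LLM as binary switch” pattern: the model gets a context plus a yes-or-no question, and its free-text reply is coerced into a boolean that drives pre-written branching logic. `call_llm` is a hypothetical stand-in for a real model API, stubbed here with a keyword heuristic so the example runs on its own.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real system would send this prompt to a small, cheap model.
    context = prompt.split("Question:")[0].lower()
    question = prompt.split("Question:")[-1].lower()
    if "refund" in question:
        return "yes" if "refund" in context else "no"
    if "technical" in question:
        return "yes" if "error" in context else "no"
    return "no"

def llm_switch(context: str, question: str) -> bool:
    """Ask a yes-or-no question about the context; return the answer as a bool."""
    prompt = (
        "Answer with exactly 'yes' or 'no'.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt).strip().lower().startswith("yes")

def route_ticket(ticket: str) -> str:
    # The branching logic is handwritten; the model only flips the switches.
    if llm_switch(ticket, "Is the customer asking for a refund?"):
        return "billing"
    if llm_switch(ticket, "Is this a technical problem?"):
        return "support"
    return "general"
```

With thousands of such switches wired into a state machine, the model never generates the final output; it only answers narrow classification questions the algorithm poses.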
Not only would this probably use an even more insane amount of electricity than the current approach of “build a huge LLM and let it handle everything directly”, it would also take much longer to generate responses to novel queries.