Fediverse is worse than Reddit. Mod abuse, admin abuse, disinformation, and people simping for literal terrorists.

  • 0 Posts
  • 65 Comments
Joined 11 months ago
Cake day: January 3rd, 2024

  • Oh, so you don’t want an AI government, but an AI voter. That’s probably even worse, to be honest.

    Won’t that be ideal? That would mean this LLM inherently knows your choices and beliefs, aside from the huge increase in processing needed.

    Only if it was trained on me and only me personally. But that would make me what we in German call a “Gläserner Mensch”, gläsern coming from Glas, as in being a transparent person — a metaphor used in privacy discussions. I’d have to lay myself open to an immense amount of data hoarding to create a robot that may or may not decide like I would. Aside from the terrible privacy violations & implications this would entail for every single person, it would also just be a snapshot of current me. Humans change over time. Our experiences and our perception of the world around us form and change us, constantly, and with that our decision making.

    But coming back to the privacy issue… We already have huge problems on that front. Companies hoard massive amounts of user data, usually through thinly veiled consent in those little checkbox agreements, or they now just do it outright illegally when it comes to their LLMs, where they tend to scrape everything on the internet regardless of consent or copyright. I think the whole LLM topic is one that should go nowhere until we have a globally agreed framework of regulations on how we want to handle these and future technologies. If you build an LLM from all the data on the internet, then such models should inherently be free and open source, including everything they create. That’d be the only agreeable term in my book. Whether true AI in the future would even rely on data scraping is another topic though.


  • Advances? First we’d have to actually invent it. Text LLMs are just word prediction, and generative models in general are neither intelligent nor do they have much room left to grow at this point. And aside from that, every model is only as good as the training data it was trained on. If you train a model on smut and romance novels, you have your perfect little eRP model for kinky chats; if you train it on various programming languages, you have a decent coding assistant; if you train it on Reddit, you have an insufferable racist edgelord who wants to see the world burn. Point being, models are flawed in every sense of the word. All their word predictions ultimately trace back to what humans have written in the past.

    All their word predictions also carry an inherent randomness due to how LLMs generate text (there’s a toy sketch of this below), making their output unreliable — and that includes even the best and largest models with access to the largest databanks & indexes out there. But then again, the biggest flaw is that they are not actually AI. They have no thoughts of their own, and they don’t really evaluate things against various factors. They just follow their simple programming of mimicking language, without being aware of anything.

    If you want a computer like this to run your politics, go right ahead, but you already have to ask yourself: which model do we use? Based on what data, since every dataset is inherently biased? How often can we re-roll / regenerate an answer until we like its outcome? Who has oversight over it? Because ultimately that person is the decision maker. Politicians, for all their flaws, are still intelligent human beings who can be reasoned with. A computer can’t really be swayed, not in the classical sense. You can sway a chatbot easily, because it typically uses your chat history as context for its own output. That is inherently flawed, because the existing chat history steers the future responses, and it is also severely limited, because large context sizes require vast amounts of VRAM / RAM and processing power. That’s why current models are more or less at their limit, barring some optimizations: you can’t just scale them up, because their energy requirements grow far faster than the quality of their actual text output.
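    To make the two mechanics above concrete, here’s a minimal, self-contained Python sketch (no real LLM involved): next-token sampling with temperature, which is where the randomness comes in, and a back-of-the-envelope estimate of KV-cache memory, which is part of why long contexts eat so much VRAM. The vocabulary, the hard-coded scores, and the model dimensions are all made-up illustrative assumptions, not any real model’s numbers.

```python
import math
import random

# Toy "model": given a context string, return unnormalized scores (logits)
# for a handful of candidate next words. A real LLM computes these with a
# neural network over tens of thousands of tokens; here they are hard-coded.
def toy_logits(context: str) -> dict[str, float]:
    if context.endswith("the sky is"):
        return {"blue": 4.0, "clear": 2.5, "falling": 0.5, "green": -1.0}
    return {"the": 1.0, "a": 0.8, "and": 0.5, "of": 0.4}

def sample_next_word(context: str, temperature: float = 1.0) -> str:
    # Softmax with temperature: lower temperature -> sharper distribution,
    # higher temperature -> more random picks. Sampling is why the same
    # context can yield a different word on every run.
    logits = toy_logits(context)
    scaled = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(scaled.values())
    r, cumulative = random.random(), 0.0
    for word, weight in scaled.items():
        cumulative += weight / total
        if r <= cumulative:
            return word
    return word  # fallback for floating-point rounding at the edge

# The chat history is just text prepended to the context: earlier turns
# steer later ones because the model conditions on everything before.
history = "User: finish this sentence. Assistant: the sky is"
print([sample_next_word(history) for _ in range(5)])

# Rough KV-cache memory estimate: it grows linearly with context length,
# per layer and per attention head, which is one reason long contexts are
# expensive. All dimensions below are invented for illustration.
layers, heads, head_dim = 32, 32, 128
context_len = 8192
bytes_per_value = 2  # fp16
kv_tensors = 2       # one K and one V cache per layer
cache_bytes = layers * kv_tensors * context_len * heads * head_dim * bytes_per_value
print(f"KV cache ~ {cache_bytes / 1e9:.1f} GB per sequence")
```

    Running it a few times shows different completions for the same context (the randomness point), and the last line prints a multi-gigabyte cache figure for a single 8k-token sequence under these assumed dimensions (the VRAM point).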

    TLDR: “AI” is just overhyped corporate marketing for something that comes down to word prediction, fueled by sensationalist media scaremongering from people who don’t understand how LLMs work. Using them for decision making would just hand power to the shadowy person who oversees the model, and to the flawed biases of its training data.