Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • HeartyOfGlass@lemm.ee · +8/-1 · 19 hours ago

    My fantasy is for “everyone” to realize there’s absolutely nothing “intelligent” about current AI. There is no rationalization. It is incapable of understanding & learning.

    ChatGPT et al are search engines. That’s it. It’s just a better Google. Useful in certain situations, but pretending it’s “intelligent” is outright harmful. It’s harmful to people who don’t understand that & take its answers at face value. It’s harmful to business owners who buy into the smoke & mirrors. It’s harmful to the future of real AI.

    It’s a fad. Like NFTs and Bitcoin. It’ll have its die-hard fans, but we’re already seeing the cracks - it’s absorbed everything humanity’s published online & it still can’t write a list of real book recommendations. Kids using it to “vibe code” are learning how useless it is for real projects.

  • AsyncTheYeen@lemmy.world · +13/-1 · 23 hours ago

    People have negative sentiments towards AI under a capitalist system, where “most successful” equals “most profitable”, and that does not translate into what is most useful for humanity.

    We have the technology to feed everyone, and yet we don’t. We have the technology to house everyone, and yet we don’t. We have the technology to teach everyone, and yet we don’t.

    Capitalist democracy is not real democracy.

    • Randomgal@lemmy.ca · +2 · 20 hours ago

      This is it. People don’t have feelings for a machine. People have feelings for the system and the oligarchs running things, but said oligarchs keep telling you to hate the inanimate machine.

  • psion1369@lemmy.world · +14 · 1 day ago

    I want disclosure. I want a tag or watermark to let people know that AI was used. I want to see these companies pay dues for the content they used, in a similar vein to how we have to pay for higher learning. And we need to stop calling it AI as well.

  • Bytemeister@lemmy.world · +14/-1 · 1 day ago

    I’d like to have laws that require AI companies to publicly list their sources/training materials.

    I’d like to see laws defining what counts as AI, and then banning advertising non-compliant software and hardware as “AI”.

    I’d like to see laws banning the use of generative AI for creating misleading political, social, or legal materials.

    My big problems with AI right now are that we don’t know what info has been scooped up by these models, and that companies are pushing misleading products as AI, constantly overstating the capabilities and under-delivering, which will damage the AI industry as a whole. I’d also want to see protections to keep stupid and vulnerable people from believing AI-generated content is real. Remember, a few years ago we had to convince people not to eat Tide Pods. AI can be a very powerful tool for manipulating the ranks of stupid people.

  • Hemingways_Shotgun@lemmy.ca · +6 · 23 hours ago

    I don’t have negative sentiments towards A.I. I have negative sentiments towards the uses it’s being put towards.

    There are places where A.I. can be super exciting and useful; namely, places where the ability to quickly and accurately process large amounts of data can be critically life-saving, e.g. air traffic control, language translation, emergency response preparedness, etc.

    But right now it’s being used to paint shitty pictures so that companies don’t have to pay actual artists.

    If I had a choice, I’d say no AI in the arts; save it for the data processing applications and leave the art to the humans.

  • calcopiritus@lemmy.world · +8 · 1 day ago

    Energy consumption limit. Every AI product has a consumption limit of X GJ. After that, the server just shuts off.

    The limit should be high enough not to discourage research that would make generative AI more energy efficient, but low enough that commercial users would pay a heavy price for wasteful energy usage.

    Additionally, data usage consent for generative AI should be opt-in. Not opt-out.
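
    The cap being proposed can be sketched in a few lines. This is only an illustrative toy, assuming some metering hook reports joules consumed; the class name, the 5 GJ figure, and the `record` hook are all made up for the example, not part of any real billing API:

```python
class EnergyBudget:
    """Per-product energy cap, as proposed above (illustrative sketch)."""

    def __init__(self, limit_joules: float):
        self.limit = limit_joules  # the "X GJ" set by regulation
        self.used = 0.0

    def record(self, joules: float) -> None:
        # Called by whatever meters the servers' power draw (hypothetical hook).
        self.used += joules

    def allow_request(self) -> bool:
        # Once the budget is spent, the server just shuts off.
        return self.used < self.limit


budget = EnergyBudget(limit_joules=5e9)  # 5 GJ, an illustrative limit
budget.record(4.9e9)
print(budget.allow_request())  # still under the cap
budget.record(0.2e9)
print(budget.allow_request())  # over the cap: refuse further requests
```

    The research/commercial distinction the comment draws would then live in how `limit_joules` is chosen, not in this mechanism.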

    • CanadaPlus@lemmy.sdf.org · +2 · 1 day ago

      Out of curiosity, how would you define a product for that purpose? It’s pretty easy to tweak a few weights slightly.

      • calcopiritus@lemmy.world · +2 · 23 hours ago

        You can make the limit per-company instead, with big fines if you spin up thousands of shell companies to get around the law.

        • CanadaPlus@lemmy.sdf.org · +1 · edited · 13 hours ago

          Ah, so we’re just brainstorming.

          It’s hard to nail down “no working around it” in a court of law. I’d recommend carbon taxes if you want to incentivise saving energy with policy. Cap and trade is also seen as a gold standard option.

          • calcopiritus@lemmy.world · +1 · 12 hours ago

            Carbon taxes still allow you to waste as much energy as you want; they just make it more expensive. The objective is to put a hard limit on how much they are allowed to waste.

            I’m not a lawyer, and I don’t know how to write a law without possible exploits, but I don’t think it would be hard for an actual lawyer to draft one in this spirit that is not easily avoided.

  • Pulptastic@midwest.social · +11/-1 · 1 day ago

    Reduce global resource consumption with the goal of eliminating fossil fuel use. Burning natural gas to make fake pictures that everyone hates is just the worst.

  • Retro_unlimited@lemmy.world · +6 · 1 day ago

    I was pro AI in the past, but seeing the evil ways these companies use AI just disgusts me.

    They steal their training data, and they manipulate the algorithm to manipulate the users. It’s all around evil how the big companies use AI.

  • FuryMaker@lemmy.world · +1 · 17 hours ago

    Lately, I just wish it didn’t lie or make stuff up. And after you draw attention to false information, it often doubles down, or apologises and then just repeats the same BS.

    If it doesn’t know something, it should just admit it.

    • Croquette@sh.itjust.works · +2 · 17 hours ago

      LLMs don’t know when they are wrong. They just mimic how we talk; there is no conscious choice behind the words used.

      They simply try to predict which word to use next, trained on an ungodly amount of data.
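
      That next-word guessing can be illustrated with a toy model. The bigram table below is made up for the example and is nothing like a real LLM’s learned weights; the point is only that the sampling loop contains no step that checks whether the output is true:

```python
import random

# Toy "model": conditional probabilities of the next word given the
# previous one. Nothing here knows or checks facts.
BIGRAM_PROBS = {
    "the": {"sky": 0.6, "sea": 0.4},
    "sky": {"is": 1.0},
    "is": {"blue": 0.7, "green": 0.3},  # "green" can be sampled; the model cannot know it is wrong
}

def generate(start: str, steps: int, rng: random.Random) -> str:
    """Sample a continuation by repeatedly drawing the next word."""
    out = [start]
    for _ in range(steps):
        choices = BIGRAM_PROBS.get(out[-1])
        if not choices:
            break  # no known continuation
        words, weights = zip(*choices.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 3, random.Random(0)))
```

      A real LLM does the same thing with billions of parameters instead of a three-entry table, which is why fluency and correctness come apart.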

  • BackgrndNoize@lemmy.world · +9/-2 · 1 day ago

    Make it unprofitable for the companies peddling it: pass laws that curtail its use, sue them for copyright infringement, socially shame and shit on AI-generated anything on social media and in person, and vote with your money to avoid anything related to it.

  • Treczoks@lemmy.world · +21/-1 · 1 day ago

    Serious investigation into copyright breaches committed by AI creators. They ripped off images and texts, even whole books, without the copyright owners’ permission.

    If any normal person broke the law like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.

    I just ask for the law to be applied to all equally. What a surprising concept…

  • jjjalljs@ttrpg.network · +29 · 2 days ago

    Other people have some really good responses in here.

    I’m going to echo that AI is highlighting the problems of capitalism. The ownership class wants to fire a bunch of people and replace them with AI, and keep all that profit for themselves. Not good.

    • Dr. Moose@lemmy.world · +2/-6 · edited · 1 day ago

      Nobody talks about how it highlights the successes of capitalism either.

      I live in SEA, and AI is incredibly powerful here, giving anyone the opportunity to learn. The net positive of this is incredible, even if you think that copyright is good and intellectual property needs government protection. It’s just that lopsided of an argument.

      I think Western social media is spoiled and angry at the wrong thing, but fighting these people is entirely pointless, because you can’t reason someone out of a position they didn’t reason themselves into. Big tech == bad, blah blah blah.

      • jjjalljs@ttrpg.network · +8/-1 · 1 day ago

        You don’t need AI for people to learn. I’m not sure what’s left of your point without that assertion.

        • Dr. Moose@lemmy.world · +2/-5 · edited · 1 day ago

          You’re showing your ignorance if you think the whole world has access to a fit education. And I say “fit” because there’s a huge difference between learning from books made for Americans and AI experiences tailored just for you. The difference is insane, and anyone who doesn’t understand that should really go out more; I’ll leave it at that.

          Just the amount of friction that AI removes makes learning so much more accessible for a huge percentage of the population. I’m not even kidding: as an educator, the LLM is the best invention since the internet, and this will be very apparent in 10 years. You can quote me on this.

          • jjjalljs@ttrpg.network · +12/-2 · 1 day ago

            You shouldn’t trust anything the LLM tells you though, because it’s a guessing machine. It is not credible. Maybe if you’re just using it for translation into your native language? I’m not sure if it’s good at that.

            If you have access to the internet, there are many resources available that are more credible. Many of them free.

            • untakenusername@sh.itjust.works · +1 · 1 day ago

              You shouldn’t trust anything the LLM tells you though, because it’s a guessing machine

              You trust tons of other uncertain probability-based systems though. Like the weather forecast, we all trust that, even though it ‘guesses’ the future weather with some other math

              • jjjalljs@ttrpg.network · +1 · 1 day ago

                That’s really not the same thing at all.

                For one, no one knows what the weather will be like tomorrow. We have sophisticated models that do their best. We know the capital of New Jersey. We don’t need a guessing machine to tell us that.

                • untakenusername@sh.itjust.works · +1 · 1 day ago

                  For things that require a definite, correct answer, an LLM just isn’t the best tool. However, if the task is something with many correct answers, or no correct answer, like writing computer code (if it’s rigorously checked, it’s actually not that bad) or analyzing vast amounts of text quickly, then you could make the argument that it’s the right tool for the job.
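
                  “Rigorously checked” can be as simple as refusing model output until it passes tests you wrote yourself. In this sketch, `llm_generated_sort` is a stand-in for hypothetical model output, not a real API; the human-written test cases are the part that makes the output usable:

```python
def llm_generated_sort(xs):
    # Stand-in for code a model might produce (hypothetical).
    return sorted(xs)

def rigorously_check(fn):
    # Human-written cases: the model's output stays untrusted
    # until every one of them passes.
    cases = [
        ([], []),
        ([3, 1, 2], [1, 2, 3]),
        ([1, 1, 0], [0, 1, 1]),
        ([-5, 0, -5], [-5, -5, 0]),
    ]
    return all(fn(list(inp)) == want for inp, want in cases)

print(rigorously_check(llm_generated_sort))  # accept the code only if True
```

                  The same gate rejects plausible-looking but wrong output, which is exactly the failure mode an LLM produces.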

            • Dr. Moose@lemmy.world · +2/-7 · 1 day ago

              Again, you’re just showing your ignorance of how available this actually is to people outside your immediate circle. Maybe you should travel a bit and open up your mind.

  • yarr@feddit.nl · +3 · edited · 1 day ago

    My favorite one that I’ve heard is: “ban it”. This has a lot of problems… let’s say despite the billions of dollars of lobbyists already telling Congress what a great thing AI is every day, that you manage to make AI, or however you define the latest scary tech, punishable by death in the USA.

    Then what happens? There are already AI companies in other countries busily working away. Even the folks who are very against AI would at least recognize some limited use cases. Over time, the USA gets left behind in whatever effects AI ends up having on the economy.

    If you want to see a parallel to this, check out Japan’s reaction when the rest of the world came knocking on their doorstep in the 1600s. All that scary technology, banned. What did it get them? Stalled out development for quite a while, and the rest of the world didn’t sit still either. A temporary reprieve.

    The more aggressive of you will say, this is no problem, let’s push for a worldwide ban. Good luck with that. For almost any issue on Earth, I’m not sure we have total alignment. The companies displaced from the USA would end up in some other country and be even more determined not to get shut down.

    AI is here. It’s like electricity. You can choose not to wire your house, but that just leads to you living in a cabin in the woods while your neighbors have running water, heat, air conditioning and so on.

    The question shouldn’t be “how do we get rid of it?” or “how do we live without it?” It should be “how can we co-exist with it? What’s the right balance?” The genie isn’t going back in the bottle, no matter how hard you wish.

  • Dr. Moose@lemmy.world · +10 · 1 day ago

    I’m generally pro-AI, but I agree with the argument that having big tech hoard this technology is the real problem.

    The solution is easy and right there in front of everyone’s eyes: force open source on everything. All datasets, models, model weights and so on have to be fully transparent. Maybe even hardware firmware should be open source.

    This would literally solve every single problem people have, other than energy use, which is a fake problem to begin with.