I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in the news, in PR statements, etc.

Are there folks who work at companies – I’m especially interested in those in tech – that have a reasonable handle on AI’s practical uses and its limitations?

Where I work, there’s:

  • a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
  • a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
  • quarterly goals, almost every one of which has some amount of “with AI” in it
  • letters from the CEO asking which teams are using AI to implement features from ticket descriptions or (inspired by the news) to use flocks of agents – asking for positives, with no mention of negatives
  • a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
  • teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a statement that everyone is responsible for reviewing AI output

Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?

  • fruitycoder@sh.itjust.works
    6 days ago

    We have tools to support AI deployment, are encouraged to use a paid API, and have integration with the office tools.

    That’s it. No expectation that it’s a new god we are awakening, like the OpenAI cultists push. No expectation that our jobs can be replaced by any of even the greatest models yet. Just quick, low-stakes summaries, better autocomplete for code, and easy-listening TTS of meeting notes if we missed them.

  • Tar_Alcaran@sh.itjust.works
    8 days ago

    Not in tech, but LLMs have been great for my safety and compliance consulting business. I can honestly say LLMs have made me thousands of euros.

    Before LLMs, I would spend quite a bit of my regular workday on creating safety plans and coming up with systems to improve conditions and ensure compliance.

    Now, with the power of LLMs, management can generate those plans themselves. So instead of me spending my normal workday on it, I get to bill my emergency rate when the hallucinated slop gets rejected and they need something actually legal at the last minute.

    • gazter@aussie.zone
      7 days ago

      I sometimes have to get involved with writing safety protocols. Not my favourite task, but I’ve always been super nervous about using AI to assist because it’s such a specific, rigid, and important thing that needs to be expressed as simply as possible – all of which AI is bad at. Care to share how you use it?

      • DaPorkchop_ [any]@lemmy.ml
        7 days ago

        They don’t, they said their thing is charging emergency rates to bail out other idiots who do use it and trust the output blindly.

        • gazter@aussie.zone
          7 days ago

          That’s on me for not reading. Thanks. I gotta learn that pre-coffee commenting should be double-checked.

  • ExtremeDullard@piefed.social
    8 days ago

    My company is approaching AI like it’s been approaching anything for the past 40 years: with extreme caution. It’s coming alright, but the engineers are carefully evaluating it for coding, and it certainly isn’t being rolled out recklessly.

    I’m one of several die-hards who flat-out refuse to use it - not so much because it’s AI, but because it’s provided by an American company - and my choice is respected. Our CEO sees old-timers like me as the fallback if AI ends up shitting the company’s bed.

    • Logi@lemmy.world
      8 days ago

      Have you checked if Mistral can generate code? When I’m back at a keyboard I’m going to see if it has an IntelliJ plugin.

      Edit: Yes

  • starlinguk@lemmy.world
    8 days ago

    I work at a renowned tech company that frequently reminds its employees that AI hallucinates. We do a lot of work for the army; a mistake caused by hallucinating AI would be a disaster.

    • EvilBit@lemmy.world
      8 days ago

      Meanwhile we’re just waiting until Hegseth accidentally turns a Bethesda-area Target into a smoking crater because he was drunk-Grokking and fucks up ordering an airstrike to cheer himself up after the mainstream librul media hurt his fee-fees.

  • kersploosh@sh.itjust.works
    8 days ago

    Medical device industry here. Some of our software and electrical engineers are using Claude as a sounding board for ideas, or as a starting point to find possible paths forward when they get stuck with a hard problem. Nobody trusts the model to give an accurate answer. Nobody is being encouraged to use AI models. At the end of the day, all work committed to a project is done by real humans with the normal review processes.

    Management is cautiously looking at potential uses for AI in our products, but there is a healthy dose of skepticism all around. If your machine is displaying diagnostic data to a doctor, there cannot be any question as to whether the machine is hallucinating.

    • mnemonicmonkeys@sh.itjust.works
      8 days ago

      Honestly, this is probably the best use case for LLMs.

      Tom Scott did something like this 2-3 years ago: he fed a bunch of his video titles into an LLM and had it come up with 100 new titles in a similar style. Most of the output sucked, a handful he had already done, and a few more sounded plausible but didn’t exist. But he got 8-10 that he could have turned into actual videos (doing all the work himself) and even did so for a couple.

      The hallucination of AI can be used to help a human artist (or programmer, designer, scientist, etc.) make a new connection they couldn’t before, and they can then use that new connection to implement their new idea. But LLMs generally suck for anything more than that, and over-reliance on them slowly erodes people’s ability to think and create over time.

  • jtrek@startrek.website
    8 days ago

    I work in a big multinational company – not a software company, but I’m on an engineering team.

    Leadership makes a lot of noise about AI.

    The engineers can’t even use git competently. I’ve quietly suggested that maybe we should focus on learning software fundamentals instead of chasing dreams, but no one here listens to me.

    • brygphilomena@lemmy.dbzer0.com
      7 days ago

      Our company leaders wanted a way to track the AI vibe-coded apps…

      I run the company git server. They decided to have someone vibe-code a tracker instead. Everything needs to be entered manually, and a bunch of it can’t be changed unless you muck with the database directly.

  • Korhaka@sopuli.xyz
    8 days ago

    I just use AI to fill in the stupid forms HR make us do and don’t verify its output because I don’t respect it. Kills 2 birds with 1 stone.

    • apftwb@lemmy.world
      8 days ago

      Please God, give me an AI agent that can watch the video and do the quiz for the yearly mandatory HR training.

      • NannerBanner@literature.cafe
        8 days ago

        My company has started using AI voices/figures in the videos. Like they weren’t bad enough already…

        AI watching AI to AI some slop to satisfy the AI the HR is using. Ugh.

        • mnemonicmonkeys@sh.itjust.works
          8 days ago

          My company has some mandatory training videos they redid with AI. I don’t get it – none of the actual content was any different from last year’s video. They literally paid someone to redo the video with AI instead of just reusing the previous one.

          It’s kinda the same thing as Coke’s AI Christmas commercials this past year. They could have run their old, classic commercials like Hershey’s Kisses does every year. Instead they paid to make new commercials with AI and pissed a bunch of people off.

          • NannerBanner@literature.cafe
            7 days ago

            I think in my case there may have been some royalties or appearance fees that they could avoid? I only noticed it after a few seconds, when the little person’s breathing seemed off compared to the speech patterns and mouth movements, and then I could barely focus on the actual information presented (not that there was much; I could have read a transcript of the bloody thing in a tenth of the time and retained the info better).

  • mlg@lemmy.world
    8 days ago

    I worked at one that actually wasn’t too bad, except we had a peer-review system for client reports, and I was horrified to see how many people had such a poor grasp of English grammar that they just assumed the AI’s output was always correct and better than a human’s.

    And I don’t mean people whose second language was English – I mean native English speakers were giving me AI-generated feedback to change sentences in ways that would completely change the context, or horribly maim phrases into past tense where the tense of the subject was very much important.

    I could easily ignore the changes from coworkers, but a handful of managers would then give performance feedback telling me to utilize AI and Grammarly to improve my report quality, even though all of their report feedback was utter garbage lol.

    On a related note, Grammarly can also go screw itself. That joke of a software suite still doesn’t hold a candle to Word 2007’s editor.

    • Crozekiel@lemmy.zip
      7 days ago

      I fucking hate Grammarly. And the modern Outlook webmail suggestions can go eat a bag of dicks as well.

  • Bayta@lemmy.world
    8 days ago

    I run a small (5-employee) tech firm. We ignored AI for the first couple of years. Last year we started paying for the basic Cursor subscription for our employees. We encouraged them to try it out for a couple of weeks, however they saw fit, to evaluate whether they found it useful for their workflows, but we said we didn’t mind at all whether they ended up adopting it long term or not. We also stressed that we would continue reviewing code the same way, so they would have to take responsibility for reviewing the AI’s output.

    I started as the only coder in the company and I review every PR, so I am extremely familiar with all of our codebase, and I haven’t found it very useful personally. The people who joined more recently say it can be useful for pointing them towards parts of the code they are not familiar with yet. Right now each of them uses it freely as a tool, however they prefer, and I don’t usually ask them about it – the same way I don’t ask how often they use the “find and replace” function in VS Code.

    • hperrin@lemmy.ca
      8 days ago

      That could potentially backfire on you:

      https://sciactive.com/human-contribution-policy/#Reasoning

      1. You could be including copyrighted code and not complying with its license.
      2. You don’t own the copyright to AI-generated code.
      3. The bugs and vulnerabilities AIs introduce are much harder to spot than those in human-authored code.
      4. Your team might not understand the code that they’re submitting.

      Etc.

      • chunes@lemmy.world
        8 days ago

        Good luck proving that any given snippet was written by AI. That sounds like a total mess.

  • taiyang@lemmy.world
    8 days ago

    My wife’s at a major video game company that, oddly enough, hasn’t gone crazy over AI. Since she’s in localization, she uses DeepL, which involves some machine learning but isn’t really an LLM, and LLMs aren’t really being pushed on her since they’d be a downgrade. From what I can tell, their dev team is also just keeping things human-made, although they’re in Japan, so that might contribute.

    They aren’t saints – they did try to union-bust a few years back – but their stance on AI, as well as their creativity-first mentality and recent pay-raise guarantees and whatnot, kinda shows they’re paying attention.

  • 🌞 Alexander Daychilde 🌞@lemmy.world
    8 days ago

    I’m too old for this shit - too old for the original show, I mean, but for some reason, my brain wants to make that title work:

    Who works at a (tech) company that’s not delirious about AI?

    SPONGE! BOB! SQUARE! PANTS!

    It completely doesn’t work.

      • Widdershins@lemmy.world
        7 days ago

        I haven’t seen the whole show but I have been under the impression that SpongeBob and intelligence don’t cross paths very often.

      • 🌞 Alexander Daychilde 🌞@lemmy.world
        7 days ago

        Well, you put way more into it than I had, so I feel I have a refinement to give back as thanks - it just needs a single extra syllable. Perhaps:

        Who works for a place that just licks AI’s taint

        Now it scans. :)

  • neidu3@sh.itjust.worksM
    8 days ago

    Not a tech company, but a petroleum exploration company, which involves a lot of tech. The petroleum industry in general is extremely conservative in terms of tech, in that older and proven technologies tend to stick around. For example, I often write data to magnetic tape.

    However, the industry doesn’t shy away from newer technologies either. There is some AI at play, but it is limited in scope and only deployed where it makes sense. Most of it is done on the processing side, so I don’t know much about it, but I get the impression it’s used in a manner similar to those headlines you see from time to time about AI predicting rectal cancer with 99% accuracy. Interpreting seismic survey data involves some geophysical wizardry that I’ve never quite understood – I just make sure the production servers offshore work.

    • leoj@piefed.social
      8 days ago

      Seems like large-scale data analysis and mathematics are the strong points of AI, if I understand the tools correctly – less ambiguity and less room for hallucinations.

      Do people agree?

      • CodexArcanum@lemmy.dbzer0.com
        8 days ago

        “Artificial Intelligence” is a very broad term that, within computer science, covers a range of techniques and tools broadly concerned with the study of “human-like behavior and impersonation.” Before the current fad of calling LLMs “AI”, the term was most often used in video games, covering techniques for pathfinding, decision-making, reacting, seeming to speak, etc. Before that – pre-90s, basically – “AI” had already undergone a few boom-and-bust cycles of hype with chess-playing machines and, as always, chatbots.

        In many fields, many of these same techniques and their descendants are being used to model, simulate, and predict. All of them have trade-offs and limitations – that’s what computer science is all about.

        • leoj@piefed.social
          8 days ago

          I do remember talking to chatbots on AIM back in the day, so I think I had a leg up on other people in already understanding that the technology has existed for decades, which made me more cautious about the claims.

          • chunes@lemmy.world
            8 days ago

            They made such a big leap so quickly, though. I remember even in 2018 thinking no bot would ever pass the Turing test.

            • leoj@piefed.social
              8 days ago

              Great point, they have come far, but my interactions have led me to believe they have come super far in faking it, not in actually understanding what is being done.

              Maybe they have come further than I realize, but based on how easily they get tripped up on simple things and tie themselves into knots, the general models haven’t come too much further since.

      • neidu3@sh.itjust.worksM
        8 days ago

        Yeah, I think so. When you have a low signal-to-noise ratio, especially if the dataset is huge, AI tools seem pretty great.

    • Nighed@feddit.uk
      7 days ago

      For the size of data that oil exploration requires, tapes still make a lot of sense.

      They have higher density, and they are more shock-proof. When you need to move masses of data around the world, writing it to tape and then sticking it on a plane is still the fastest way to move it (probably – that may have changed, I guess).
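
      Back-of-the-envelope (assuming LTO-9-class cartridges at roughly 18 TB native and a dedicated 10 Gbit/s link – both numbers are just my assumptions): pushing 1 PB over the wire is about 8 × 10^15 bits ÷ 10^10 bits/s ≈ 800,000 seconds, call it nine days of sustained transfer, while a case of ~60 cartridges on an overnight flight moves the same petabyte in about a day.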

      • neidu3@sh.itjust.worksM
        7 days ago

        Yup, I 100% agree. Tapes are often viewed as obsolete, but there is no more cost-effective way of safely storing petabytes of data than tape.

        Hell, at work I have a few live storage clusters measured in petabytes, and being responsible for them can be pretty stressful at times. Data loss isn’t just bad, it is fucking terrifying when it’s data that costs hundreds of thousands of dollars per day to collect.

        I have yet to experience data loss, but I breathe a sigh of relief for every batch of data that has been confirmed written to tape. Because once it is, I know that it is safe and no longer my responsibility.

        It’s written to two sets of tape at a time, both of which are read back to confirm data integrity, and once that’s confirmed, that’s when I know my live copy is officially no longer supposed to be a backup.

        One set of tapes is stored on board in case something stupid happens to the other set during transport to a literal mountain for storage. There the transported set is re-read and checksummed, confirming that the on-board set can be rewritten with the next dataset. (Yes, every tape cartridge is written to twice).
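
        To illustrate the read-back step, here’s a minimal sketch in Python (the paths and names are made up, and the real tape tooling is vendor-specific and far more involved): every tape copy has to stream back with the source’s checksum before the live copy stops counting as the backup.

        import hashlib
        from pathlib import Path

        def sha256_of(path: Path) -> str:
            # Stream the file through SHA-256 so huge datasets never sit in memory.
            digest = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def safe_to_release(source: Path, tape_copies: list[Path]) -> bool:
            # Every tape copy must read back with the source's checksum before
            # the live copy stops being treated as the backup.
            expected = sha256_of(source)
            return all(sha256_of(copy) == expected for copy in tape_copies)

        # Hypothetical paths, for illustration only:
        # safe_to_release(Path("/data/live/batch_042.segy"),
        #                 [Path("/mnt/tape_a/batch_042.segy"),
        #                  Path("/mnt/tape_b/batch_042.segy")])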