• RamenJunkie@midwest.social · 4 days ago

    I am not a professional coder, just a hobbyist, but I am increasingly digging into Cybersecurity concepts.

    And even as an “amateur cybersecurity” person, everything about what you describe, and about LLM coders, terrifies me, because that shit is never going to have any proper security methodology implemented.

  • Phoenixz@lemmy.ca · 4 days ago

    To be fair, most never could. I’ve been hiring junior devs for decades now, and all the ones straight out of university barely had any coding skills.

    It’s why I stopped looking at where they studied; I always check their hobbies first. If one of the hobbies is something nerdy and useless, like tinkering with a Raspberry Pi, that indicates to me it’s someone who loves coding and probably is already reasonably good at it.

    • UnderpantsWeevil@lemmy.world · 4 days ago

      Never mind that cybersecurity is a niche field that can vary by use case and environment.

      At some level, you’ll need to learn the security system of your company (or the lack thereof) and the tools used by your department.

      There is no class you can take that’s going to give you more than broad theory.

  • socsa@piefed.social · 4 days ago

    This isn’t a new thing. Dilution of “programmer” and “computer” education has been going on for a long time. Everyone with an IT certificate is an engineer these days.

    For millennials, a “dev” was pretty much anyone with reasonable intelligence who wanted to write code - it is actually very easy to learn the basics and fake your way into it with no formal education. Now we are even moving on from that to where a “dev” is anyone who can use an AI. “Prompt Engineering.”

  • endeavor@sopuli.xyz · 4 days ago

    I’m in uni learning to code right now, but since I’m a boomer I only spin up oligarch bots every once in a while to check an issue I would otherwise have to ask the teacher about. It’s far more important for me to understand the fundies than it is to get a working program. But that is only because I’ve gotten good at many other skills and realize that fundies are fundamental for a reason.

  • Phoenicianpirate@lemm.ee · 4 days ago

    I could have been a junior dev who could code. I learned to do it before ChatGPT. I just never got the job.

  • filister@lemmy.world · 4 days ago

    The problem is not only the coding but the thinking. The AI revolution will produce a lot more people without critical thinking and problem-solving skills.

    • OrekiWoof@lemmy.ml · 4 days ago

      Apart from that, learning programming went from something one does out of a calling to something one does to get a job. The percentage of programmers who actually like coding is going down, so on average they’re going to be worse.

      • mr_jaaay@lemmy.ml · 3 days ago

        This is true for all of IT. I love IT; I’ve been into computers for 30+ years. I run a small homelab, and it’ll always be a hobby as well as a career. But yeah, for more and more people it’s just a job.

  • drathvedro@lemm.ee · 4 days ago

    This post is literally an ad for AI tools.

    No, thanks. Call me when they actually get good. As it stands, they only offer marginally better autocomplete.

    I should probably start collecting dumb AI suggestions and gaslighting answers to show the next time I encounter this topic…

      • drathvedro@lemm.ee · 4 days ago

        There are at least four links leading to AI tools on this page. Why would you link to something you’re complaining about?

        • SwordInStone@lemmy.world · 4 days ago

          To play devil’s advocate: it can be done to exemplify what you’re complaining about, as opposed to complaining about an abstract concept.

        • finitebanjo@lemmy.world · 4 days ago

          Oh lol I thought it was a text post, I didn’t even click the link and just read the post description.

          • datalowe@lemmy.world · 4 days ago

            The “about” page indicates that the author is a freelance frontend UI/UX dev who recently switched to “helping developers get better with AI” (paraphrased). Nothing about credentials or education related to AI development; only some hobby projects using preexisting AI solutions, from what I saw. The post itself doesn’t have any sources or links to research about junior devs either; it’s all anecdotes and personal opinion. Sure looks like an AI grifter trying to grab attention by ranting about AI, with some pretty lukewarm criticism.

  • corsicanguppy@lemmy.ca · 5 days ago

    I’ve said it before, but this is a 20-year-old problem.

    After Y2K, all those shops that over-porked on devs began shedding the most pricey ones; worse in ‘at will’ states.

    Who were those devs? Mentors. They shipped less code, closed fewer tickets, cost more, but their value wasn’t in tickets and code: it was investing in the next generation. And they had to go because #numbersGoUp

    And they left. And the first gen of devs with no mentorship joined and started their careers. No idea about edge cases, missing middles or memory management. No lint, no warnings, build and ship and fix the bugs as they come.

    And then another generation. And these were the true ‘lost boys’ of dev. C is dumb, C++ is dumb, perl is dumb, it’s all old, supply chain exploits don’t exist, I made it go so I’m done, fuck support, look at my numbers. It’s all low-attention span, baling wire and trophies because #numbersGoUp.

    And let’s be fair: they’re good at this game, the new way of working where it’s a fast finish, a head-pat, and someone else’s problem. That’s what the companies want, and that’s what they built.

    They say now that relying on AI means one never really exercises critical thought and problem-solving, and I see it when I’m forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won’t learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember. But we’re seeing people do that with actual work; with Go and Rust code, and yeah, no concept of why we want to check for completeness, let alone a concept of how.

    What do we do, though?

    If we’re in a position to do so, FAIL some code reviews on corner cases. Fail some reviews on ISO27002 and supply chain and role sep. Fail some deployments when they’re using dev tools in prod. And use them all as teachable moments. Honestly, some of them got no mentorship in college if they went, and no mentorship in their first ten years as a pro. It’s going to be hard getting over themselves, but the sooner they realise they still have a bunch to learn, the better we can rebuild coders. The hardest part will be weaning them off GPT for the cheats. I don’t have a solution for this.

    One day these new devs will proudly install a patch in the RTOS flashed into your heart monitor and that annoying beep will go away. Sleep tight.

    • SpicyLizards@reddthat.com · 5 days ago

      I have seen this too much. My current gripe isn’t fresh devs, as long as they are teachable and care.

      My main pain over the last several years has been the bulk of ‘give-no-shit’ perms/contractors who don’t want to think or try when they can avoid it.

      They run a web of lies until it’s no longer sustainable (or, for contractors, until the project is done) and then, again, it’s someone else’s problem.

      There are plenty of 10/20-plus-year devs who don’t know what they’re doing and don’t care whose problem it will be, as long as it isn’t theirs.

      I’m sick of writing coding-101 standards for 1k+-a-day ‘experts’. Even more sick of PR feedback where it’s a battle to get things done in a maintainable manner from said ‘experts’.

    • sugar_in_your_tea@sh.itjust.works · 5 days ago

      I let the GPTs do that for me, without worrying that I won’t learn to code YAML for Ansible.

      And this is the perfect use case. There’s a good chance someone has done exactly what you want, and AI can regurgitate that for you.

      That’s not true of any interesting software project though.

      FAIL some code reviews on corner cases. Fail some reviews on ISO27002 and supply chain and role sep. Fail some deployments when they’re using dev tools in prod. And use them all as teachable moments.

      Fortunately, I work at an org that does this. It turns out that if our product breaks in prod, our customers could lose millions, which means they could go to a competitor. We build software to satisfy regulators, regulators that have the power to shut down everything if the t’s aren’t crossed just so.

      Maybe that’s the problem, maybe the stakes are low enough that quality isn’t important anymore. Idk, what I do know is that I go hard on reviews.

    • WalnutLum@lemmy.ml · 5 days ago

      and I see it when I’m forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won’t learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember.

      Feels like this is the attitude towards programming in general nowadays.

      • sugar_in_your_tea@sh.itjust.works · 5 days ago

        To be fair, YAML sucks. It’s a config language that someone decided should cover everything, and it ends up excelling at nothing.

        Just use TOML, JSON, or old-school INI. YAML will just give you an aneurysm. Use the best tool for the job, which is often not the prettiest one.

        Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

        Antoine de Saint-Exupéry

        Kids these days with their fancy stuff: you don’t need all that to write good software. YAML is the quintessential “jack of all trades, master of none” nonsense. It’s a config file; just make it easy to parse and document how to edit it. That’s it.
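
        To show how low the bar is, here’s a rough sketch of a hand-rolled reader for an INI-style file, using nothing but the standard library. The file name and keys are made up, and C++ is just for illustration:

        ```cpp
        // Minimal sketch: reading "key = value" lines with [sections],
        // standard library only. File name and keys are hypothetical.
        #include <fstream>
        #include <iostream>
        #include <map>
        #include <string>

        int main() {
            std::map<std::string, std::string> config;
            std::ifstream file("app.ini");
            std::string line, section;
            auto trim = [](std::string s) {
                s.erase(0, s.find_first_not_of(" \t"));
                s.erase(s.find_last_not_of(" \t") + 1);
                return s;
            };
            while (std::getline(file, line)) {
                line = trim(line);
                if (line.empty() || line[0] == ';' || line[0] == '#') continue;
                if (line.front() == '[' && line.back() == ']') {
                    section = line.substr(1, line.size() - 2);  // [section] header
                    continue;
                }
                auto eq = line.find('=');
                if (eq == std::string::npos) continue;          // not a key = value line
                config[section + "." + trim(line.substr(0, eq))] = trim(line.substr(eq + 1));
            }
            for (const auto& [key, value] : config)
                std::cout << key << " = " << value << "\n";
        }
        ```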

    • red_bull_of_juarez@lemmy.dbzer0.com · 5 days ago

      While there is some truth to what you said, it sounded to me too much like “old man yells at clouds” because you are over-generalizing. Not everything new is bad. Don’t get stuck in the past, that’s just as dumb as relying on AI.

      • tomkatt@lemmy.world · 5 days ago

        You and I read a very different comment, apparently. There was nothing there saying new is bad. Maybe read it again.

  • froggycar360@slrpnk.net · 4 days ago

    I could barely code when I landed my job, and now I’m a senior dev. It’s like saying a plumber’s apprentice can’t plumb; you learn on the job.

      • froggycar360@slrpnk.net · 4 days ago

        That’s true, it can only get you so far. I’m sure we all started by Frankenstein-ing Stack Overflow answers together until we had to actually learn the “why”.

      • Mr_Dr_Oink@lemmy.world · 4 days ago

        100% agree.

        I don’t think there’s no place for AI as an aid to help you find the solution, but I don’t think it’s going to help you learn if you just ask it for the answers.

        For example, yesterday I was trying to find out why a policy map on a Cisco switch wasn’t re-activating after my RADIUS server came back up. Instead of throwing my map at the AI and asking what’s wrong, I asked it details about how a policy map is activated, what mechanism the switch uses to determine the status of the RADIUS server, and how a policy map can leverage that to kick into gear again.

        Ultimately, the AI didn’t have the answer, but it put me on the right track, and I believe I solved the issue. It seems the switch didn’t count me adding the RADIUS server to the running config as a server coming back alive; but if I put in a fake server and then altered its IP to a real server, the switch saw that as the server coming back alive and authentication started again.

        In fact, some of the info it gave me along the way was wrong, like when it tried to give me CLI commands that I already knew wouldn’t work, because I was using the newer C3PL AAA commands but it was mixing them up with the legacy commands and combining the two. Even after I told it that it was a made-up command and why it wouldn’t work, it still tried to give me the same command again later.

        So I don’t think it’s a good tool for producing actual work, but it can be a good tool to help us learn things if it’s used that way: to ask “why” and “how” instead of “what.”

  • zerofk@lemm.ee · 5 days ago

    As someone who has interviewed candidates for developer jobs for over a decade: this sounds like “in my day everything was better”.

    Yes, there are plenty of candidates who can’t explain the piece of code they copied from Copilot. But guess what? A few years ago there were plenty of candidates who couldn’t explain the code they copied from StackOverflow. And before that, there were those who failed at the basic programming test we gave them.

    We don’t hire those people. We hire the ones who use the tools at their disposal and also show they understand what they’re doing. The tools change, the requirements do not.

    • uranibaba@lemmy.world · 4 days ago

      I think LLMs just made it easier for people who want to know, but not learn, to know. Reading all those posts all over the internet required you to understand what you pasted together if you wanted it to work (not always, but the bar was higher). With ChatGPT, you can just throw errors at it until you have the code you want.

      While the requirements never changed, the tools sure did and they made it a lot easier to not understand.

      • major_jellyfish@lemmy.ca · 4 days ago

        Have you actually found that to be the case in anything complex though? I find it just forgets parts while generating something, stuck in an infuriating loop of fucking up.

        It took us around 2 hours to run our coding questions through ChatGPT and see what it gives. It gives complete shit for most of them; only one or two questions did we have to replace.

        If a company can’t invest even a day to go through their hiring process and AI-proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.

        And then you get people like OP, blaming the generation when, if anything, it’s them and their company to blame… for falling behind. Got to keep up, folks. Our field moves fast.

        • uranibaba@lemmy.world · 4 days ago

          I find ChatGPT is sometimes excellent at giving me a direction, if not outright solving the problem, when I paste errors I’m too lazy to search for. I say sometimes because other times it is just dead wrong.

          All code I ask ChatGPT to write is usually along the lines of “I have these values that I need to verify; write code that verifies that nothing is empty and saves an error message for each that is”, and then I work with the code it gives me from there. I never take it at face value.
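
          Something along these lines, to give an idea (field names and messages are made up, and C++ just for illustration):

          ```cpp
          // Check that no value is empty; save an error message for each that is.
          #include <iostream>
          #include <string>
          #include <utility>
          #include <vector>

          std::vector<std::string> validate(
              const std::vector<std::pair<std::string, std::string>>& fields) {
              std::vector<std::string> errors;
              for (const auto& [name, value] : fields)
                  if (value.empty())
                      errors.push_back("Field '" + name + "' must not be empty.");
              return errors;
          }

          int main() {
              for (const auto& e : validate({{"username", "alice"}, {"email", ""}}))
                  std::cerr << e << "\n";  // prints: Field 'email' must not be empty.
          }
          ```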

          Have you actually found that to be the case in anything complex though?

          I think that using LLMs to create complex code is the wrong use of the tool. They are better at providing structure to work from rather than writing the code itself (unless it is something simple as above) in my opinion.

          If a company cannot invest even a day to go through their hiring process and AI proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.

          I agree with you on that.

        • xavier666@lemm.ee · 4 days ago

          My rule of thumb: use ChatGPT for questions whose answer I already know.

          Otherwise it hallucinates and tries hard to convince me of a wrong answer.

  • pls@lemmy.plaureano.nohost.me · 4 days ago

    Of course they don’t. Hiring junior devs for their hard skills is a dumb proposition. Hire for their soft skills, intellectual curiosity, and willingness to work hard and learn. There is no substitute for good training and experience.

  • Sparking@lemm.ee · 5 days ago

    What are you guys working on where chatgpt can figure it out? Honestly, I haven’t been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.

    • Kng@feddit.rocks · 4 days ago

      Lately I have been using it for React code. It seems to be fairly decent at that. As a consequence, when it doesn’t work I get completely lost, but despite this I think I have learned more with it than I would have without.

    • 0x0@programming.dev · 5 days ago

      I’m forced to use Copilot at work, and as far as code completion goes, it gets it right 10-15% of the time… the rest of the time it just suggests random (credible-looking) noise or hallucinates variables and shit.

        • 0x0@programming.dev · 4 days ago

          I would quit, immediately.

          Pay my bills. Thanks.
          I’ve been dusting off the CV, for multiple other reasons.

          • 9bananas@lemmy.world · 4 days ago

            how surprising! /s

            but seriously, it’s almost never one (1) thing that goes wrong when some idiotic mandate gets handed down from management.

            a manager that mandates use of copilot (or any tool unfit for any given job), that’s a manager that’s going to mandate a bunch of other nonsensical shit that gets in the way of work. every time.

    • sugar_in_your_tea@sh.itjust.works · 5 days ago

      Same. It can generate credible-looking code, but I don’t find it very useful. Here’s what I’ve tried:

      • describe a function - takes longer to read the explanation than to grok the code
      • generate tests - hallucinates arguments, doesn’t do proper boundary checks, etc.
      • look up docs - mostly useful for finding search terms for the real docs

      The second was kind of useful since it provided the structure, but I still replaced 90% of it.

      I’m still messing with it, but beyond solving “blank page syndrome,” it’s not that great. And for that, I mostly just copy something from elsewhere in the project anyway, which is often faster than going to the LLM.

      I’m really bad at explaining what I want, because by the time I can do that, it’s faster to just build it. That said, I’m a senior dev, so I’ve been around the block a bit.

    • Thorry84@feddit.nl · 5 days ago

      Agreed. I wanted to test a new config in my router yesterday, which is configured using scripts. So I thought it would be a good idea to have ChatGPT figure it out for me, instead of me spending 3 hours reading documentation and trying tutorials. It was a test scenario, so I thought it might do well.

      It did not do well at all. The scripts were mostly correct, but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax would be totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I’m on 7). It also makes mistakes, and when I point out a mistake it says, “Oh, you are totally right, I made a mistake,” then goes on to explain what mistake it made and outputs new code. However, more often than not the new code contains the exact same mistake. This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.

      In the end I gave up on ChatGPT, searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that, and it almost worked right away. A couple of minutes of tweaking and testing and I got it working.

      I’m afraid of a future where forums and such don’t exist and sources like Reddit get fucked and nuked. In an AI-driven world, the incentive for creating new original content is way lower. So when AI doesn’t know the answer, you’re just hooped and have to re-invent the wheel yourself. In the long run this will destroy productivity and not deliver the gains people are hoping for at the moment.

      • Hoimo@ani.social · 3 days ago

        This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.

        The one example could be flawless, but the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it’s going to degenerate the output.

        What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge; you don’t need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.

      • baltakatei@sopuli.xyz · 5 days ago

        It’s like useful information grows as fruit from trees in a digital forest we call the Internet. However, the fruit spoils over time (becomes less relevant) and requires fertile soil (educated people being online) that can be eroded away (not investing in education or infrastructure) or paved over (intellectual property law). LLMs are like processed food created in factories that lack key characteristics of more nutritious fresh ingredients you can find at a farmer’s market. Sure, you can feed more people (provide faster answers to questions) by growing a monocrop (training your LLM on a handful of generous people who publish under Creative Commons licenses like CC BY-SA on Stack Overflow), but you also risk a plague destroying your industry like how the Panama disease fungus destroyed nearly all Gros Michel banana farming (companies firing those generous software developers who “waste time” by volunteering to communities like Stack Overflow and replacing them with LLMs).

        There’s some solar punk ethical fusion of LLMs and sustainable cultivation of high quality information, but we’re definitely not there yet.

        • Jayjader@jlai.lu · 4 days ago

          To extend your metaphor: be the squirrel in the digital forest. Compulsively bury acorns for others to find in time of need. Forget about most of the burial locations so that new trees are always sprouting and spreading. Do not get attached to a single trunk ; you are made to dance across the canopy.

    • Terrasque@infosec.pub · 5 days ago

      When I had to get up to speed on a new language, it was very helpful. It’s also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for making Ansible tasks. That said, I’ve been programming for ~30 years and could have done those things myself if I needed to, but it would take some time (a lot of it being looking up documentation and writing boilerplate code).

      It’s also nice for writing C# unit tests.

      However, the times I’ve been stuck on my main languages, it’s been utterly useless.

      • MagicShel@lemmy.zip · 5 days ago

        ChatGPT is extremely useful if you already know what you’re doing. It’s garbage if you’re relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.

        It’s also probably useful for looking like you kinda know what you’re doing as a junior in a new project. I’ve seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.

        • Sparking@lemm.ee · 12 hours ago

          Yeah, I’m not even that down on using LLMs to search through and organize text they were trained on. But in its current iteration? It’s fancy Stack Overflow, but Stack Overflow runs on like 6 servers. I’ll be setting up some self-hosted LLM stuff to play around with, but I’m not ditching my brain’s ability to write software any time soon.

    • CeeBee_Eh@lemmy.world · 5 days ago

      I’ve been using (mostly) Claude to help me write an application in a language I’m not experienced with (Rust), mostly to help me see what I did wrong with syntax or with the borrow checker. Coming from Java, Python, and C/C++, it’s very easy to manage memory in exactly the ways Rust forbids.

      That being said, any new code it generates for me I end up having to fix 9 times out of 10. So, in a weird way, I’ve been learning more about Rust from having to correct code generated by an LLM.

      I still think LLMs will, for the next while, be mostly useful as a hyper-spell-checker for code, not for generating new code. I often find I would have saved time if I had just tackled the problem myself and not tried to rely on an LLM. Although sometimes an LLM can give me an idea of how to solve a problem.
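
      A minimal example of the kind of mistake I mean: the C++ below compiles fine but has a dangling reference (undefined behaviour), while the equivalent Rust is rejected by the borrow checker at compile time. The snippet is illustrative, not from my actual project.

      ```cpp
      // A habit from C++/Java that Rust's borrow checker rejects: keeping a
      // reference into a container while mutating the container.
      #include <iostream>
      #include <vector>

      int main() {
          std::vector<int> values{1, 2, 3};
          int& first = values[0];      // borrow into the vector's storage
          values.push_back(4);         // may reallocate; `first` now dangles
          std::cout << first << "\n";  // undefined behaviour in C++
      }
      ```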

  • TsarVul@lemmy.world · 5 days ago

    I’m a little defeatist about it. I saw with my own 3 eyes how a junior asked ChatGPT how to insert something into an std::unordered_map. I tell them about cppreference. The little shit tells me “Sorry unc, ChatGPT is objectively more efficient”. I almost blew a fucking gasket, mainly cuz I’m not that god damn old. I don’t care how much you try to convince me that LLMs are efficient, there is no shot they are more efficient than opening a static page with all the info you would ever need. Not even considering energy efficiency. Utility aside, the damage we have dealt to developing minds is irreversible. We have convinced them that thought is optional. This is gonna bite us in the ass. Hard.
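
    For the record, everything that junior needed is on that one static page. A quick sketch of the usual options (names made up):

    ```cpp
    // Common ways to insert into std::unordered_map, all documented
    // in one place on cppreference.
    #include <iostream>
    #include <string>
    #include <unordered_map>

    int main() {
        std::unordered_map<std::string, int> ages;

        ages["alice"] = 30;           // operator[]: inserts or overwrites
        ages.insert({"bob", 25});     // insert: no-op if the key already exists
        ages.emplace("carol", 41);    // emplace: constructs the pair in place
        ages.try_emplace("bob", 99);  // try_emplace (C++17): keeps the existing 25

        for (const auto& [name, age] : ages)
            std::cout << name << ": " << age << "\n";
    }
    ```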

    • Célia@lemmy.blahaj.zone · 5 days ago

      I work at a software development school, and ChatGPT does a lot of damage here too. We try to teach that using it as a tool to help learning is different from using it as a “full project code generator”, but the speed advantage it provides makes it irresistible from many students’ perspective. I lost many students last year because they couldn’t pass a simple coding exam (think FizzBuzz difficulty) once they had no internet access and had to code in Emacs. We also can’t block access to it, because that starts an endless game where they always find a way in.
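
      For context, FizzBuzz-level means roughly this much code; a minimal C++ version for illustration:

      ```cpp
      // FizzBuzz: multiples of 3 print "Fizz", multiples of 5 print "Buzz",
      // multiples of both print "FizzBuzz", everything else prints the number.
      #include <iostream>

      int main() {
          for (int i = 1; i <= 100; ++i) {
              if (i % 15 == 0)     std::cout << "FizzBuzz\n";
              else if (i % 3 == 0) std::cout << "Fizz\n";
              else if (i % 5 == 0) std::cout << "Buzz\n";
              else                 std::cout << i << "\n";
          }
      }
      ```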

      • TsarVul@lemmy.world · 5 days ago

        Damn, I forgot about the teaching aspect of programming. Must be hard. I can’t blame students for taking shortcuts when they’re almost assuredly swamped with other classwork and sleep-deprived, but still. This is where my defeatist comment comes in, because I genuinely think LLMs are here to stay. Like autocomplete, but dumber. Just gotta have students recognize when ChatGPT hallucinates solutions, I guess.

    • SlopppyEngineer@lemmy.world · 5 days ago

      It’s going to get worse. I suspect this’ll end with LLMs playing the part of production programs: juniors just feed one scenarios to follow, hook the thing up to a database and a web page, and let it run. It’ll gobble power like there’s no tomorrow and be a nightmare to maintain, but it goes live in a quarter of the time, so every manager will go with that.

    • _g_be@lemmy.world · 5 days ago

      How is it more efficient than reading a static page? The kids can’t read. They weren’t taught phonics; they were taught to guess the word from context clues. It’s called “whole language” or “balanced reading”.

      • graphene@lemm.ee · 5 days ago

        I don’t think phonics are the most critical part of why the kids can’t read.

        It’s been shown that people who primarily read books and documents read thoroughly, line by line and with understanding, while those who primarily read from screens (such as social media) skip and skim to find certain keywords. This makes reading books (such as documentation) hard for those used to screens from a young age, and some believe it may be one of the driving forces behind the collapse in reading among young people.

        If you’re used to the skip & skim style of reading, you will often miss details, which makes finding a solution in a manual infinitely frustrating.

      • Gormadt@lemmy.blahaj.zone · 5 days ago

        Literacy rates are in severe decline in the US, and AI is only going to make that worse.

        Over half of Americans between 16 and 74 read below a 6th-grade level (that’s below the expected reading level of an 11-year-old!).

        • AntY@lemmy.world · 5 days ago

          We have the same problem with literacy here in Sweden. It’s unnerving to think that these kids will need to become doctors, lawyers and police officers in the future.