• 0 Posts
  • 239 Comments
Joined 2 years ago
Cake day: June 30th, 2023


  • The thrust of this article seems to be that the important thing is that automatic transcription services be compliant with unspecified “governance standards”. It goes on to give a generally glowing review of a specific medical transcription service:

    This software, Accurx Scribe, has been developed and deployed in line with all current NHS England requirements for AVT, and there is no suggestion this product breaches any rules, standards or guidance.

    Indeed, the company which developed it meets weekly with NHS England on creating a standardised approach to scale the benefits across the NHS.

    However, their website seems to indicate that their privacy practices are garbage, since transcriptions are implied to happen on company servers:

    At Accurx, our employees may need to see patient data that we store for strictly limited purposes.

    This seems pretty absurd to me, since the technology is at the point where effective on-device transcription is a reality. Why look at whether bureaucrats have rubber-stamped something instead of looking at the actual commonsense properties of who has access to the data? That could easily be the doctor and no one else. The question of what constitutes good security and privacy isn’t even something this article wants to bring up for consideration.
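    To illustrate the point about on-device transcription: a local-only pipeline can be as simple as shelling out to something like whisper.cpp, so the audio never leaves the machine. This is only a sketch; the binary location, model file, and exact flags are assumptions based on a typical whisper.cpp build, not a vetted clinical setup:

    ```python
    import subprocess
    from pathlib import Path

    def build_transcribe_cmd(model: Path, audio: Path) -> list[str]:
        # Assemble a whisper.cpp-style command line (hypothetical paths);
        # the whole thing runs locally, with no network access involved.
        return ["./main", "-m", str(model), "-f", str(audio), "--output-txt"]

    def transcribe_locally(model: Path, audio: Path) -> str:
        # Run the on-device model and return the transcript text.
        # whisper.cpp's --output-txt writes the result next to the input
        # file as <input>.txt, so only the doctor's machine ever sees it.
        subprocess.run(build_transcribe_cmd(model, audio), check=True)
        return audio.with_suffix(audio.suffix + ".txt").read_text()
    ```

    The design point is the trust boundary, not the tooling: because the model runs where the recording is made, "who has access to the data" reduces to whoever has access to that one machine.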






  • To me, the most wasteful part is wasting the time of badly paid workers when you could just prepare the food yourself; to a lesser extent, there’s also the waste of your money going to a corporation, and the waste of real estate. I don’t think things like water use and the extra plastic/cardboard trash associated with the food are all that impactful or worth worrying about. That said, I personally avoid going out to eat except in the rare cases where it’s an unavoidable social requirement, so a few times a year at most.


  • chicken@lemmy.dbzer0.com to Humor@lemmy.world · The "coin boys"
    19 days ago

    At my school there was a group of students who would steal unattended pencils and hoard them in a huge pile in a hole in the woods behind the school. Eventually they got caught but for a while it was easy to make the excuse that you couldn’t do schoolwork because the pencils were gone.



  • It’s not quite the same thing as deploying soldiers against protesters, but technically all of those things are ultimately done through the use of coercive and violent force. Don’t want to go to school? Your parents will make you, because if they don’t they could be imprisoned. Slightly inconvenience drivers by walking across a busy street not at a crosswalk? You could be fined or arrested for jaywalking. Pose a hazard to rocket launches by flying a makeshift aircraft in federal airspace with no flight plan? You know the drill. That’s not to mention the funding for all those things: the violence inherent in it doesn’t stop at taxes, but is also a central factor in maintaining the value of a currency in a variety of ways.




  • But any actual developer knows that you don’t just deploy whatever Copilot comes up with, because - let’s be blunt - it’s going to be very bad code. It won’t be DRY, it will be bloated, it will implement things in nonsensical ways, it will hallucinate… You use it as a starting point, and then sculpt it into shape.

    Yeah, but I don’t know where you’re getting the “never will” or “fundamentally cannot do” from. LLMs used to be useful for coding only if you asked for simple self-contained functions in the most popular languages, and now we’re here: for most small-scope requests, I’m getting a result that is better written than what I could have done myself in far more time, and it makes far fewer mistakes than before and can often correct them. That’s with only using local models, which became actually viable for me less than a year ago. So why won’t it keep going?

    From what I can tell, not much actually stands in the way of sensible holistic consideration of a larger problem or codebase: mainly context size limits, and the tendency to forget things in the context window the longer it gets. As far as I know, those are problems being actively worked on, and there’s no reason they’re guaranteed to remain unsolved. This also seems to be what’s holding back agentic AI from being actually useful. If that stuff gets cracked, I think things will start changing even faster.
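    The context-window limitation above can be sketched in a few lines: if the model only ever sees the most recent N tokens, everything earlier is simply gone, however relevant it was. A toy illustration, using words as stand-in "tokens" (real tokenizers split text differently):

    ```python
    def visible_context(history: list[str], window: int) -> list[str]:
        # Keep only the most recent `window` tokens; anything older
        # falls outside the context and the model cannot recall it.
        return history[-window:]

    tokens = "the config file sets DEBUG mode and the port number".split()
    # With a window of 4, the mention of DEBUG has already been dropped:
    print(visible_context(tokens, 4))  # ['and', 'the', 'port', 'number']
    ```

    A hard cutoff like this is the crudest version; real systems use larger windows and smarter retrieval, but the failure mode is the same in kind: what doesn't fit in the window effectively doesn't exist for the model.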


  • A few years ago I remember people being amazed that prompts like “Markiplier drinking a glass of milk” could occasionally give them some blobs that looked vaguely like the thing asked for. Now there is near-photorealistic video output. Same kind of deal with the ability to write correct computer code and answer questions. Most of the concrete predictions/bets people made along the lines of “AI will never be able to do ______” have been lost.

    What reason is there to think it’s not taking off, aside from bias or dislike of what’s happening? There are still flaws and limitations for what it can do, but I feel like you have to have your head in the sand to not acknowledge the crazy level of progress.