• 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: February 19th, 2025

  • Ah, I found the official answer to my question in the definitions (definition 9):

    “OPERATING SYSTEM PROVIDER” MEANS A PERSON THAT DEVELOPS, LICENSES, OR CONTROLS THE OPERATING SYSTEM SOFTWARE ON A DEVICE.

    This still leaves room for ambiguity, though, especially when it comes to Linux: is the OSP the person who installs the OS (e.g. a sysadmin)? They control the operating system on that device. Or are they the individual/organization that deems what software counts as a given operating system (e.g. Microsoft or Linus)? They develop and license the operating system that happens to be on a given device. Maybe it’s both, but the context suggests the latter more strongly to me.


  • Sorry for the stupid question, but what would an “operating system provider” mean here? Does that mean “the organization that builds and distributes the operating system”? If so, Linux is sort of screwed in CO; even The Linux Foundation can’t act for Linux the same way Apple or Microsoft can for macOS or Windows respectively. Maybe Red Hat could, but only for their flagship distro RHEL, and the E stands for Enterprise, lest we forget.

    If “operating system provider” were interpreted to mean “system administrator”, however (which is a stretch, but still), that might be a decent solution, since it would age-limit content in an enforceable way while keeping identity information from being centralized under a government or a single private agency. The sysadmins for children would be their parents, who are the ones providing the hardware anyway, and that could work, especially if the child’s account were the only one on the device (like a cell phone).

    I dunno if the above is horribly ignorant; if so, I’m open to being more educated on the topic.



  • That’s a good point. The precarity of the AI bubble is, as far as I’ve seen, unprecedented in human history. There simply hasn’t been anything that undergirds so much of the world economy and can fail so catastrophically in so many ways.

    I really don’t think we have a good historical analogue to illustrate the scale of the risk. The only possible exception I can think of is mutually assured destruction during the Cold War, but that hinged on a single decision by one of (arguably) two individuals at any given time, both of whom were highly incentivized not to make that decision. That, or the collapse of the global climate, but even that overlaps significantly with the bubble. With AI, compared to MAD at least, each catastrophic outcome isn’t the product of a small set of actors but of many unregulated companies with incentives to be reckless (making negative outcomes not only more probable but more numerous). And those incentives are only growing as the funding starts to dry up (AI hasn’t really proven its ROI).

    Something, and possibly many somethings, will go horribly wrong. Some already have: AI use by students at all levels is robbing them of their education and of their actual value to the workforce, and it’s accelerating the climate collapse (maybe that’s the only analogous crisis). But it remains to be seen what (not if) will go wrong next, or how much worse it will get.

    But the truth is, I’m still relatively young. I’m just old enough to get a hint of the world’s workings, scale, and stakes. And in my life, nothing has seemed more like a loaded gun pointed at our heads than the AI bubble.









  • I think Microsoft, as they often do, see the writing on the wall: the AI bubble bursting soon and taking AI-only businesses with it. What I see in this is a play to, at best, buy some extra goodwill with Anthropic so Microsoft is first in line for the acquisition when Anthropic starts tanking, or at worst (and more likely, imo), get Anthropic dependent on Microsoft for revenue so that they have no choice but to be subsumed.

    But I’ve been wrong about most economic/political predictions I’ve ever made, so we’ll see!