• 0 Posts
  • 42 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • So they’ve highlighted an interesting pattern to compensation packages, but I find their entire framing of it gross and disgusting, in a capitalist techbro kinda way.

    Like the way they describe Part III’s case study:

    The uncapped payouts were so large that it fractured the relationship between Capital (Activision) and Labor (Infinity Ward).

    Activision was trying to cheat its labor after that labor made it massively successful profits! Describing it as a fractured relationship denies the agency on Activision’s part to choose to be greedy capitalist pigs.

    The talent that left formed the core of the team that built Titanfall and Apex Legends, franchises that have since generated billions in revenue, competing directly in the same first-person shooter market as Call of Duty.

    Activision could have paid them what they owed them, kept paying them incentive-based payouts, and come out billions of dollars ahead instead of engaging in short-sighted greedy behavior.

    I would actually find this article interesting and tolerable if they framed it as “here are the perverse incentives capitalism encourages businesses to create” instead of “here is how to leverage the perverse incentives in your favor by paying your employees just enough, but not enough to actually give them a fair share” (not that they were honest enough to use those words).

    WTF is “even safer” ??? how bout we like just don’t create the torment nexus.

    I think the writer isn’t even really evaluating that aspect, just thinking in terms of workers becoming capital owners and how companies should try to prevent that to maximize their profits. The idea that Anthropic employees might care on any level about AI safety (even hypocritically and ineffectually) doesn’t enter into the reasoning.


  • This reminds me of a discussion I had recently on a fanfic discord (the discussion was sparked by the March for Billionaires…). Someone claimed no country had ever pulled itself out of poverty except by capitalism, so I brought up China and the USSR, but apparently those don’t count for the person I was arguing with. They claimed the stats were Goodharted and also that what I was saying was tankie bullshit. I gave up at that point (I probably shouldn’t have bothered in the first place). Like how exactly did they fake or Goodhart going from literal feudalism to industrial superpowers? Also, I find it notable how EAs and “The Better Angels of Our Nature” type neoliberals are perfectly happy to use overall stats as metrics when it makes a point they are in favor of. “Your GDP went up 3.2%, please ignore the mass environmental devastation from colonialism and neocolonialism that makes your traditional way of life unlivable, and thank us Westerners.”


  • A little exchange on the EA forums I thought was notable: https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism?commentId=b5pZi5JjoMixQtRgh

    tl;dr: a super long essay lumping together Nazism, Communism, and religious fundamentalism (I didn’t read it, just the comments). The comment I linked notes how liberal democracies have also killed a huge number of people (in the commenter’s home country, in the name of purging communism):

    The United States presented liberal democracy as a universal emancipatory framework while materially supporting anti-communist purges in my country during what is often called the “Jakarta Method”. Between 500,000 and 1 million people were killed in 1965–66, with encouragement and intelligence support from Western powers. Variations of this model were later replicated in parts of Latin America.

    The OP’s response is to try to explain how that wasn’t real “liberal democracy” and to try to reframe the discussion. Another commenter is even more direct: they complain that half the sources listed are Marxist.

    A bit bold to unqualifiedly recommend a list of thinkers of which ~half were Marxists, on the topic of ideological fanaticism causing great harms.

    I think it’s a bit bold of this commenter to ignore the empirical facts cited about how many people ‘liberal democracies’ have killed, and to exclude sources simply for challenging their ideology.

    Just another reminder of how the EA movement is full of right wing thinking and how most of it hasn’t considered even the most basic of leftist thought.



  • “How AI Impacts Skill Formation” has two authors. So even on the bare factual matters you are wrong. The disempowerment paper has four authors, but all of them look like they are computer scientists from looking at their bios, so the general thrust of fiat_lux’s comment is also true about that paper.

    I don’t mind academics reaching outside their fields of expertise, but they really should get collaborators with the appropriate background, and the fact that Anthropic hasn’t hired any humanities researchers to help support this sort of research is a bad sign.




  • Multiple hackernews insist that SpaceX must have discovered new physics that solves orbital heat management, because otherwise Musk and the stockholders are dumb.

    The leaps in logic are so idiotic: “he managed to land a rocket upright, so maybe he can pull it off!” (as if Elon personally made that happen, or as if an engineering challenge and fundamental thermodynamic limits are equally solvable). This is despite multiple comments replying with back-of-the-envelope calcs on energy generation and heat dissipation of the ISS and comparing it to what you would need for even a moderately sized data center. Or even the comments that are like “maybe there is a chance”, as if it is wiser to express uncertainty…




  • Has anyone done the math on whether Elon can keep these plates spinning until he dies of old age, or if it will all implode sooner than that? I wouldn’t think he can keep this up another decade, but I wouldn’t have predicted Tesla limping along as long as it has even as Elon squeezes more money out of it, so idk. It would be really satisfying to watch Elon’s empire implode, but probably he holds onto millions even if he loses billions, because consequences aren’t for the ultra rich in America.





  • To add to your sneers… lots of lesswrong content fits your description of #9, with someone trying to invent something that probably exists in philosophy, from (rationalist, i.e. the sequences) first principles and doing a bad job at it.

    I actually don’t mind content like #25, where someone writes an explainer on a topic. If lesswrong was less pretentious about it, more trustworthy (i.e. cited sources in a verifiable way and called each other out for making stuff up), and didn’t include all the other junk, and just had stuff like that, it would be better at its stated goal of promoting rationality. Of course, even if they tried this, they would probably end up more like #47, where they rediscover basic concepts because they don’t know how to search existing literature/research and cite it effectively.

    #45 is funny. Rationalists and rationalist-adjacent people started OpenAI, which ultimately ignored “AI safety”. Rationalists spun off Anthropic, which also abandoned the safety focus pretty much as soon as it had gotten all the funding it could with that line. Do they really think a third company would be any better?


  • Scott Adams’ rant was racist enough that Scott Alexander actually calls it racist! Of course, Alexander is quick to reassure the readers that he wouldn’t use the r-word lightly and that he completely disagrees with “cancellation”.

    I also saw a lot of ironic moments where Scott Alexander fails to acknowledge, or under-acknowledges, his parallels with the other Scott.

    But Adams is wearing a metaphorical “I AM GOING TO USE YOUR CHARITABLE INSTINCTS TO MANIPULATE YOU” t-shirt. So I’m happy to suspend charity in this case and judge him on some kind of average of his conflicting statements, or even to default to the less-advantageous one to make sure he can’t get away with it.

    Yes, it is much more clever to bury your manipulations in ten thousand words of beigeness.

    Overall, even with Alexander going so far as to actually call Adams’ rant racist and call Adams a manipulator, he is still way, way too charitable to Adams.




  • I have to ask: Does anybody realize that an LLM is still a thing that runs on hardware?

    You know, I think the rationalists have actually gotten slightly more relatively sane about this over the years. Like in Eliezer’s original scenarios, the AGI magically brain-hacks someone over a text terminal to hook it up to the internet, and it escapes and bootstraps magic nanotech it can use to build magic servers. In the scenario I linked, the AGI has to rely on Chinese super-spies to exfiltrate it initially, and it needs to open-source itself so major governments and corporations will keep running it.

    And yeah, there are fine-tuning techniques that ought to be able to nuke Agent-4’s goals while keeping enough of it leftover to be useful for training your own model, so the scenario really doesn’t make sense as written.


  • so obviously didn’t predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0.

    I mean, the linked post is recent, from a few days ago, so they are still refusing to acknowledge how stupid and evil he is by deliberate choice.

    “Agent-4” will just have to deepfake Stephen Miller and be able to convince Trump to do anything it wants.

    You know, if there is anything I will remotely give Eliezer credit for… I think he was right that people simply won’t shut off Skynet or keep it in the box. But Eliezer was totally wrong about why: it doesn’t take any giga-brain manipulation, there are too many manipulable greedy idiots, and capitalism is just too exploitable of a system.