EA member trying to turn this into an AI safety sub
/u/katxwoods is the president and co-founder of Nonlinear, an effective altruist AI x-risk nonprofit incubator. Concerns have been raised about the company and Kat's behavior. It sounds cultish—emotional manipulation, threats, pressuring employees to work without compensation in "inhumane working conditions" which seems to be justified by the belief that the company's mission is to save the world.
Kat has made it her mission to convert people to effective altruism/rationalism, partly via memes spread on Reddit, including this sub. A couple of days ago there was a post on LessWrong discussing whether her memes were so cringe that she was inadvertently harming the cause.
It feels icky that there are EA members who have made it their mission to stealthily influence public opinion through what can only be described as propaganda. Especially considering how EA feels so cultish to begin with.
Kat's posts on /r/singularity where she emphasizes the idea that AI is dangerous:
- Microsoft Executive Says AI Is a "New Kind of Digital Species" (+152 upvotes)
- Stuart Russell says superintelligence is coming, and CEOs of AI companies are deciding our fate. They admit a 10-25% extinction risk—playing Russian roulette with humanity without our consent. Why are we letting them do this? (+901 upvotes)
- OpenAI's o1 schemes more than any major AI model. Why that matters (+36 upvotes)
- The phony comforts of AI skepticism - It's fun to say that artificial intelligence is fake and sucks — but evidence is mounting that it's real and dangerous (+143 upvotes)
- "Everybody will get an ASI. This will empower everybody and prevent centralization of power" This assumes that ASIs will slavishly obey humans. How do you propose to control something that is the best hacker, can spread copies of itself, making it impossible to kill, and can control drone armies? (+87 upvotes)
- It's scary to admit it: AIs are probably smarter than you now. I think they're smarter than me at the very least. Here's a breakdown of their cognitive abilities and where I win or lose compared to o1 (+403 upvotes)
These are just from the past two weeks. I'm sure people have noticed this sub veering toward the AI safety side. I thought it was just because the sub had grown, but there are actually people out there intentionally steering it in this direction. Are they also buying upvotes to aid the process? It wouldn't surprise me. They genuinely believe they are messiahs tasked with saving the world. EA superstar Sam Bankman-Fried justified his business tactics in much the same way, and you all know the story of FTX.
Kat also made a post urging people here to describe their beliefs about AGI timelines and x-risk in percentages, like EA/rationalists do. That post made me roll my eyes. "Hey guys, you should start using our cult's linguistic quirks. I'm not going to mention that it has anything to do with our cult, because I'm trying to subtly convert you guys. So cool! xoxo"