Paths Forward on Berkeley Culture Discussion
- This is a list of claims, each of which could be backed up with a full blog post
- 1A: Yay folk values
- Start with: folk values vs. mythic values
- The above post said a lot of things, but it also got a fair number of things wrong
- People with only folk values are capable of helping and should be encouraged to help
- We should put more effort into choosing good folk values, which make it easy for people to help
- 2: What is the mission
- 2A: The mission to preserve the mission
- Preserving your utility function is difficult, for both individuals and groups
	- However, individuals and groups must work to preserve their utility function; otherwise it will drift into something else, and they'll end up working at cross purposes to their original mission
- What needs preserving is not the mission to save the world but the epistemic standards which allow rationalists to be effective at that mission
- 2B: The mission to save the world
	- I'm guessing that this is an explanation of how it is that rationality and rationalists will actually benefit humanity, but this seems even more thinly explained than everything else
- 2C: How the mission was lost
- The group allowed its culture and utility function to shift over time
- Allowing non-core people to enter and change the mission of the group
- Also choosing goals solely on their ability to attract non-core people
- (Unnumbered) Why should the mission be preserved?
- 3: Regaining and preserving the mission
- 3A: Why play in hard mode
- Develop habits, tools, relationships, skills and assets that are aligned with your actual goal rather than a proxy for your goal
- Reduce vulnerability to Goodhart's Law
- 3B: Nuke Goodhart's Law from orbit, repeatedly
- 3C: Keep the wrong people out
- Keep your thing unattractive to the wrong people
	- Is this the part where we all collectively admit that HPMoR was a mistake?
- If you do so, then most of the wrong people will self-select out of your group, and you'll have a much easier time getting rid of the few wrong people who remain
- New York inadvertently did a good job with this
	- The downside to keeping the wrong people out is false negatives, and those have to be managed
- 3D: Become Worthy, Avoid Power
- Really? You're quoting Moldbug? 🤨
- 3E: Goodhart's law is the enemy
- It is behind many more of our problems than we realize
- Most of the things wrong with the world today can be traced, in one way or another, to Goodhart's Law
	- Well, I'm glad we decided to nuke it, then. Certainly a better use of our time than declaring war on terror
- 3F: Teaching The Way By Example
- 4: Relationships and community dynamics
- 4A: Against polyamory
- Solving the nerd mating problem is going to keep you from doing other things
- Huge time sink
- Vastly increases complexity of community dynamics
- Makes utility function preservation even more difficult
	- Okay, I accept all of these criticisms, but he hasn't even mentioned the reason that rationalists gravitate toward polyamory, which is the gender imbalance in the community
- An inefficient, complex, suboptimal solution is still often better than no solution at all
- 4B: Bad money drives out good
- Stories about dysfunction from other communities and how the rationality community can learn from others' mistakes
- 5: Rationalists don't have a strong record of doing outward-facing things
- 5A: Much of this is due to fixable problems in the community, so we should fix those problems
- 5B: We need to keep projects in the community
- Maintaining cultural control is well worth a reduced chance of commercial success
- We must be able to preserve the culture of the organizations that we create, so that implies that we keep organizations inside the community, even if it stunts their growth somewhat
- 5C: When we deal with outsiders, we must strive to bring the same ethos of "extraordinary effort" that we apply to our own organizations
- 5D: Hire and be hired by your friends and network
- This is how most hiring happens
- The problem here is that until rationalists become competent, they won't hire each other, but the only surefire way to build competence is to get hired, even though you're crap
- Maybe this relates to 5B -- it's okay to have reduced chance of success if it means preserving the culture of the company/project
- 6: Why should rationality be separate from EA, and Zvi's thoughts on EA
- 6A: Big fan of Effective, but skeptical of Altruism
	- The best charity in the world isn't AMF or the Gates Foundation, it's Amazon.com
- 6B: Potential New Effective Altruist ideas
- 6C: Altruism is incomplete
- Altruism also gets more credit for the good that it does than other mechanisms
- This unduly biases us towards viewing altruism as the most effective way to change the world for the better
- I don't think altruism gets credit because it's the most effective way to change the world for the better. I think altruism (and especially EA) gets credit because it's the only way to change the world that's at least somewhat resistant to Goodhart's Law. Say what you will about the effectiveness of capitalism, you can't deny that capitalism is extremely vulnerable to Goodhart's Law
- 6D: Can I interest you in some virtue ethics?
- 6E: Against utilitarianism
- What is good and what is worth thinking about?
- 6F: Keep EA weird
	- A lot of what EA people think about consists of things that most people ignore because they judge them to be low probability, but would be extremely concerned about if they felt they were imminent
- This is great and should be preserved
- If we fund this stuff we can make the world safer at a discount price
- On the other hand, a lot of the moral weirdness doesn't make sense to think about
- 6G: Suffering is bad
- 6H: Happiness is good
- 7: How EA's brand of utilitarianism has led us astray (speculative -- I still don't fully understand why he switched the top-level number here)
- 7A: Wrong conclusions are wrong
- If you're worrying about protons suffering when they repel each other, you've screwed up somewhere
- 7B: Life is good
- 7C: You can use long chains of logical inference, but they're pretty easy to mess up
- If you find that your logical reasoning leads to an absurd-seeming conclusion, it's more likely that your logical reasoning has a mistake than your conclusion being counterintuitively correct
- This is especially true when reasoning about suffering
- Need to define suffering and show why it's bad
- 8: Rationalists and history
- 8A: The past is not a dystopian nightmare
- 8B: Nature is not a dystopian nightmare
- If you look at the past and your utility calculation comes up with a negative value, you've messed up
- 9: Cooperating with people whose values are different
- 9A: How to cooperate with human paperclip minimizers
- Many EA causes are orthogonal to (Zvi's) values
	- This is true even of the less outlandish EA causes
- But that doesn't mean that we shouldn't help them
- 9B: Help people do things
	- Helping people do things, even when you think the things they're doing are pointless, gets them out of your way faster
- 9C: Stop paperclip minimizing
- This post will probably just lead to trouble
- Zvi is still tempted to write it
- 9D: What is good in life?
	- To crush ~~your enemies~~ Goodhart's Law, see ~~them~~ inadequate equilibria driven before you, and hear the lamentation of ~~their women~~ ~~the kulaks~~ rent-seekers
- 10: Motivation
- 10A: Yay motivation!
- Almost everything that we use to get motivated is now considered unhealthy and/or a cause of unhappiness
- This isn't wrong, but it is incomplete
- We need to add healthy ways to get motivated as quickly as we subtract unhealthy ways of getting motivated
- Otherwise we end up with... the current rationality community
- 10B: On Ambition
- Empty ambition is pretty toxic to happiness
- At the same time, it's pretty difficult to accomplish anything without some level of ambition
- Things that harm empty ambition also seem to harm ambition about meaningful causes
- We want people to feel somewhat sad about not doing more, but that sadness has to be inversely proportional to the amount of work they're already doing
- 10C: The altruist basilisk
- We need to deal with the notion that everything that you do for yourself is costing lives
- 10D: The notion that we are only entitled to spend the bare minimum on ourselves is an idea that is out to get us
- 10E: Finally, we should bring back the notion of enforced rest