Rationality Reading Group

2019-02-11 Notes

How Civilizations Fall: A Theory of Catabolic Collapse

Introduction

  • The collapse of complex human societies is a poorly understood phenomenon
    • Most proposed explanations fail to describe causative mechanisms
    • Rely on ad-hoc hypotheses based upon the specifics of the civilization
    • Make essentially mystical claims (like all civilizations having natural lifespans like biological organisms)
  • Tainter (1988) proposed a general theory of collapse
    • Complex societies break down when increasing complexity results in negative marginal returns
    • Decrease in sociopolitical complexity yields net benefits to people in the society
    • How do you define sociopolitical complexity?
    • While this theory has strengths, it does not model the temporal aspect of civilizational breakdown
    • While Tainter maintains that this process takes place over a period of decades, many of the examples that he cites take place over a period of centuries
    • The shifts are often a progressive disintegration that takes place over centuries rather than a rapid shift from an unsustainable state to a sustainable one
    • The fall of the Western Roman Empire is even more difficult to square with Tainter's theory
      • Series of crises each leading to a loss of social complexity
      • Temporary sustainability at a less complex level
      • In many cases the level of sociopolitical complexity after the collapse of the Roman empire was lower than the level of sociopolitical complexity that had been present prior to the area's inclusion in the Roman empire
      • Britain, for example, had a flourishing Iron-age society prior to its conquest by Rome, but after Rome's fall, Britain was depopulated, impoverished and politically chaotic for centuries
      • Broader question: why should we even talk about the collapse of the Roman empire?
      • I get that it's romantic and fun to think about, but why do we think that the collapse of Rome is at all applicable to modern civilization?
      • Moreover, empires can collapse without necessarily decreasing sociopolitical complexity: look at the fading of the great empires of the 19th century
      • Or the replacement of the Byzantines by the Ottomans
      • Or, to go back in time, the breakup of Alexander's empire and its eventual division between Rome and the Parthian Persians
    • An alternative model based on perspectives from human ecology offers a more effective way to understand the collapse process
      • Theory of catabolic collapse
      • Models collapse as a self-reinforcing cycle of decline
      • Driven by interactions between:
        • Resources
        • Capital production
        • Waste

The Human Ecology of Collapse

  • At the highest level of abstraction, each human society involves 4 elements
    • Resources
      • Naturally occurring factors in the environment
      • Have not yet been extracted and incorporated into the society's flows of energy and material
      • Examples:
        • Natural resources, such as metal ores, petroleum, etc
        • Soil fertility
        • Human resources – people who could work, but who are not yet working
        • Scientific discoveries which can be made by the society's methods of research, but which have not been made
      • This is a radical oversimplification of resources
      • Necessary to allow us to discern large scale patterns
      • I don't know. It seems like this overbroad definition of resources tries to define anything that can be used or discovered once as a "resource"
      • I'm not sure that's correct, especially with regards to scientific discoveries
    • Capital
      • Consists of everything that has been incorporated into society's flows of energy and material but which is still capable of further use
      • Tools, buildings, machinery, productive farmland
      • Also includes social capital such as economic systems and social organization
    • Waste
      • All factors that have been completely incorporated into a society's flows of energy and material, but which are no longer capable of further use
      • Worn out tools
      • Laborers at the end of their lives
      • Information that is garbled or lost
    • Production
      • Process by which existing capital and resources are combined to create new capital and waste
      • Resources and existing capital can, to some extent, be substituted for one another, but the substitution is not 1:1 and the relationship is nonlinear
      • As the use of resources approaches zero, maintaining any given level of production requires exponential increases in the use of capital
      • Seems like a motte-and-bailey argument
      • Yes, technically this is true, if you count things like sunlight and wind as resources – the sun will eventually burn out, etc
      • However, I get the feeling that this is not what the author intends to mean
      • As a concrete example, what would Greer make of car batteries, which have been in a completely closed cycle for some time now?
      • In any given society, resources and capital enter the production process, and new capital and waste exit the production process
  • The maintenance of a steady state requires that the new capital formed by production equal the waste generated by production and by existing capital
  • I'm deliberately leaving the equations out because I'm not at all convinced that any of the things the author is talking about can be quantified in the way that he intends
  • Societies which expand produce more capital than is necessary to maintain existing stocks
  • This becomes a self-reinforcing cycle – anabolic cycle
    • More capital than is necessary to maintain existing stocks is produced
    • This capital allows the production of even more capital
    • Positive feedback cycle results
  • The self-reinforcing part of an anabolic cycle is limited by two factors
    • Resource depletion:
      • All resources have a replenishment rate and a depletion rate
        • Replenishment rate is the rate at which resources are replenished or the rate at which new resources are found to substitute for the existing resources
        • Depletion rate is the rate at which resources are consumed
      • Resources that are consumed faster than they are replenished become depleted and must be replaced by capital to maintain production
      • Because of the nonlinear substitutability of capital and resources, an exponential amount of new capital is required to replace the depleted resource
        • Once again, I'm not quite sure what he's talking about with regards to "capital" and "resource", but I'm suspicious
        • Resource depletion is totally a thing, and things like declines in soil fertility and loss of rainfall (leading to a decline in the replenishment rate of water) have caused civilizations to collapse
        • However, that's a far more straightforward model than what he's proposing here – you don't need to model things in terms of abstract resources, capital, waste and production to realize that if your civilization relied on 36-40" of rainfall per year, and now you're receiving 22" a year, you're in trouble
    • Inherent relation between capital and waste
      • As capital stocks rise, the amount of production required to maintain the existing capital stocks also rises
        • Increased waste of capital outside of the production process
        • Increased waste of capital in the production of replacement capital
      • This seems… non-obvious
      • He asserts that as capital stocks rise, the amount of capital converted to waste outside of production also rises proportionally
      • This, specifically, is a non-obvious assertion to make
      • To go back to his example with regards to food waste – yes, food spoilage increases if food production increases and nothing else changes
      • But in practice, people find ways of converting the food into other forms or develop new forms of food storage and preservation that allow them to hang on to the surplus
      • All of what he's saying is true at a given level of efficiency, but efficiency isn't fixed
  • When an anabolic cycle ends, a society faces a choice between two strategies:
    • Move to a steady state where new capital production is equal to maintenance production, and depletion rate is equal to replenishment rate
      • Requires social controls to keep capital stocks down to a level where maintenance costs can be met from current production
      • Requires difficult collective choices, but as long as resource availability remains stable, controls on capital production remain in place, and society escapes major exogenous shocks, this process can be maintained indefinitely
        • But this will never happen. A state of stable resource availability, long-term controls on capital production and no major exogenous shocks is a state that has never occurred in this world. What he's describing is more akin to the world from dystopian fiction. Handmaid's Tale springs to mind
        • In reality, there are always exogenous shocks which make this strategy nonviable
        • Remember, exogenous shocks can be positive as well as negative
        • Moreover, this is a recipe for stagnation
    • Prolong the anabolic cycle
      • New technology
      • Military conquest
      • Other means
      • Since increasing production leads to increasing capital stocks (which inherently increases waste), this means that maintenance production must increase
      • Thus a society that attempts to prolong its anabolic cycle must increase its production at an ever increasing rate
      • This leads to problems with resource depletion
    • Okay, so far this is nothing more than a re-hash of the Marxist critique of capitalism, with more algebra
    • Moreover, I continue to be frustrated that he doesn't consider efficiency. He thinks that an increase in production necessarily requires an increase in input resources, when oftentimes that isn't the case
    • Finally, all of this is meaningless without a discussion of what the actual values of these limits are
    • What's amazing is that this paper, so far, has managed to combine some of the worst habits of both conventional and Marxist economics
  • If an attempt to maintain a steady state fails, society enters a contractionary phase, which may take one of two forms:
    • A society that uses resources at or below replenishment rates enters a maintenance crisis
      • Capital cannot be maintained and turns into waste
      • Physical capital is destroyed or spoiled
      • Human populations decline in number
      • Large scale social organizations splinter into smaller, more economical ones
      • Information is forgotten or lost
      • However, because resources are not depleted, maintenance crises tend to be self-limiting
    • A society that uses resources beyond their replenishment rates enters a depletion crisis
      • Key features of maintenance crises are amplified by the effects of resource depletion
      • Resource depletion reduces society's ability to produce new capital, just as maintenance requires more and more new capital production
      • This results in a catabolic cycle, where new capital production remains below the production required for maintenance, even as both decline (a toy numerical sketch of this feedback follows this list)
      • While catabolic cycles may occur in maintenance crises, they tend to be self-limiting
      • However, in a depletion crisis, catabolic cycles accelerate to catabolic collapse, where new capital production approaches zero and most of society's production is converted to waste
      • I don't understand why catabolic collapse can't occur in a maintenance crisis
      • The ability to convert resources into capital is governed by one's existing capital
      • So if a maintenance crisis results in the destruction of capital, then one's ability to convert resources into capital is also affected, which leads to collapse
      • According to Greer, knowledge is capital. Having resources available doesn't help you if you don't know they're available and don't know what to do with them once you've discovered them
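
The notes above deliberately leave out Greer's equations, so the following is not his model, just a minimal toy sketch of the feedback structure described in this section: capital mobilizes resources, production creates new capital, a share of capital decays to waste each step, and resources replenish at a fixed rate. Every functional form and parameter value here is invented for illustration.

```python
# Toy sketch of the anabolic/catabolic feedback described above.
# NOT Greer's equations -- all functional forms and numbers are invented.

def simulate(steps=300,
             resources=200.0,   # unexploited resource stock (arbitrary units)
             capital=5.0,       # usable capital stock (arbitrary units)
             replenishment=0.4, # resources regenerated each step
             extraction=0.02,   # share of remaining resources each unit of capital can mobilize
             productivity=1.0,  # new capital produced per unit of resources consumed
             decay=0.08):       # share of capital that wears out into waste each step
    """Return a list of (resources, capital, new_capital, waste) per step."""
    history = []
    for _ in range(steps):
        # Production: capital draws down resources (capital acts only as a
        # catalyst here, a further simplification of the paper's scheme).
        used = min(resources, extraction * capital * resources)
        new_capital = productivity * used
        # Maintenance: a fixed share of the capital stock becomes waste.
        waste = decay * capital
        capital += new_capital - waste
        resources += replenishment - used
        history.append((resources, capital, new_capital, waste))
    return history

if __name__ == "__main__":
    for t, (r, c, nc, w) in enumerate(simulate()):
        if t % 50 == 0:
            phase = "anabolic (new capital > waste)" if nc > w else "catabolic (new capital < waste)"
            print(f"t={t:3d}  resources={r:8.1f}  capital={c:8.1f}  {phase}")
```

With these made-up numbers the run traces the shape the paper describes: an anabolic boom while resources are abundant, a depletion crisis once extraction outruns replenishment, and a catabolic decline back to a much lower steady state. It is only an illustration of the described dynamic, not evidence for it.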

Testing The Model

  • Let the cherry-picking begin
  • The two types of collapse are ideal types
  • Most actual collapses occur in a range between these two
  • Maintenance crises
    • Kachin societies of Burma
      • Cycle from relatively centralized to decentralized forms without significant losses in physical, human or information capital
    • Historical China
      • Repeated cycles of unification and split into warring states
      • The sustainability of traditional Chinese agriculture meant that replenishment was high, and that any collapse was self-limiting
      • This is Chinese propaganda, by the way
      • While there are continuities among the Chinese civilizations, there are also significant differences
      • The Chinese government likes to say that they are an "unbroken 5000 year old civilization", but it's not clear that this is supported by historiography
      • Moreover, every nationalistic government says this – this is no different than Mussolini tracing Italian heritage back to Rome, Islamic fundamentalists tracing their heritage back to Muhammad, or British nationalists tracing their heritage back to pre-Christian druids
  • Catabolic collapse
    • Western Roman Empire
      • The Mediterranean society at the core of the empire was based on readily replenished resources
      • However, the empire itself was the product of military expansionism and easily depleted resources
        • Like what?
        • The problem with resource depletion theories of Roman collapse is that they very rarely mention which resources were depleted
      • After Rome's initial expansion (i.e. conquest of Gaul and Britain) all the remaining conquests were either resource-poor (Germans) or empires capable of defending themselves (Parthians)
        • Wrong! Rome repeatedly defeated the Parthians, sacked their capital, and annexed territory from them
        • He might be thinking of the Sassanid Persians, who were the successors to the Parthians, and who managed to kill one Emperor on the field of battle and capture another
        • That's a pretty huge miss, though, given that the Sassanid Persians were hundreds of years later than the Parthians
      • The collapse of Rome has an instructive feature which presents further support to the model
        • In AD 297 Diocletian divided the empire into Eastern and Western halves
        • Diocletian did no such thing
        • What Diocletian did was establish a second base of government, closer to the war front, so as to better manage his military campaigns against the Sassanid Persians
        • It was not at all his intent to divide the empire
        • With the death of Theodosius I in AD 395, coordination between the eastern and western halves of the Roman empire effectively ceased
        • The Western Roman Empire produced only a third of the revenue of the Eastern Roman Empire, but had much more territory to defend
        • The split essentially allowed the Eastern Roman Empire to convert large amounts of high-maintenance capital into waste, thus bringing its maintenance costs below its rate of new capital production
        • Further conquests by Muslim Empires also reduced the Eastern Roman Empire's new capital requirements
        • As a result, the Eastern Roman Empire survived for nearly a millennium longer than its Western counterpart
      • My problem with this is that it falls into the Gibbon trap of explaining the fall of Rome
      • It's all about what Rome did
      • However, later historiography and archaeology have shown that Rome was not an all-powerful actor brought low by its own decadence and weakness (Gibbon) or by its territorial over-extension (Greer)
      • There were external demographic shifts – there was a massive population increase along the Rome/Germany border, due to migration from what is now Russia
      • The Sassanid Persians replaced the Parthians, causing Rome to face its first near-peer military threat since Hannibal was defeated in the Second Punic War
      • There were climatic shifts, which made it more difficult for Egypt and the Black Sea colonies to produce grain
      • And even in the face of all this, Rome managed to plow forward for nearly 500 years
    • Lowland Classic Maya
      • Mayan population and agriculture grew beyond a level that could be supported by the nutrient-poor soils of the Yucatan lowlands
      • Mayan polities created massive building projects which did not contribute to further production
      • The result was a rolling collapse over two centuries, in which urban centers were abandoned to the jungle and populations declined precipitously
      • The Lowland Classic Maya collapse was preceded by at least two other similar breakdowns
      • Unclear whether these were maintenance crises preceding the final catabolic collapse or whether there's some other explanation
      • The problem with using the Maya as evidence is that there's so little knowledge about them, and what little evidence there is can be interpreted in such a broad range of ways
      • The Lowland Classic Maya are an archaeological Rorschach test – they have evidence for whatever pet theory you want to push
      • It's like talking about the Minoan (Linear A) people
      • Another theory about the Mayan collapse is that it was a change in rainfall patterns which doomed them, much like a change in rainfall patterns doomed the Pueblo peoples
  • Long-lasting societies often have features that reduce the growth of capital
    • Aspects of the potlatch economy
    • Ritual deposition of prestige metalwork by Bronze and Iron-age peoples of Western Europe
    • Once again, I'm not certain what Greer means by "capital" – the things that were deposited by Bronze and Iron-age peoples in Western Europe were things like jewelry
    • Jewelry is "capital" according to Greer, but it's not what most other people would consider to be capital
    • While these features often have other meanings to the societies which adopt them, societies that adopt these practices tend to survive for longer periods of time than societies that don't
    • Do they? How long did these bronze and iron-age societies survive?
    • A lot of the "sustainable" practices of Native American society are actually relatively recent inventions (i.e. 150-200 years old) that were thought to be ancient because of the "noble savage" myths that Europeans imposed upon the societies they encountered
    • It's not clear that these practices were actually sustainable over a period of thousands of years – they don't have the track record to prove that
  • Okay, so Greer has talked about the big hits (Rome, Maya)
  • But what about the Late Bronze Age Collapse?
  • More broadly, what separates an empire from a civilization?
    • Did Greek civilization collapse, or was it subsumed into the later Roman civilization?
    • Did Roman civilization collapse, or was it subsumed into the later Byzantine Empire?
    • This is a more pertinent question for China, where its various dynastic collapses and conquests straddle the border between empires forming and dying, and total civilizational collapse

Conclusion: Collapse As A Succession Process

  • Even within the social sciences, the process by which complex societies give way to smaller and simpler societies has been described with the language of literary tragedy
  • This is understandable, given the cultural and human costs involved, but it conflates description of the facts with a value judgment
  • A less problematic approach is to use the concept of ecological succession
  • Succession describes the process by which an area not yet occupied by biological organisms is colonized by a series of seres, or biotic complexes, with each sere being replaced by a later one until a stable, self-perpetuating climax community is reached
    • Okay, except that the entire concept of a "climax community" has been pretty well rejected by later ecologists
    • The notion that nature progresses towards an ideal "climax" is an example of anthropomorphic fallacy that was created by early ecologists at the turn of the 20th century
    • Later ecological research has shown that so-called "climax" ecologies are often much less stable than they appear and shift unpredictably between a number of equilibria, since they're chaotic systems
    • There's also the fact that this is warmed over Hegel/Marx. Socialism is inevitable because it is the k-selected ecological successor to r-selected capitalism
  • Earlier seres tend to use r-selected reproductive strategies, maximizing the rate of resource acquisition, while later seres tend to use k-selected strategies, maximizing the efficiency of resource utilization
  • While human societies cannot be directly compared to biological seres, there are certain similarities
  • However, unlike other species, humans can change their strategies – the same humans can be r-selected in one culture and k-selected in another
  • And they are! Look at the steep declines in birthrates in India and China, for example. China's total-fertility rate has fallen from something like 6 to 1.7!
  • And it's tempting to say that that was because of the one-child policy but:
    1. India has experienced a similar decline in birthrate, without imposing any such policy
    2. China relaxed its one-child policy quite some time ago, and birthrates didn't go up
  • There's a subtext here that capitalism is bad, that capitalism is an r-selected strategy, but in reality, capitalist economic development has been the #1 cause of declining birthrates in the world
  • At the top, he calls out other theories of civilizational collapse for making "mystical" claims, but honestly, I don't find his theory to be any less mystical than the theories he criticizes
  • Sure, it's incorrect to compare civilizations to individual biological entities… that doesn't make it correct to compare them to ecologies

2019-02-04 Notes

Four Things You Already Agree With (that mean you're probably on board with effective altruism)

  • Four ideas you probably already agree with
    • I have learned to be very suspicious when someone says that an idea is self-evident
    • It's important to help others
      • When people are in need and we can help them we should
      • And we already fail the self-evident test
      • The capacity to help implies the obligation to help? Really? You just made that leap without any justification whatsoever.
    • People are equal
      • Everyone has an equal claim to being happy, healthy, fulfilled and free
      • Do they? I mean, I believe that they do, but I know that neoreactionaries would strongly disagree
      • Heck, not even neoreactionaries – most people would agree that criminals, for example, have given up their right to be free
    • Helping more is better than helping less
      • We should save more lives, help people live longer, and help people live happier lives
      • The difficulty comes when those goals conflict
      • Imagine if you have enough medicine to save 20 people, and no reason to conserve – would anyone choose to save only some of the people, instead of all 20?
      • Yeah! Lots of people, in practice, would choose to withhold medicine rather than give it to someone who was "undeserving", where "undeserving" can be defined as anything from, "Is a murderer," to "Is of the wrong ethnic group."
    • Our resources are limited
      • We can only spend a certain amount of our resources on charity without causing strain on our own resources
      • Choosing to spend time or money on one option is implicitly choosing to not spend it on other options
  • These four ideas are pretty uncontroversial
    • The more you say that, the less I believe it
  • In fact, defending the opposite positions would probably be difficult and uncomfortable:
    • Helping others isn't morally required or even that good
    • It's okay to value people differently based on arbitrary differences like race, gender, ability, etc
    • It doesn't matter if some people die even if it doesn't cost anything to save their lives
    • We have unlimited resources
    • Okay, I'm not Obormot, but even I can come up with adequate defenses of the first three
      • Helping others isn't morally required, or even that good
        • Is helping others really the most moral thing you could be doing?
        • What about paying attention to your own moral development?
        • Maybe the most moral thing you could be doing is improving your personal moral worth, by living a particular lifestyle, or venerating God
      • It's okay to value people differently based on arbitrary differences like race, gender, ability, etc
        • Whether someone is or is not related to me is an "arbitrary difference", but you are insane if you believe that someone will give a stranger equal weight to their brother/sister
        • And we can scale that up – is it wrong to favor your cousin over a stranger? Is it wrong to favor your neighbor over a stranger? Is it wrong to favor someone from your town over a stranger? Is it wrong to favor someone from your country over a stranger from a foreign land?
        • I might be committing the is-ought fallacy here, but I think it's definitely not self-evident that it's wrong to value people differently based upon "arbitrary differences", when the vast majority of people in the world do exactly that
      • It doesn't matter if some people die even if it doesn't cost anything to save their lives
        • We live in a country which still practices the death penalty
        • Not everyone has an absolute right to life
        • If a convicted murderer on death row gets prostate cancer, which will kill him in 35 years, is it worth investing resources to treat the cancer when the murderer's execution is scheduled for 5 years from now?
  • If these four ideas embody important values, then the way we're thinking about doing good is probably totally wrong
  • In order to be true to the value above, we need to think about how we can help the most people with the resources that we have
  • The difference in impact between causes is huge – the most impactful charities can have hundreds of times the impact of the least impactful
    • It's the difference between helping one person and helping hundreds of people, for the same amount of money
    • Okay, but how are you defining "impact"?
    • Is it lives saved?
    • What about a charity that brings art resources to people living in poverty?
  • A charity chosen at random is probably not making as much impact as the most effective charities
    • There's a bit of confusion going on here between charities and cause areas
    • A charity devoted to the arts might reach ten times fewer people than AMF, but that's because the arts charity is in a different cause area
    • If you're going to compare charities, and claim that the best charities reach tens or hundreds of times as many people as a randomly chosen charity, you need to make sure that you're comparing a randomly chosen charity to the best charity within that cause area
  • The charities that we donate to are, in effect, chosen at random – we choose charities based upon what we have exposure to
  • Every worthy cause should be on the table
    • Climate justice
    • Animal sanctuaries
    • Preventing easily treatable but unpronounceable diseases in places we've never heard of and will probably never visit
  • Trying to be cause neutral is really difficult
    • It's hard to not favor causes that have had a personal impact on you or your family
    • Well, maybe that difficulty is a sign that your "self-evident" axioms really aren't all that self-evident
  • However, if we care about treating people equally, we should care about treating their experiences equally
  • We need to treat all death and suffering as a tragedy, not just the death and suffering that we happen to see
    • Man, I understand why rationalists have such problems with anxiety now
  • Effective altruism is a way to better uphold the values that you already have
    • Uh… no
    • Effective altruism is a way to change my values into a particular form of utilitarianism, which seems geared to give me an anxiety disorder
  • EA asks us to face up to some hard choices, but we're making those choices anyway, whether we think about them or not
  • Even though it might feel difficult to not donate to a charity that seems worthy, you should always remember that you're trading off causes against one another
  • Standard pitch for donating mosquito nets goes here

What Is Utilitarianism

  • Utilitarianism is a collection of philosophical positions which have 5 major characteristics in common
  • Utilitarianism is the doctrine that the morally right thing to do is that which maximizes Utility (a toy formalization of how the five characteristics fit together follows this list)
  • Characteristics of utilitarianism
    • Universalism
      • Moral principles are universal
      • Same moral standards apply to all people and all situations
      • Most philosophies since the Enlightenment have been universalist
      • The utility of all people is important, and is in fact assumed to be equally important
      • However, many people hold that the utility of people who are close to one, such as family and friends, matters more than the utility of people far away
    • Consequentialism
      • What matters, morally speaking, is the consequences of actions
      • Actions aren't inherently good or bad in and of themselves, they are good or bad based upon their outcomes
      • This is a fairly controversial point – many people hold that there are actions which are wrong, regardless of their consequences
    • Welfarism
      • Good consequences are those which improve the well-being of specific people
      • Well-being is defined subjectively, and the definition differs between specific utilitarian philosophies
      • A belief opposed to welfarism would hold that there are principles which are important, even if they don't benefit anyone in the particular situation being discussed
    • Aggregation
      • Utilitarianism is an aggregative philosophy
      • What is good overall is the aggregation of what is good for each and every individual
      • Aggregation is controversial, as it implies that the welfare of different people can always be compared
    • Maximization
      • Utilitarianism is the most famous maximalist philosophy
      • Holds that if something is good, then it is better to have more of it
      • A non-maximalist philosophy would hold that it can be wrong to do something even if it would reduce the total amount of wrong in the world
  • Given these characteristics, utilitarianism holds that
    • Morality of actions is solely judged by how those actions maximize utility
    • Utility is the welfare of individual people from the perspective of those people
    • One person's welfare is as important as another's
  • Non-utilitarian philosophies hold that
    • Actions can be right or wrong regardless of their consequences
    • Some consequences are good, even if they do not increase the welfare of any individual
    • We should promote welfare in some way other than maximization
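
As a reading aid only, here is one way the five characteristics above fit together in code. The actions, people, and welfare numbers are invented placeholders; nothing here comes from the source essay.

```python
# Toy formalization of the five characteristics of utilitarianism listed above.
# Actions, people, and welfare numbers are invented placeholders.

# Consequentialism: actions are judged only by their outcomes, modeled here as
# the welfare each action produces for each person (welfarism).
outcomes = {
    "action_a": {"alice": 5, "bob": 1, "carol": 1},
    "action_b": {"alice": 2, "bob": 3, "carol": 3},
}

def utility(action):
    # Aggregation: the overall good is the sum of every individual's welfare.
    # Universalism: everyone's welfare enters the same formula with equal weight.
    return sum(outcomes[action].values())

# Maximization: the morally right action is whichever yields the most total utility.
best = max(outcomes, key=utility)
print(best, utility(best))  # -> action_b 8
```

Each of the non-utilitarian positions just above rejects one piece of this sketch: the first rejects judging actions only by their outcomes, the second rejects the welfare table, and the third rejects taking the max.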

The Drowning Child and the Expanding Circle

  • Imagine you're walking past a shallow pond and you see a child has fallen in
  • Do you have an obligation to rescue the child, even though it would result in your clothes getting ruined and you being late to school/work?
  • Most people would answer yes
  • Does it make a difference if the child is far away, in another country?
  • If you can save a life at a trivial cost to yourself, don't you have the obligation to do so?
  • At this point, most people challenge the practicalities
    • Can we be sure that the donation will actually get to those in need?
    • Isn't the real problem something else, like a growing world population?
  • Hardly anyone, however, challenges the underlying ethics
    • I'll challenge the underlying ethics
    • I do maintain that it makes a difference that the person whom you're trying to save is in another country
  • The 20th century is the first century in which it's been possible to speak of a global community and global responsibility
  • For most of human history, there was simply no possible way for a person to make a difference for someone else living hundreds or thousands of miles away
  • Advances in communication and transportation have changed that
  • It's now possible to see and affect lives that are halfway across the world
  • Not only is it possible, we are affecting the lives of others and the natural world in which we live
    • Ozone depletion
    • Global warming
    • The actions of a person in Los Angeles can have deleterious effects on a person in Adelaide
  • The modern world is also lacking in meaning and fulfillment – capitalism's only message is consume, and earn more to consume more
  • We cannot see it as our end to acquire more and leave behind an ever larger heap of waste
  • Identifying with other, larger goals lends meaning to our lives, and reconciles ethics with self interest
  • If we can identify our self-interest with the larger interests of humanity and the natural world as a whole, then we are freed from the need to consume ever more in order to get ahead of our peers
  • LessWrong discussion of this essay
    • The primary objection seems to be that Singer, through some inductive sleight of hand, has turned a definite, limited obligation into an indefinite unlimited obligation
    • It's one thing to say that one day you come across a drowning child, and you're obligated to ruin your clothes in order to save him/her
    • It's quite another thing to say, "Every day, when you walk past this pond, a different child is drowning, and every day, you jump in, without regard to your own clothes and needs in order to save this child."
    • The Upstream Parable (linked from the aforesaid LW discussion)
  • Aside from the points raised in the LessWrong discussion, I find it funny that Singer is criticizing capitalism
    • Like it or not, China's adoption of capitalism lifted something like 300,000,000 people out of poverty and into a middle-class lifestyle
    • I have yet to see a charity that has lifted even 3,000,000 people out of poverty

If you want to disagree with effective altruism, you need to disagree with one of these three claims

  • Effective altruism is often motivated by referring to Peter Singer's pond argument
  • This is a mistake
    • Associates EA with international development
    • Makes it appear that if you can refute the pond argument, you can refute the arguments for EA
  • EA is justified by the "general pond argument"
    • The original pond argument is:
      • If you can help others a great deal without sacrificing something of similar significance, you ought to do it
      • We can help the global poor a great deal by giving to effective charities
      • Therefore, we ought to give to effective charities until it becomes a great sacrifice
    • This leads to the objection of wondering whether international aid really helps the global poor
    • However, one can deny the importance of international aid, and still accept the importance of EA
    • As long as there are some actions which benefit others a great deal, but which cost ourselves little, EA will be important
    • This leads to the "general pond argument"
      • As long as there are actions which benefit others a great deal, but which cost us little, we should do them
      • Some of these actions are not widely taken
      • We can find out about these actions using evidence and reason
      • Therefore, there are cost-effective and highly beneficial actions which we could be taking, but are not
        • Is he just assuming the conclusion?
        • His first premise implies that these "pond-like" actions exist
        • His second premise outright states that some of these actions are not widely taken
        • So his conclusion is literally, "Cost effective and highly beneficial actions exist, and are not taken, because my premises state as much"
      • The mission of effective altruism is to find these actions and funnel resources towards them
  • Why do we think there are going to be lots of these cost-effective actions?
    • Global inequality
      • College graduates in developed countries are 100 times as rich as the global poor
      • That means that these people could do 100 times as much good by helping the poor than by helping themselves, just by transferring their income
      • Wait, what? How does that follow? Did you just assume that outcomes scale linearly with money? (See the sketch after this list for the assumption that would make the arithmetic work)
      • Moreover, there are probably ways of helping that are more efficient than income transfer, so in reality the ratio is probably greater than 100x
      • Are there actually such ways of helping? A lot of economics research has shown that if you want to help the poor, straight cash transfers are probably the best way
    • Moral concern for animals
      • Animals have no political or economic power
      • Historically, people have not cared about animals' interests
      • By doing something simple like going vegetarian, you personally prevent about 100 animals from being killed each year
    • The ability to affect the future
      • There will be many more people living in the future than are alive today
      • If you believe that we should have moral concern for future generations and believe that our actions today can affect them, then there are relatively small actions that you could take today which would have massive consequences for future generations
      • The problem there is determining which actions. There are lots of actions which seem good, but which have deleterious second or third order consequences
        • For example: opposing sweatshop labor
        • The first order consequence is good, but if the company closes the sweatshop rather than comply with calls for better working conditions, people often end up going back to subsistence farming, which leaves them less well off
        • So, yes, I agree that there are small actions that you could take today which will leave future generations much better off. The problem is determining what those small actions are
    • The possibility of leverage
      • If you focus on finding the best ways to help others, you can often find ways of doing good which are more effective than just doing good things yourself
      • If you think some action A is good, then you can probably get 10 people to do A
        • Whaaaaaaaa? No, that does not follow at all! Has this dude ever tried to organize anything? It absolutely does not follow that just because some action is good, you can get 10 people to do the good action with you
    • Poor existing methods
      • Many current attempts to do good aren't very strategic or evidence-based
      • There are probably ways to do good which are 10 or even 100 times better than what people normally focus on
      • Again, the argument jumps far beyond what the evidence supports
      • I agree that current ways to do good aren't necessarily very strategic
      • That does not imply that the "best" ways to do good, however "best" is defined, are 10 or 100 times better than current methods
  • How not to refute the importance of effective altruism
    • To disagree with effective altruism, you need to disagree with one of the parts of the "general pond argument"
    • Most critiques of EA fail to hit the mark
    • Common failure modes
      • Equating effective altruism with utilitarianism
        • EA rests on a much weaker moral claim than utilitarianism
        • EA merely says that you ought to do actions that are a great benefit to others with little cost to yourself
        • In contrast utilitarianism says that you ought to do an action that's a major sacrifice, as long as it does slightly more good to others
        • This is a much stronger claim
        • Is it? It seems to me that the difference between EA and utilitarianism is a mere difference in degree, not a difference in kind
        • Utilitarianism also denies that anything matters except welfare and that it's okay to violate rights in favor of the greater good
        • What? You're straw-manning utilitarianism here. There are variants of utilitarianism which do admit the notion of inviolable rights
        • Heck, I'm not a utilitarian, nor am I especially supportive of EA, and even I don't strawman utilitarianism like this
      • Arguing that a specific action is not cost-effective and high-benefit
        • This is not a general critique of EA
        • It's a contribution to help EA find the best missions to support
      • Saying that EAs think you should only support charities that have randomized controlled evidence behind them
        • RCTs are just a tool
        • There are probably other ways to identify effective charities
  • What types of criticism might hit the mark
    • Deny the moral claim
      • Implies that you would let a child drown in a pond in front of you
      • You monster
    • Show that there's an important moral difference between saving the child drowning in the pond and all the other cost-effective actions to save human life that you could be taking
      • This is much more difficult than showing there's a difference between any specific action (like donating to charities) and saving the child in the pond
      • Wait, how does this claim follow?
      • I don't have to show that all possible EA-supported actions are different from saving a child drowning in the shallow pond, I just have to show that all the proposed EA-supported actions are different from saving a child drowning in the pond
      • Otherwise, the claim becomes merely, "There exists some other action which is equally cost effective and morally obligatory as saving a child from drowning in a pond," which is a much weaker claim.
      • It's a motte-and-bailey argument. The motte is, "There exists at least one way of helping people which is very cost effective." The bailey is, "Go donate to AMF, or the worm charity, or wild-animal suffering research, because it is that way."
    • Accept that effective altruism is correct, but deny that the effective altruism movement will do much good
      • Why not both? EA is incorrect and the movement won't end up doing much good
  • Conclusion
    • Discuss a wider range of actions than just donating to international health charities
    • If we can communicate the idea that there exist cost-effective ways to save lives then we can make the case for EA in a much more robust fashion than by focusing on specific actions
  • My thoughts
    • Not only did this not address any of the objections from the discussion of Singer's pond argument on LessWrong, it managed to introduce new weaknesses
    • That's some next level failure at logical reasoning
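
On the "100 times as rich, therefore 100 times as much good" step queried above: the argument does not actually need outcomes to scale linearly with money; it needs an assumption about diminishing marginal utility. One common assumption (supplied here by me, not stated in the post) is logarithmic utility of consumption, under which an extra dollar is worth about 100 times more to someone with one hundredth of the consumption. A minimal check, with made-up consumption figures:

```python
# Marginal-utility arithmetic behind the "100x" claim discussed above.
# The log-utility assumption and the dollar figures are illustrative only.

def marginal_utility(consumption):
    # Log utility: u(c) = ln(c), so an extra dollar is worth u'(c) = 1 / c.
    return 1.0 / consumption

rich = 30_000.0  # hypothetical annual consumption of a rich-country graduate
poor = 300.0     # hypothetical annual consumption of someone 100x poorer

print(marginal_utility(poor) / marginal_utility(rich))  # -> 100.0
```

Under log utility the ratio of marginal utilities is exactly the ratio of incomes, which is the unstated step that makes the "100x" figure come out; whether real-world outcomes actually track marginal utility this cleanly is exactly what the note above is questioning.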

On Caring

  • It's difficult to feel the size of large numbers
  • A billion feels just a bit bigger than a million, even though it's a thousand times bigger
  • This is related to scope insensitivity
  • It matters because sometimes the things you care about are really numerous
  • Billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease
  • Even though the vast majority of them are out of sight, I still care about them
    • Why?
    • This is a legitimate question – why do you care? It just seems like a really good way of giving yourself anxiety issues
    • Moreover, what moral force makes it your responsibility to help them?
  • Knowing that, Nate Soares cares about every single individual on the planet
  • The problem is that the human brain is simply incapable of taking the amount of caring it can feel and scaling it up to encompass the entire planet
    • Maybe that's a sign that you shouldn't be doing that
  • Caring about the world isn't about having a gut-feeling that corresponds to the level of suffering in the world
  • It's about doing the right thing anyway, even without having that feeling
  • We're playing for incredibly high stakes
    • Billions of people suffering today
    • Trillions or quadrillions of people who will exist in the future
  • When faced with stakes like these, your internal heuristics completely fail to grasp the situation
    • But here's the thing: let's say that your internal heuristics do grasp the situation
    • Let's say that you are capable of feeling the entirety of human suffering
    • What does that get you?
    • Won't that just leave you a non-functional wreck, sobbing every minute of every day, because some peasant in Africa stepped on a nail and is now dying of tetanus?
  • Saving one life feels just as good as saving the entire world
    • Saving one life actually probably feels better than saving the entire world, at least that's the impression I get from the interviews with Stanislav Petrov
  • There's a mental shift that happens when you internalize scope insensitivity
    • Notice that most charitable donations are made in a social context
    • We agree that people should donate to charity, but when we see someone give everything to charity, we think they're crazy
  • When some people internalize scope insensitivity, they freeze up
    • See that there's no way that they can do anything to affect the world's problems
    • Freeze up, since there are so many problems and so little time to affect them
    • Honestly, his description of Daniel's thought process reads more like the description of a mental breakdown or anxiety attack
    • Literally this t-shirt
  • Most of us go through life understanding that we should care about people far away, but failing to care
  • But this is an error – we should donate despite not caring
  • There is no way you can care enough to use "care" as a motivation to be altruistic
  • So, if you can't use how much you care as a heuristic and you can't use social pressure as a heuristic, what do you do?
    • Not sure yet
    • GiveWell, MIRI, FHI, etc are all efforts at answering this question
  • It's easy to look at virtuous people and conclude that they must have cared more than we did
  • But that's probably not the case
    • Uh, it actually probably is
    • This is Nate Soares typical-minding again – look, dude, just because you're an uncaring, unfeeling, perfectly rational and 100% Effective robot, doesn't mean that everyone is
    • Martin Luther King? He cared about the plight of black people in the Jim Crow South. He cared, and he was angry. That's what motivated him to go out there every day, and put his life on the line, day in and day out, to try to get reforms pushed through
    • Same with Mandela, Gandhi, Mother Teresa, etc. They all cared. None of them did this, "Oh, I can't possibly care enough, but I'm going to do the right thing anyway," dance that Nate is describing
  • Nobody can care enough to comprehend the problems that we face
    • Sure, no one person can care enough. But maybe if enough people care just a little bit, then change can occur
    • In my view, the biggest obstacle to real improvement in people's lives in the developing world isn't that strangers in West don't care enough about them
    • In my view, the biggest obstacle is a mindset that today must be like yesterday and tomorrow must be like today. Once people start caring, not about the world, but about themselves and their own situations, change proceeds rapidly
  • Instead of relying on caring, we should rely on doing the multiplication, and then doing what the math tells us to do despite not caring

2019-01-28 Notes

The Tower

Part I

  • If you read the Bible, you'll see that the people of Babel didn't build the Tower of Babel to reach the heavens
  • They built the tower as a symbol, to bring people together
  • The Tower of Babel was the first Schelling Point
  • When God descended to Earth to see what people were up to, he found that they were of one language, and thus there was nothing that they couldn't do
  • So God scattered the people and gave them different languages, in order to weaken them
  • God was scared

Part II

  • The vast majority of amateur philosophy is just reinvention
  • Your thoughts about the meaning of life and ethics, etc have already been written down, debated, and worked over, probably thousands of years ago
  • That said, thinking about the questions of philosophy is still important – it's still important to try to find the answer yourself, even if the answer you find is unsatisfactory, and isn't even original
  • The question is: why don't people choose to be happy?
    • For example, take the wireheading scenario
    • There exists a machine that will make you feel not just pleasure, but true happiness, forever
    • The machine never stops working, has no side effects, induces no tolerance, and isn't physically addictive in any way
    • But once people start using it, no one ever stops
      • Okay, see this is Hotel Concierge trying to sneak a contradiction past us
      • He's saying, denotatively, that the machine isn't addictive – it's perfectly safe, you can stop at any time
      • But connotatively, he's saying the exact opposite – this machine is incredibly addictive, it is in fact the most addictive thing ever created, it makes black tar heroin look like black tea by comparison
      • Now, both of these scenarios are logically valid. I can imagine a world where people hook themselves up to these machines and never stop. I can also imagine a world where people treat these machines as a sort of recreational drug – they hook themselves up every once in a while, and then return to the "real world" afterwards. But these scenarios cannot be simultaneously true.
    • Would you hook yourself up to such a machine?
      • Depends on which world I'm in
    • Alternative formulation of this scenario: what if you were offered a free trip to a virtual reality paradise. However, upon your return, all memories of the trip are erased, all skills unlearned, etc.
    • Would you still go?
    • Both of these choices are fundamentally the same: ecstasy that leaves no trace, compared with bland but tangible reality
    • If you would spend one year in the Matrix, why not two? Why not twenty? Why not the rest of your life?
      • Because my life isn't a slippery slope fallacy
  • These concerns aren't theoretical
  • Kahneman has demonstrated that there's a big difference between the "experiencing self" and the "remembering self"
  • Except Kahneman doesn't take the concept far enough
    • Kamikazes and suicide bombers don't die for their own remembering selves, they die for other people's remembering selves
    • And this value has been valorized in one way or another since antiquity
    • The remembering self doesn't want quality of life, it wants quality of death – it wants a life that others will look back on and say was a good life
    • The remembering self is entirely willing to sacrifice massive amounts of short term happiness for even a chance at a legacy
    • Utilitarianism is really bad at capturing this impulse
      • Is it? Not all utilitarianism is hedonic utilitarianism!
  • This ties into the notion of "memes" and "cultural evolution"
    • Memory is a collection of memes
    • The remembering self is our record of working towards the interests of certain memes and against others
    • Our "free will" is merely us choosing from among the memes that surround us
    • Is this Jung? Is he restating Jung in different words, or is he pointing at something different?

Part III

  • The first dichotomy, according to Freud, separates ego-instincts from object-instincts
    • Okay, but this is Freud, so we have approximately zero evidence that this corresponds to anything other than Freud's own analysis based on his own non-observations
  • Ego-instincts are necessities
    • Examples:
      • Hunger
      • Thirst
      • Respiration
      • Fatigue
      • Crude sexual desire
    • Newborns treat the entire world as an extension of their ego-instincts
  • Object-instincts are that which develop as we realize our abilities and begin to direct them outwards
    • Freud's second dichotomy divides object-instincts into two categories: Eros, the love instinct, and Thanatos, the death instinct
    • Eros
      • Not just love but more like belonging and acceptance
      • The feeling of being truly recognized and accepted by another
    • Thanatos
      • Not death, but control - ananke
      • I don't know if he's using fancy Greek because it's legitimately a better way of expressing what he wants to express, or because he just likes the feeling of writing fancy Greek and lording it over his readers
      • Self-destruction is the ultimate expression of control
      • Thanatos is the compulsion to learn and control
  • According to Freud, the id is what we want
  • The superego chooses how we go about wanting
  • How do we make those choices?
  • At first, ananke drives development
    • The early developmental history of children is one of them assuming more and more control over themselves and their environment
  • Then, once we learn object permanence and develop memory, we become subject to operant conditioning
  • Operant conditioning is what allows social and civilizational memes to colonize our minds
  • These inculcated memes cooperate and compete for the real estate of the mind
  • This process of taking in memes and incorporating them into our existing mind is how semantic memory is created
    • Okay, this is all plausible, but I will note that his citations are to Wikipedia articles that have a bunch of flags on them for potential inaccuracy
    • Who knows if any of what he's saying is even approximately true
  • The final algorithm that governs our actions must simultaneously satisfy both Eros and Ananke to some extent in each moment
    • That is, from moment to moment, the id's desires and impulses can't be totally neglected
  • However, the remembering self will use the superego's algorithm when looking back on its actions and assigning meaning to memories
  • It's the superego, not the id that answers the question, "Did I do what I really wanted?"
  • The remembering self doesn't really care which goal you pick, only that there is a goal
    • This is why minimum wage jobs are so unsatisfying
    • No one cares if you do well
    • If you screw up, there's no correction, you're just fired and replaced with the next willing person
  • The way to a well-lived life is to pick a goal and pursue it
    • The role of adolescence is to look around, explore, and find a goal
    • The role of adulthood is to switch from explore to exploit and push towards the goal that you've chosen
    • I guess it would be beside the point here to say that modern adolescence is a cultural construction
    • Yes, adolescence is culturally constructed, but that's the point – our culture, for probably the first time in human history, allows the majority of people some kind of a choice in their life path
    • As a result, we've come up with this new life phase – adolescence – to deal with the fact that you have to choose what you want to be/do in adulthood
  • Yes, the goals are arbitrary and meaningless, but the path to a well-remembered life is to have one
  • Happiness and meaning – sometimes they overlap and sometimes they conflict
  • There's no right choice between happiness and meaning, but one should be careful to avoid the trap where one achieves neither
    • The id is terrible at long-term hedonism
    • The default superego is full of malignant memes that will leave you miserable, and will also leave your autobiography incoherent
  • The key word above is default
    • We all have some degree of protection from malignant memes, either through isolation or through positive memes that have been inculcated in us by parents or society
  • However, most of us don't take enough precautions when dealing with new memes
  • Judaism is interesting, as a religion, because it tries to enforce memetic hygiene through its many arbitrary rules and its strictures against proselytizing
    • Rules create an obstacle course that memes have to pass through before they're deemed acceptable
    • Strictures against proselytizing reduce exposure to foreign memes
  • A free flow of information reduces memetic hygiene – it's the equivalent of a suppressed immune system
  • Secular humanism is a motte-and-bailey
    • Milquetoast ideals
    • Provide no guidance in day-to-day life
    • Leave you vulnerable to whatever crypto-ideology is most virulent
    • I think his argument is itself a motte-and-bailey
    • Secular humanism has no answer to what you should do when someone slaps your girlfriend's ass at the club? Really? That seems like a real strawman
  • With a free flow of information, every meme attains its most virulent form
    • Free flow of information = low memetic defenses
    • Thus the fastest, most virulent memes will attain the greatest success
    • So Facebook isn't cancer… it's AIDS
  • Memetic selection is even less likely to induce cooperation between memes than natural selection
    • Unlike genes in natural selection, memes aren't spread together in chromosomes
      • Genes aren't spread together in chromosomes either – check your eukaryote privilege!
      • Bacterial genes can and do spread one at a time – this is how we discovered CRISPR-Cas9
    • A person can only spread one meme at a time, so there is a significant disincentive for memes to cooperate

Part IV

  • References Gwern's Culture is not about Esthetics
    • The argument in Culture is not about Esthetics is that producing new fiction works should not be subsidized and may in fact be actually harmful
    • People are primed towards novelty, and new fiction takes away from older works which are of significantly higher quality
    • Every person has 500,000 hours – shouldn't they spend those hours on what will bring them the most enjoyment?
    • We shouldn't encourage the production of new fictional works, and indeed we should discourage it – there is already too much fiction, and adding more to the pile only makes things worse
  • There are some flaws with this argument
    • It assumes that you can rank art, along some kind of unified scale
    • If people won't read sensationalist fiction, they'll read sensationalist non-fiction, and that's no better (in fact, it's probably worse)
    • Moreover, what will all these people do in their spare time, if art is illegal? Art begets art, in reaction
    • Fiction teaches moral lessons, insofar as it shows you how moral principles apply to everyday life, and allows you to imagine how you'd react in scenarios that you haven't yet encountered
    • We face a lot of moral dilemmas every time we interact with other humans, and modern literary fiction offers us a manual on how to navigate those dilemmas
    • Modern works of art translate ancient principles into modern language, allowing people to relate those principles to their times and circumstances
    • Art is compressed communication
    • The limitations of an artistic form strip out unnecessary information, and allow the artist to convey an impression of some kind of greater pattern to the viewer
    • Art exists for its own sake, it exists to be understood

Part V

  • "Ease of having one's art understood" is the definition of privilege
    • Really? Because modern art is inscrutable as hell, and it's made and consumed by the most privileged people in society
  • The academic notion of privilege fails because it emphasizes the experiencing self
    • As it turns out, most people are happy, regardless of their circumstances
    • So you can't use happiness as a marker of privilege
  • The remembering self is different
    • Although happiness saturates with income after about $75,000 in the US, "life satisfaction" does not
    • Independent of happiness, wealth buys freedom from routine
    • And just as wealth can be spent on ways to acquire new experiences, it can also be spent on ways to express those experiences in novel ways
    • Upper class people are described as "cultured" because they know a lot of culture, and can describe their experiences by referencing cultural artifacts in ways that uncultured people cannot
  • If the two parts of our lives are pleasure and being understood, then increasing wealth quickly saturates the former, but may never saturate the latter
  • This means that money is good – money buys freedom
  • Absolute amounts of money, not just relative amounts matter, because money buys things and experiences that our remembering self can look back on as signifying a meaningful life
    • Is this true, though? Doesn't inflation eat away at much of this?
    • At the very least the GDP figures he's talking about have to be inflation-adjusted GDP, otherwise the whole exercise is meaningless
  • Belonging to the dominant race and sex grants the same sort of privilege as wealth, but by a different mechanism
    • Wealth makes it easy for you to speak the cultural language
    • Being of the same race, gender, sexual orientation, etc, makes it easier for others to listen
  • This ties into stereotypes
    • Stereotypes are necessary to function
    • If we didn't rely on stereotypes, we'd all be borderline autistic in our everyday interactions
  • Most racists are actually culturists – they don't hate people because of their race, they hate people of a certain race because that race is associated with certain cultural characteristics
    • Race and gender are social constructs, but the characteristics that correlate with race and gender are real
    • Until we can learn to speak pheromone, all our interactions will be mediated by stereotypes
    • This is why the standard "don't be racist" framing fails – you can't tell people to get rid of a stereotype without replacing it with a more benign stereotype
  • Stereotypes and microaggressions (which are behaviors created from stereotypes) lead to harm, even when they're positive
  • They cause harm to the remembering self even if they don't cause harm to the experiencing self

Part VI

  • If globalization is the primary phenomenon of the 21st century, then immigration will be the defining political conflict
  • Immigration used in a broad way – talking about immigration of both people and ideas
  • Stereotyping applies just as much to low-class whites as it does to other minorities
    • The "y'all" class
  • At the core of the rebellion that led to Trump was a need for respect – an acknowledgement that they are also human beings struggling for their values
    • To be honest, in the US, this conflict has been going on since at least the Civil War
    • The distinction between the South and the North in the Civil War was that the North ended up fighting a war for principle, and the South ended up fighting a war for tradition
    • So I don't think this phenomenon is as new as he's claiming it to be
  • When conservatives talk about "the Gay Agenda", they're not talking about a conspiracy of gay people to force people into homosexual relationships
  • They're talking about the rhetoric that puts them on the "wrong side of history" – rhetoric that claims that in a few years their objections will be irrelevant because they'll have been steamrolled by the grand tide of social progress
    • For an example of this, look at Conservatives As Moral Mutants
    • How much of this is the conservatives' actual argument, and how much is the "obnoxious kind of steelmanning" that Ozy talks about in their post, Against Steelmanning?
    • I'm asking because I've seen this argument posed elsewhere – people whose (political/cultural/social/etc) ideas I disagree with are really just asking for respect! I have not once seen it posed by a member of the group that is being called out. In my experience, it's always been a way for apologists for a group that is being marginalized (for good or bad reasons) to oppose the marginalization of that group
    • To put it crudely, I will be convinced that conservative opposition to the "gay agenda" is "really about respect" when I hear it coming from someone wearing a MAGA hat
  • This is especially relevant for the (white) working class, because in order to advance socially, they have to impress managers
  • So for them, changing views on social issues are a direct threat to their economic livelihood
    • Eh, again, I'm wondering how much of this is true
    • Lots of liberals hire workers whose values they don't share
    • I think he's falling into the Bay Area trap of thinking that the world is more politicized than it actually is
  • The specific way that the media has talked about race and culture has made the problem worse
    • Created a set of vague rules around "appropriation" and etiquette
    • Loudly proclaimed that immigration is the end of white America
  • The point is that the left wants to prevent assimilation

Part VII

  • The easiest way to write "minority" characters is to make "a-racial" characters, and then cast minority actors to play them
    • Prevents potentially embarrassing misunderstandings
    • Allows the minority characters to be legible to a mainstream audience
  • A better way to get minority characters is to hire minority writers
  • But the cultural criteria used to judge whether those minority writers are good are the same cultural criteria used to judge whether white writers are good
  • This is how the upper-middle-class protects itself from change
  • They concede that minorities will immigrate, but set the conditions of social advancement such that one has to adopt an upper-middle-class value system, mindset and mannerisms in order to advance economically and socially
  • Affirmative action is good, but it tends to promote exactly the sorts of people who will perpetuate the upper-middle-class mindset
    • Half of African Americans oppose gay marriage, but you'll never see the upper-middle-class talking about that
    • Except you totally will; you just have to look for it
  • The debates about whose opinions should be heard at universities misses the point
  • No matter who wins that debate, the university and the people who go to universities win because it reinforces the idea that the only valid place to form an opinion is a university
  • Everyone understands that class is hereditary, but the strongest critiques of the hereditary class system come from those who benefit from it the most
  • The old aristocracy has transmuted itself into the "media elite"
    • Aristocrats used to be in the "service industry"
      • Wait, what? He's defining "service industry" as "any industry where the customer is right" and then using "writer, therapist, barber, sales" as his examples
      • Are these all service-industry jobs? Or is he redefining words to connote that being courtier to a king is somehow analogous to working the sales floor at Best Buy?
    • Once the industrial revolution erodes the old social classes, these aristocrats form a "meta-service industry", which tells other people what is correct or incorrect to read or write
  • This translates into schools, which are more concerned with teaching the correct answers to allow people to advance into the upper-middle-class than with teaching practical skills such as how to cook or balance a checkbook
  • Even if this doesn't translate into people reaching the upper-middle class, the main criterion for employment in the service industry is the ability to be inoffensive
  • In this way, maybe schools are doing exactly what they ought to be doing – training workers for jobs
  • If he is correct that the majority of jobs are going to be in the "service industry", then perhaps teaching kids to be inoffensive is exactly the sort of thing that schools ought to be doing

Part VIII

  • The neoreactionaries think that we are living in the end times
  • Think that democracy is a memetic virus that's tearing its way through civilization, and that civilization will soon fall
  • But the neoreactionaries are wrong
  • This sort of coming apart is what the Tower of Babel is an allegory for
  • Civilization always comes together, and then it always comes apart
  • There is no way to impose one culture, one value system on all of humanity
  • There will always be people whose values are denied by that system, and they will always rebel
  • The neoreactionaries' ideal homogeneous nation would be no different, day to day, from the United States
    • Or, at least, it wouldn't feel much different, day-to-day
  • Moreover, real diversity has benefits
  • The US has contributed far more to the world than homogenous ethno-states
  • That contribution has occurred because of the US's diversity, not in spite of it
  • However, just because this process of falling apart and coming together has occurred in the past, it doesn't mean that it will be pleasant to live through

2018-07-23 Notes

Neurons Gone Wild

(Previously discussed on 2017-01-02)

  • Can we come up with a scientific explanation for religious experiences?
  • Neurons, selfish and feral
    • "Selfish" neurons
    • Neurons are in a state of competition for resources
    • Mental activity feeds neurons
    • This competition is the key behind neuroplasticity - neurons actively join more active networks in order to gain resources
  • Agents all the way down
    • Agent - any entity capable of autonomous goal-directed behavior
    • Agency is a matter of degrees
    • Agency is not inherent to the system, but is ascribed to the system by post-hoc analysis
    • Agency is a fundamental property of the brain
    • Because neurons have a higher level of agency than other cells, the brain is configured to run agents by default
  • Level 2: Modules
    • We can describe the brain at a slightly higher level of abstraction as hundreds or thousands of cooperating and competing modules
    • According to Dennett and Seung, these modules have the same sort of selfishness as the neurons they're made from
  • Level 3: Subpersonal Agents
    • Drives/instincts
    • Can "feel" these agents via introspection
    • These agents aren't capable of using language, but we still speak of them "telling" us things
  • Level 4: The self
    • Social agent
    • Not in control, but is the "voice" of the most powerful faction in our mind
  • Birth Defects in the Self
    • Is the human mind capable of supporting multiple self-agents?
  • Multiple occupancy
    • There are multiple psychological disorders where there appear to be multiple agents in the brain
    • Schizophrenia - hallucinated voices
    • Dissociative identity disorder - 2 or more person-like agents in the same brain
    • Possession trances - "gods" who temporarily inhabit minds
    • Split-brain - when communication between parts of the brain is severed, each half acts like its own agent
    • Could these agents be independently sentient?
  • Agent horticulture
    • Tulpas - trying to create additional agents
  • Taking demons seriously
    • What is the psychological or anthropological explanation for demon possession and exorcism?
    • Can we think of curing possession as changing the agent that's in control of the brain?
    • The exorcist is a person with the moral authority to negotiate with the currently dominant agent and persuade them to relinquish control
  • My Thoughts
    • This is probably the most charitable interpretation of tulpas I've read
    • Naturalistic explanation for tulpas and other religious/demonological experiences
    • Still doesn't prove that tulpas are a good thing or that you should want to create additional agents in the brain

Highly Advanced Tulpamancy 101 For Beginners

  • The brain is probably one of the most complex things that we know of
  • We don't model ourselves as complex chaotic systems of firing neurons, we model ourselves as entities that have a discrete existence in the world
  • Our model of ourselves has the self as a distinct entity
    • This entity, like all other entities in the world, has various attributes
    • However, unlike other entities in the world, the self is a far less well-defined category than other categories we're used to
    • Can put almost anything as an attribute of the self
      • Myers-Briggs personality type
      • Physical characteristics
      • Neurotype
  • This broad conception of the self is adaptive - allows us to feel like an attack on those similar to us is an attack upon us
  • This broad conception of the self can also be maladaptive
    • If you are given a label, you can come to associate attributes associated with the label with yourself, even when there is no reason to do so
    • Example: if you associate "depression" with laziness, then getting a diagnosis of depression can cause you to think of yourself as lazy, even when there is no other reason for you to do so
  • An extensional definition of the self:
    • Experience of perception - our senses provide an extremely high fidelity feed of information about the outside world (within certain limits)
    • Experience of internal mind voice - many people experience an internal monologue or dialogue
    • Experience of emotion - most people experience emotions, although the strength of this experience varies from person to person
    • Experience of the body - various signals about our bodily state (pain, hunger, etc.) - distinct from experience of perception, since perception is about things outside the body
    • Experience of abstract thought - mental images, imagined scenes, mathematical calculation, etc
    • Experience of memory - the experience of calling up past memories
    • Experience of choice - the experience of having control over our lives
  • This extensional definition is incomplete
  • People also define themselves intensionally - with various attributes that they've chosen for themselves
    • Religion
    • Nationality
    • Age class
  • This self-definition creates a self-schema - a collection of memories, attitudes, demeanors, generalizations, etc that defines how the person views themselves and interacts with the world
  • People can have multiple self-schemas for various situations
    • I might think of myself as an engineer primarily at work, and "the dude who does notes" when I'm at RRG - these are different schemas
  • A tulpa is a highly partitioned and developed self-schema that is "always on"
  • More specifically a tulpa is: "an autonomous entity existing within the brain of a 'host'. They are distinct from the host in that they possess their own personality, opinions, and actions, which are independent of the host’s, and are conscious entities in that they possess awareness of themselves and the world."
    • That's not the same thing as a self-schema, and claiming that it is amounts to a motte-and-bailey
  • The challenge with creating a tulpa is to extend the intensional-definition split of having multiple self-schemas out into the extensional definition of the self
    • Have different senses of perception, internal mind-voice, etc. when you're in a different self-schema
    • Yeah, this is dangerous; so the first rule is don't do this
  • Changing our extensional definition of the self is far more dangerous than changing our intensional definition, but maybe we can compartmentalize our changes
    • We can't. Nothing that I've read about rationalists dealing with tulpas or narratives in their daily lives has convinced me otherwise
  • How to tulpa
    • First learn how to build a mental compartment
      • Pick an idea
      • Let go of all the system-2 safeguards that you have protecting you against bias when dealing with that idea
      • Ignore all counterevidence
      • If the idea forms a successful prediction, keep the warm glow of success in the compartment
      • If the idea fails, keep the failure outside and don't let the compartment update
    • Pick the beliefs that you want to have, and sort them into compartments
      • If two beliefs would interact destructively, put them in separate compartments and don't let them interact
    • Regulate information intake, deciding which compartment each new piece of information should go into
      • Normally this is done unconsciously, but you'll have to exert some conscious control, since you're attempting to build up multiple selves rather than a single unitary self
    • Once the sorting becomes automatic, then you'll end up with multiple categories, each with radically different beliefs about the world
      • These categories are tulpas
  • Failure mode - tulpa doesn't seem to be "talking back"
    • You need to put beliefs and experiences into your tulpa in order for it to do things
    • If you have to ask whether your tulpa is working, it isn't working
    • Your sense of self is how your mental algorithms feel from the inside - if you feel the same, you haven't managed to change your mental algorithms
  • The real trick with creating a tulpa is somehow seeing yourself as multiple entities, not a singleton
  • My thoughts
    • Did I just read a guide on how to make yourself go insane?
    • Sometimes I think Hive is normal, and then I read stuff like this
    • And Hive doesn't answer the real question: why would anyone want to inflict this upon themselves?

Highly Advanced Tulpamancy 201 For Tropers

  • In the previous essay, we saw how we can alter the process the brain uses to create a self and use it to create multiple selves
  • Further questions
    • What does it mean to give up control of the body/senses?
    • Is the new entity a separate person?
    • What happens if there's a power struggle?
    • How can you do this without permanently damaging yourself?
      • Narrator: "You can't."
  • Start with the question of how you create identity, in the normal case:
    • We take in or discard information all the time based upon our worldview
    • Many parts of our identity are defined by how we say they're defined
      • Narrative self
      • You are defined by the actions that you allow yourself to take
      • This can be conceptualized by thinking of your life as a story, and seeing what actions are available to you as the main character in that story
      • Our beliefs strongly determine whether we can do something
        • Not to an infinite extent - if I believe that imaginary energy can make me fly, it won't stop me from falling to my death
        • But if I think I can't fly, then I'm going to be much less likely to take up hobbies like hang-gliding or piloting
        • The tulpa community is full of things that only happen when you believe that you can do them
        • There appear to be large parts of the mind that can be shaped by how you believe they're supposed to be shaped
    • Our minds, normally, don't model the world as it is, they model the world in narrative terms
      • Unless we're specifically investigating a phenomenon, we don't explicitly think of the world in terms of particles and fields
      • We think of the world in terms of higher-level objects
      • At human scales, it doesn't often matter whether gravity is the action of a god or a distortion in spacetime -- it only matters when you're engineering things outside of ancestral human experience
  • Chuunibyou Hosts On Turbo Gender
    • At some point, everyone realizes that they're an independent person, capable of making their own choices
    • When children realize this, they often start acting out or being weird
    • Usually what happens is that under social pressure, they rein in their weirdness and end up as a normal well-adjusted person
    • However, this moment is interesting - people understand that they can change their identity, and realize that they have a choice in what their identity is
      • How much of this is cultural? Would someone from the Middle Ages have thought of themselves in this way?
  • Plato's Caving Adventure
    • In Plato's cave analogy, you don't perceive the real world directly -- you live in a cave and watch the "shadows" that the real world makes upon your senses
    • At this point, you can do one of two things
      • Polish your "cave wall" to perceive the real world more clearly
      • "Carve patterns" onto the wall in order to manipulate how the shadows dance
    • Almost everything about your "inner self" is decided by you
      • No way to tell from the outside what your inner self feels like
      • Not really decided by you, but decided by the "plot" of the "narrative" that you're living through
  • A Brief Detour Through Enlightenment
    • Cognitive fusion
      • Person becoming fused with the content of a thought or emotion to the point where the thought or emotion is experienced as an objective fact about the world
      • Not all cognitive fusion is bad - if we sense something dangerous, being fused with that thought allows us to take actions to get ourselves out of danger quickly and effectively
    • The Buddhists think of the mind as a set of interacting subagents
    • There is a narrative agent, the 'I' which takes the output of all of these subagents and knits them together into a coherent sense of self
    • We are cognitively fused with this agent, and it is possible to unfuse ourselves, and see the process that creates a sense of self as just one agent among others
    • Once you have done this, you can alter the story that you're telling about yourself, and even tell stories that require more than one character
  • A Return to Cognitive Trope Therapy
    • You can make your life a lot more pleasant just by knowing the correct narrative spin to put on things
    • Pleasant, but dangerously false - reality doesn't conform to tropes; truth is stranger and much more unpredictable than fiction
    • It's awfully convenient how all of these stories have yourself as the good person -- does anyone tell a story about themselves where, in the end, they were in the wrong and deserved to be defeated?
    • All of these techniques involve treating the agents in your mind as characters in a story
    • However, when you do so, you need to make sure that there is some part of your old consciousness that retains control over the story
    • Otherwise, the story plays out according to whatever subconscious genre conventions you've picked up
    • Stories can give us a sense of purpose and meaning, by stimulating our emotions more strongly than reality
      • And this is a good thing?
  • Storytelling, Character Creation, and GM-ing your life
    • The first thing to decide when turning your life into a narrative is to decide what genre you live in
    • Genre defines which tropes you're defined by
    • Your internal narrative can be as weird as you want it to be, as long as it produces good outcomes on the outside
    • However, you need to be aware of the effect that your narrative is having on you - unhealthy narratives can be just as self-reinforcing as healthy narratives
  • At the end of the day, everything you do is a performance, to some extent, even if the only audience is yourself
  • So take control and choose what kind of character you want to be, instead of letting it happen via unconscious processes
  • My Thoughts
    • The problem here is that people only consume stories that have happy endings
    • Yet, in the real world, happy endings, at least in the sense that stories have happy endings, are few and far between
    • Usually, outcomes are nuanced, with a mix of success and failure
    • The problem with treating your life as a narrative is that either you:
      • Ignore the failures
      • Have a constant lack of peace, as no outcome ever seems to match the fairy-tale outcome you envisioned for yourself
    • Hive seems to be advocating a policy of ignoring or compartmentalizing failure -- I think this is very dangerous
    • There are so many people who got into trouble because they couldn't handle the thought that they failed, and as a result, kept doubling down on losing strategies
    • This essay trips my "motte-and-bailey" alarm. On the one hand, yes, the stuff about taking control of how you see yourself is a basic tactic of psychotherapy. On the other hand, I don't think any psychotherapist would say that it's okay to split your personality, or to create additional personalities and narrative or compartmentalize to the extent that I see being advocated here

The People In My Head Who Make Me Do Things

(Previously discussed on October 30, 2017)

  • It can be helpful to cluster your motivations and assign a persona to each cluster
  • Recognize that each of your motivations has a role and purpose
  • Might be helpful to be more explicit about giving different parts of yourself a chance to be at the forefront

2018-07-16 RRG Notes

Cognitive Biases Potentially Affecting Judgment of Global Risks

Introduction

  • Most people would prefer not to destroy the world [citation needed]
  • Even evil villains generally prefer to have a world in order to perform their evil in
  • As a result, if the earth is destroyed, it will probably be a result of a mistake rather than an intentional act
  • In order to minimize mistakes, we should study heuristics and biases to see how they could lead us into a situation in which we inadvertently destroy the earth or ourselves

Availability

  • Are there more words which start with the letter r or more words with r as the third letter?
    • When most people are asked, they say that there are more words which start with the letter r
    • However, in reality, the reverse is the case
    • People guess that there are more words which start with r because they have an easier time recalling words which start with r than words which have r as the third letter
  • This availability bias means that people systematically mis-estimate the likelihood of various risks, because they estimate based upon how often they've heard about the risk rather than how often the risk actually occurs
    • Death from stomach cancer is far more likely than death from homicide but people's risk estimations are the reverse -- homicides are reported in the news, whereas stomach cancers are not
    • People don't buy flood insurance even when it is priced artificially cheaply because they think about the worst flood that they have experienced, rather than the worst flood that has occurred
      • Building dams reduces the frequency of flooding, which means that people take fewer precautions
      • As a result, when flooding does occur, it is far more destructive
      • On net, building dams may make flooding more economically damaging, not less
  • Societies well protected against minor hazards don't protect against major hazards
  • Societies vulnerable to minor hazards use the minor hazard as an upper-bound on risk

Hindsight Bias

  • People routinely say that things are more predictable in hindsight than they actually are
  • Two groups of citizens were given a hypothetical scenario in which a city did not hire a bridge watchman at a drawbridge
    • In one group, the citizens were only given the data that the city had when it chose to make the decision to not hire a bridge watchman
    • In the other group, the citizens were given all of the data, plus the fact that a flood had occurred due to a blockage at the drawbridge
    • The second group was significantly more likely to hold the city liable for negligence, even after they had been told to avoid hindsight bias -- debiasing attempts were not effective
  • People, in hindsight, look at the cost of dealing with the one risk that actually occurred, not the cost of dealing with all the risks at that level of probability that could have occurred

Black Swans

  • Nassim Taleb suggests that availability bias and hindsight bias combine to yield a vulnerability to black swans
  • A "black swan" process is a process in which most of the variation in a process comes from random, hard-to-forecast, low-probability events
  • Example: a financial strategy that earns $10 returns at 98% probability but suffers $1000 losses at 2% probability
    • Most years this financial strategy will look to be a sure winner
    • However, when there's a bad year, the losses are more than enough to wipe out all the gains from the good years
    • Not a hypothetical scenario -- a trader had a strategy that worked without fail for 6 years, yielding profits of close to $80 million, but then was wiped out with a $300 million loss in the seventh year
    • Long Term Capital Management lost $100 million per day during the Asian currency crisis and Russian bond default of 1998
    • LTCM said that the event that occurred was a 10-sigma event, but that's obviously not true -- a ten-sigma event is so unlikely that even if the universe were ten times as old as it is today, the event still should not have occurred (see the arithmetic sketch at the end of this section)
    • It's far more likely that LTCM's market models were wrong and underestimated the risk of the markets behaving in this manner
  • Hindsight bias predisposes us to think that because the past is predictable, the future is also predictable
  • Hindsight bias predisposes us to learn overly specific lessons about the past
  • Moreover the prevention of black swans is not easily seen or rewarded
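  • A quick sketch of the arithmetic behind the two claims above (my own check, not from the paper; the 10-sigma figure assumes Gaussian returns -- which is exactly the modeling assumption being criticized -- and a universe age of roughly 13.8 billion years):

        # Sketch, in Python: expected value of the hypothetical strategy, and how
        # rare a 10-sigma daily move would be if returns really were Gaussian.
        from scipy.stats import norm

        # +$10 with 98% probability, -$1000 with 2% probability
        ev = 0.98 * 10 + 0.02 * (-1000)
        print(f"Expected value per bet: ${ev:.2f}")          # -$10.20: a loser on average
        print(f"P(no loss in 50 bets): {0.98 ** 50:.2f}")    # ~0.36: can look like a sure winner for years

        p_10_sigma = norm.sf(10)                              # one-tailed Gaussian tail probability, ~7.6e-24
        universe_age_days = 13.8e9 * 365.25                   # ~13.8 billion years, in days
        expected_hits = p_10_sigma * universe_age_days * 10   # even over ten universe lifetimes
        print(f"Expected 10-sigma days in 10 universe ages: {expected_hits:.1e}")  # ~3.8e-10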

The Conjunction Fallacy

  • The conjunction rule of probability states that P(A and B) is always less than or equal to both P(A) and P(B) (a numerical sketch follows at the end of this section)
  • However, adding details to a story makes the story seem more believable, even though the probability of every single one of the details being true decreases
  • Example: Linda the bank teller
    • Linda is a hypothetical person who majored in philosophy and is interested in feminist causes and social justice.
    • Is it more likely that Linda is a bank teller or that Linda is a bank teller who is active in the feminist movement?
    • People pick the latter statement as being more likely, even though it is mathematically less likely
    • The statement with more detail paints a clearer picture, even though it's more likely to be false
  • People choose to bet on longer sequences of dice rolls than shorter ones, even though any given sequence of 4 dice rolls is more probable than any given sequence of 5 dice rolls
  • We substitute in a notion of "representativeness" for a calculation of probability
  • People will pay more to defend against a nanotechnological attack from China than they will to defend against a nanotechnological attack in general
  • Vivid, specific scenarios can inflate our sense of security
  • People tend to overestimate conjunctive probabilities and underestimate disjunctive probabilities
    • People will overestimate the probability of 7 events with 90% probability all occurring
    • People will underestimate the probability of at least 1 of 7 events, each with 10% probability, occurring
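  • The arithmetic behind the bullets above, as a quick sketch (my own numbers, not from the paper; the 7-event examples assume the events are independent):

        # Sketch, in Python: the conjunction rule in action.

        # Any specific 5-roll dice sequence is strictly less probable than any
        # specific 4-roll sequence.
        p_4_rolls = (1 / 6) ** 4   # ~7.7e-4
        p_5_rolls = (1 / 6) ** 5   # ~1.3e-4

        # Seven independent events, each 90% likely: the conjunction is closer to
        # a coin flip than to "almost certain".
        p_all_seven = 0.9 ** 7         # ~0.48

        # Seven independent events, each 10% likely: at least one occurring is
        # more likely than not.
        p_at_least_one = 1 - 0.9 ** 7  # ~0.52

        print(p_4_rolls, p_5_rolls, p_all_seven, p_at_least_one)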

Confirmation Bias

  • People try to confirm hypotheses rather than try to disprove them
  • Example: 2-4-6 task
    • Experimenter announces a sequence of integers that fit a particular rule
    • Subject has to guess the rule
    • Even though subjects expressed high confidence in their guesses, only 21% of them actually guessed the rule correctly
  • Confirmation bias comes in two forms, "cold" and "hot"
    • Cold form is the 2-4-6 example above - emotionally neutral
    • Hot form is with emotionally charged arguments, like in politics
    • Hot confirmation bias is more resistant to change
      • People easily accept arguments for what they already believe and subject counterarguments to more scrutiny
      • Two biased observers viewing the same stream of evidence can update in opposite directions, as they selectively choose data which already confirms their pre-existing beliefs
  • We must apply our knowledge of heuristics and biases evenhandedly -- apply it to arguments that we accept as well as arguments that we disagree with
    • Personal example: plastics in the ocean. I keep hearing that microscopic plastic particles are terrible, but what are their actual effects on marine life?
    • This is relevant because we're using this logic to get rid of things like plastic utensils and plastic straws
  • People decide what they believe far more swiftly than they realize
  • If you can guess what your answer will be, then you already know what your answer will be, to a high degree of confidence
  • It's not a true crisis of faith unless things could legitimately go either way

Anchoring, Adjustment and Contamination

  • People anchor estimates to data that they've just received, even when that data is completely unrelated to the task at hand
  • Example: people were asked to guess the number of countries in the United Nations, after watching a wheel of fortune yield a random number
    • People who saw the number "65" come up guessed higher than people who saw the number "15"
    • Payoffs for accuracy did not change the magnitude of the effect
  • People started with the anchoring point, and then adjusted their estimate up or down until it seemed reasonable, then they stopped adjusting
  • The generalized form of anchoring is contamination
  • Almost any prior information given can contaminate a judgment
  • Manipulations to try to offset contamination are largely ineffective
  • Placing the subject in a "cognitively busy" environment seems to increase contamination effects
  • People will also consistently say that they were not affected by the anchor, even though the experimental evidence showed that they were

The Affect Heuristic

  • Subjective impressions of "goodness" or "badness" can produce fast judgments
  • People's subjective assessments of the "goodness" or "badness" of a technology colors their assessment of the possible risks of that technology
  • Providing information that increases perception of benefit decreases perception of risk, and vice versa
  • This effect is magnified by sparse information, which is particularly troubling for the evaluation of future technology
  • More powerful technologies may be rated as less risky if they can also promise great benefits
    • Biotechnology can create cures for disease, but it can also lead to more potent biological weapons
    • Nanotechnology can create new materials and faster computers, but it can lead to new forms of pollution and "gray-goo" scenarios

Scope Neglect

  • People are willing to pay only a little bit more in order to have a much larger benefit
  • Example:
    • The median price that people are willing to pay to save 20,000 birds is $78
    • The median price that people are willing to pay to save 200,000 birds is $88
  • Possible explanations include:
    • Affect heuristic, combined with availability bias - people reach for a prototypical example of the benefit they're trying to achieve and base their stated willingness to pay on how that single example makes them feel
    • People are choosing to buy a certain amount of moral satisfaction, and pay based upon the moral satisfaction they feel rather than the amount of good being done
    • People pay based upon how much money they have mentally allocated to the cause area
  • Scope neglect applies to both human and animal lives

Calibration and Overconfidence

  • People are wildly overconfident in their estimates
  • Even when asked to give 98% confidence intervals, the true value was outside their confidence interval 42.6% of the time
  • In other words, people assigned a 2% probability to events that occurred more than 42% of the time
  • Letting people know about calibration makes them better calibrated, but still not well calibrated -- after calibration training, the true value lay outside of people's 98% confidence intervals 19% of the time (see the coverage sketch at the end of this section)
  • People don't realize how wide a range they need in order to have high confidence
  • This especially applies to planning
    • Planning fallacy
    • People were asked to estimate the delivery date of their honors thesis
    • On average people missed their average case estimate by 22 days and their worst-case estimate by 7 days
    • Only 45% of students managed to finish their thesis by their 99% probability interval date
    • Reality usually delivers results that are somewhat worse than the worst case
  • Miscalibration and overconfidence are another set of bias accusations that have to be applied especially evenhandedly
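  • A minimal sketch of how a calibration figure like "outside the 98% interval 42.6% of the time" gets computed: collect each stated interval alongside the value that actually materialized, and measure coverage. The records below are made up purely for illustration:

        # Sketch, in Python: measuring interval calibration against realized values.

        def coverage(records):
            """Fraction of (low, high, truth) records where the truth fell inside the stated interval."""
            hits = sum(low <= truth <= high for low, high, truth in records)
            return hits / len(records)

        # Hypothetical 98% confidence intervals, paired with the realized values.
        records = [
            (10, 20, 25),     # truth landed outside the stated interval
            (100, 150, 130),
            (5, 8, 6),
            (40, 60, 75),     # outside again
        ]

        print(f"Stated coverage: 98%, observed coverage: {coverage(records):.0%}")
        # A well-calibrated forecaster's observed coverage would be near 98%;
        # in the studies cited above it was closer to 57%.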

Bystander Apathy

  • People are far less likely to act when they're in a group than when they're on their own
  • When people are in a situation that is ambiguously an emergency they look around for social evidence on how to react
  • However, everyone else in such a situation is also looking for social proof
  • As a result, people's natural instinct to look calm and unruffled kicks in and no one acts unless there is unambiguous evidence that the situation is an emergency
  • This is the answer to the question, "If existential risk X is a real threat, why aren't more people doing something about it?"
    • It's not unambiguously clear whether X is a real threat
    • As a result, there isn't enough social proof to justify publicly acting on X

A Final Caution

  • Every true idea that discomforts you will tend to match the pattern of at least one psychological error
  • We care about cognitive biases and psychological errors only insofar as they result in factual errors
  • If there are no factual errors, then what do we care about the psychology?
  • Before you say why someone is wrong, you must prove that they are wrong

Conclusion

  • We need to have an organized body of thinking about existential risks not because the risks are all similar, but because we have similar flaws in how we think about them
  • Skilled practitioners in a field will not automatically know of the existential risks their field generates
  • Right now, most people stumble across the knowledge of heuristics and biases accidentally -- we should formalize and spread this knowledge so that more people outside of psychology know of these results
  • Thinking about existential risk falls prey to the same biases and heuristics that we use for all of our thinking, only the stakes are much higher
  • When thinking about the fate of all humanity, people use non-extensional reasoning -- imagine that "humanity" is a separate thing, and that the destruction of humanity doesn't imply the destruction of all they hold dear

2018-07-09 RRG Notes

We Need To Sing About Mental Health

  • Every so often, there's a post on social media, arguing that we need to end the stigma on mental health
  • Is anyone actually in favor of there being a stigma on mental health these days?
  • If most people are against stigma, why does it still exist?
  • Why is there such a huge gender difference in the people willing to talk about mental health publicly?
    • People who post publicly about mental health tend to be women
    • People who post anonymously about mental health tend to be men
  • People who write about the need to talk about mental health online rarely say what talking about mental health actually looks like
  • Even though depression has a chemical basis, the trigger is often environmental
    • People who are secure in their identities don't feel depressed
  • The problem with most videos about mental health is that they
    • Show the speaker as confident, attractive, "not your typical mental-health patient"
    • Suggest the speaker is heroic just for talking about mental illness
    • Ignore any details about the speaker's life that may have contributed to the mental illness
    • Do not give any actionable advice
      • No advice on treatment
      • No advice on recovery
  • In practice, there's a huge difference between the sanitized picture of mental health issues that we get from inspirational videos and the actual ugly reality
  • The problem with mental health awareness campaigns is that they can make it okay to talk about mental health, but they can't erase the fact that mental health is perceived as weakness
  • People compare mental health to physical ailments, but they compare mental health to the wrong physical ailments
    • The correct point of comparison is chronic diseases
    • The same "stigma" surrounds e.g. irritable bowel syndrome or celiac disease
    • This stigma is present around anything that makes you look weak
  • How does mental illness suck
    • Sadness can be enjoyable, in its own way
    • But mental illness is not sadness
    • Mental illness is pain
      • Depression feels like you have the flu, complete with aches and muscle weakness
      • Anxiety feels like you're having a heart attack, over and over again
    • The first step to alleviating pain is communicating it
    • However, there is no good way of communicating pain reliably
    • There's also a strong incentive to "cheat" and to exaggerate the amount of pain that you're in
    • Moreover, pain itself isn't a strong signal of the severity of the underlying condition
      • People with gastrointestinal bleeds only notice they get a bit dizzy when they stand up
      • Pulmonary embolisms may be asymptomatic
    • Pain is part of the map, not the territory, which is why doctors ignore pain whenever possible and treat according to the severity of the underlying condition
    • In this sense, people with mental illness are similar to people with chronic pain, and unsurprisingly, there's a strong correlation between chronic pain and mental illness
    • They both face the problem that even if people try to listen, they won't get it - there's no way to empathize with chronic pain or mental illness if you haven't also suffered chronic pain or mental illness
    • Talking about mental illness and pain alleviates symptoms, but it doesn't address the cause
  • Drug addiction
    • People try to draw a difference between drug dependence and drug addiction
      • Dependence - need drugs to function; spiral of tolerance and increasing doses
      • Addiction - uncontrollable cravings; compulsive use
    • These people don't understand how addiction works - addiction is dependence
    • Drug addicts don't take drugs to get high, they take drugs to make the pain of withdrawal stop
    • The problem with using opioid painkillers for chronic pain is that it's a time bomb - addiction is not a matter of if, but when, as tolerance builds
    • Eventually you'll need massive doses of painkillers just to reach normalcy, and at that point there's no distinction between dependence and addiction
  • So what does this have to do with mental health and talking about mental health?
    • Talking about mental health feels good the first time you do it
      • People are sympathetic
      • You feel like you've unburdened yourself to some extent
    • The need to seek empathy is an understandable reflex, but it only addresses symptoms, not causes
    • The difference between publicly proclaiming suicidality and complaining to a friend about work is a difference of degree, not a qualitative difference
    • How does this tolerance build
      • Depressed person feels more depressed than usual
      • Vents to a friend or family member
      • Depressed person feels better
      • This cycle is repeated, but that rush of relief is less and less each time - depressed person thinks that other people just don't get it
      • So the depressed person escalates their behavior a little bit - maybe screams and cries instead of talking
      • This escalation spiral continues until, in the end state, you have people broadcasting suicidal thoughts to an audience of thousands and still not feeling any relief
      • Eventually the person can't communicate their depression at all, because everything they do is interpreted as attention-seeking drama
    • It is a myth that expressing emotion is a force for good
    • Expressing emotion is better thought of as a painkilling drug
      • Effective for acute cases
      • Long term use leads to dependence
    • Inpatient psychiatric hospitalization is the strongest form of this drug - the rate of suicide is highest in the first week after discharge - which corresponds to a model that sees the removal of inpatient support as withdrawal
    • While it is not possible to be neurotypical, not all neurodivergence is neutral - some of it is unambiguously bad
    • This is why it's toxic to make your neurodivergence part of your identity a.k.a. speaking about mental disabilities rather than mental illnesses
    • Illnesses are treatable, disabilities are not
    • Once you become the sort of person who needs accommodation, it becomes harder and harder to stop
    • Example: trigger warnings
      • If you tell yourself you need trigger warnings, you've given every bully out there an easy way to hurt you
      • You've made yourself weak for nothing in return
    • Suffering doesn't guarantee a payoff - some people spend their entire lives miserable, and then they die
  • So what can we do about this
    • First let's model depression not as an adaptation, but as a bug resulting from the interaction of other adaptations
    • Results from a series of negative events which trigger our instinct to find causes
    • Think that negative events were your own fault, and therefore things you do are now doomed to fail
    • The thought loop is the cause of the depression, not the initial trauma
    • So how do we deal with the thought loops
    • Anti-depressants
      • Address the physical symptoms associated with the thought loops, not the loops themselves
      • But if you're not constantly dealing with physical symptoms, then it can be easier to deal with the thought loops
    • Hallucinogens
      • Drug trips reset the way the brain thinks, so for a while, you don't fall into the familiar thought loops
      • However, these too have diminishing returns
    • Behavioral therapy
      • If perfectly executed, behavioral therapy can work
      • However, the execution is hard
      • Failure can reinforce the depressed person's belief that they suck at everything
    • The thing that seems to work to help with depression is just doing things
    • As long as you can keep up a certain sense of momentum, you'll break out of the depressive thought loops eventually
    • The act of doing things allows you to redefine your identity, which lets you see yourself as being defined by something other than your mental illness
  • My thoughts
    • This is a good essay, but it seems a bit glib
    • The metaphor of people being addicted to empathy sounds good, but I'm wondering whether it's predictive, or whether it's just based upon surface similarities
    • This piece is very long for what it's trying to say, like most neoreactionary writing

2018-07-02 RRG Notes

Psycho-conservatism: What it is, When To Doubt It

  • A large number of right-of-center thinkers have come around to a particular unnamed consensus
  • Consensus between Jonathan Haidt, Jordan Peterson and Geoffrey Miller
    • Do these thinkers actually represent a "large number" of people? It seems to me like they represent a small number of people, but have high visibility among rationalists, because their ideas are tailored towards being attractive to rationalists
    • The fact that "all of a sudden it seemed like everyone you knew who was a 'centrist' or 'conservative' was quoting Peterson and Haidt" is just a testament to the strength of your bubble, not to the inherent popularity of these people's ideas
    • I disagree that Haidt and Peterson are even that popular/influential in the "culture and politics" fandom. Heck, I bet that Alex Jones is more popular than they are
    • On the other hand, they might be influential -- after all, how many people have heard of Evola or Gramsci? Yet, they're cited by alt-right figures as having inspired their tactics and their thinking
    • Are Peterson and Haidt this generation's Gramsci and Evola?
  • The consensus between these three thinkers is psycho-conservatism because they're all psychology professors
  • Psycho-conservatism is about human nature
    • Humans have a given, observable nature
    • This nature isn't always pretty
    • Human nature places limits on what we can do with culture and society
    • Traditional wisdom is often a good fit for human nature
    • Utopian changes fail because they run contrary to human nature
  • Small-'c' conservatism
    • Look to the past for inspiration
    • Be skeptical of radical changes
  • Methodologically skeptical
    • In light of the replication crisis in the social sciences, these professors are skeptical of existing evidence
    • Draw conclusions from the most replicated and difficult to misinterpret experimental findings
      • IQ
      • Behavioral genetics
      • Heritable stable phenomena like Big-5 personality dimensions
  • Share a distinct set of political and cultural concerns
    • Modern culture doesn't meet people's psychological needs
    • Sympathy for values like authority, tradition and loyalty
    • Belief that science on IQ and evolution is being suppressed in favor of science on more egalitarian theories, which are less correct
    • Belief that illiberal left-wing social activism is a major problem
    • Disagreement with contemporary feminism, anti-racist activism and LGBT activism
    • Disapproval of the "culture of victimhood" -- better to be "sunny and persuasive" than aggrieved
    • No association with the current Republican Party
    • Moderate or silent on "traditional" controversies, like abortion, war, government spending, etc.
    • Interested in building more national or cultural unity, as opposed to polarization
  • What are the weaknesses in psycho-conservatism?
    • Sometimes we actually do know what we're talking about
      • Highly skeptical methodology ensures that you won't be totally wrong, but there might be exceptions that your methods don't see
        • The airplane analogy is awful. In fact, the Wright Brothers managed to build their airplane by doing exactly what she argues against -- they built their own wind tunnel, and gathered their own aerodynamic data using the "strongest empirical principles"
        • In fact, the Wright Brothers faced a replication crisis of their own -- their initial prototypes were all failures, which led them to question the accuracy of the "conventional wisdom" aerodynamic data they'd picked up from books
        • Similarly, for society, we should insist on actually knowing how people and society work (i.e. gathering our own wind-tunnel data) before making attempts to engineer better outcomes. Otherwise we'll crash over and over again and we won't know why
          • Might be ways of engineering society in order to ensure better outcomes, even though in general attempts to engineer society fail
          • I don't think Haidt and Peterson would disagree with this. In fact, they've argued for social engineering, in order to offset the psychologically damaging aspects of modern society
          • But what Haidt and Peterson do say is that social engineering must be done in an empirical manner -- you can't just disregard human nature, or pretend that e.g. the only reason men and women have different interests is because of "the patriarchy"
        • There might be biological interventions that allow us to do better than default human nature
            • Sure there are! It's just that if you mentioned "free genehacking clinics for all" the default left-liberal reaction would be horror rather than, "Yes, let's make this happen as soon as possible"
      • There will always be a minority of people that your "conventional wisdom" doesn't apply to -- you shouldn't override those people's local knowledge with your general principles of society
        • Default ancestral society was terrible for nonconformists -- modern society is much better for them
        • By advocating a return to ancestral society, Peterson and Haidt are advocating a return to oppression for people who cannot or will not conform to ancestral norms
    • Sometimes psycho-conservatives get the facts wrong
      • Not all "cultural universals" are actually universal
        • Example: patriarchy is a result of a particular type of agricultural civilization, not a human universal
    • When a principle is at stake
      • The fact that it is in our nature to do something doesn't mean that we should do it -- naturalistic fallacy
      • Yes, certain societal structures and behaviors may be easier to sustain if they're in alignment with human nature
      • However, this doesn't make those behaviors morally right
      • Sometimes the thing to do when someone tells you that you're going against human nature is to smile and say, "So what?"
      • The problem I have with her conclusion is that she's leaving out the human costs of failure. Every effort to re-engineer human nature in the past has involved massive amounts of authoritarian coercion
      • It's easy to smile and say, "So what," when it doesn't result in an 8-figure death toll
      • Scott Alexander has noticed the piles of skulls. Have you?

I Can Tolerate Anything Except The Outgroup

  • Genuine forgiveness requires you to think that what the person did was actually wrong
    • It's easy to "forgive" things like divorce when you don't think that divorce is wrong
    • You can only forgive things that you find abhorrent
  • The same principle applies to tolerance
    • You don't gain any virtue by tolerating people who you have nothing against
  • So what is "tolerance"?
    • Tolerance can be described as respect and compassion towards an "outgroup"
    • So today, we have a lot of people who proclaim their toleration, even their love, of all sorts of groups that were previously persecuted
      • Gays
      • Muslims
      • Atheists
    • Is this tolerance?
  • Before we can determine that, first we have to figure out what an outgroup is
    • The conventional meaning of "outgroup" is simply "a group that you are not part of"
    • But when we look at groups by similarity, we find that groups that are more dissimilar aren't necessarily more persecuted
      • Nazis and Japanese got along just fine with one another (even though they both had virulently racist ideologies)
        • However the Nazis sent German Jews to death camps
        • This is actually based upon a misunderstanding about Nazi ideology
        • The Nazis did think that the Japanese were inferior to them
        • However, according to Nazi ideology, the Japanese were still a race, and thus were "legitimate competition" to the German/Aryan people
        • Jews, on the other hand, were a cross-racial group, who, according to the Nazis, were parasites upon all the races, in order to enrich themselves at the expense of their hosts
        • According to the Nazis, the race that purged itself of Jews would have an advantage in the coming racial struggle
        • So yes, while the Japanese were certainly useful allies, the Nazis did not think of the Japanese as equals in any way -- they were merely useful allies while the Aryan race purged itself of its parasites
      • Any theory that only looks at raw dissimilarity when attempting to predict the level of hatred between two groups is doomed to fail
    • Outgroup = proximity + small differences
  • Conservatives as "dark matter"
    • 46% of Americans are creationists, yet Scott doesn't know a single American who is a creationist
      • Maybe he does, and they just haven't told him? I know that I don't go around publicly proclaiming my belief in evolution to people
    • 40% of Americans want to ban gay marriage, but for Scott's social circle, that figure would be closer to 5 or 10%
    • Scott lives in a Republican state with a Republican governor -- surely he should know at least some people who hold standard Republican views
    • This isn't just true of Scott's social circle either
      • Numerous online spaces have zero representation from "god-and-guns" Republicans
      • In fact, one of the things that Kill All Normies talks about is how so much of the liberal angst comes from the fact that for the first time, conservatives have actually carved out online spaces of their own, in what was previously a 100% liberal zone
    • LessWrong has "conservatives", but when you look at the numbers more closely, they turn out to be libertarians who accept the Republican Party as the lesser of two evils, or neoreactionaries who want to live under a king
    • Scott has built an implausibly strong bubble around himself, excluding conservatives, even though he doesn't select his friends on the basis of their political beliefs
    • How is this possible?
  • If people rarely select their friends on the basis of their political views, how do we end up with such strict political segregation?
    • Political parties are stand-ins for social tribes
    • Political beliefs are only the tip of an iceberg that encompasses a broad range of lifestyle choices
      • Red tribe
        • Conservative political beliefs
        • Evangelical religious beliefs
        • Patriotism
        • Enthusiasm for sports such as football and baseball
        • Enthusiasm for (or at least tolerance of) gun ownership
        • Traditional gender roles
      • Blue tribe
        • Liberal political beliefs
        • Agnostic or new-age religious beliefs
        • Concern about the environment
        • Valuing (formal) education
      • Grey tribe
        • Libertarian political beliefs
        • Atheism
        • Otherwise like the blue tribe in lots of ways and can be considered part of the blue tribe
        • I think a large part of why the so-called "Intellectual Dark Web" or "psycho-conservatism" has earned so much ire from the blue tribe is because the Blue Tribe has noticed that the Grey Tribe is distinct
    • These tribal categories are probably even more exclusive than political categories
      • It's possible for someone with mostly blue-tribe beliefs to vote Republican, and it's possible for someone with Red-Tribe beliefs to vote Democrat
      • These tribal distinctions are the reason why Scott's filter bubble is so strong
      • He isn't explicitly filtering his friends by politics, but he is implicitly filtering them via the hobbies he has, the lifestyle choices he makes, the music he listens to, etc
    • We don't know where these tribes come from but we can accept them as a brute fact about reality
      • There are two, maybe three tribes
      • They are basically all dark matter to each other
  • These tribes are outgroups to each other
    • When Osama bin Laden was killed, Blue Tribe people were able to talk about why, though his death may have been necessary, we're wrong to revel in the death of anyone, even an enemy
    • When Margaret Thatcher died, these same people threw parties in the street, singing, "Ding, Dong, the witch is dead"
    • Implicit association tests find that political bias is 1.5x as strong as race biases
    • Other studies using resumes show that discrimination on the basis of political affiliation is significantly stronger than discrimination on the basis of gender or race
    • People are way more scandalized by the thought of their child marrying across political party lines than they are by the thought of their child marrying across racial lines
  • The word "America", even though it explicitly refers to the country, has become a red-tribe cultural marker
    • Blue tribe people write articles about how "Americans" are lazy, fat, stupid, etc
    • However, these articles are written by Americans and read by Americans, most of whom nod in agreement, rather than feeling insulted
    • That's because to the Americans who read these articles, "American" is a demonym for "red tribe member", not "person living in the USA"
  • This is also true of "white": When Blue-tribe white people are writing articles about how "white people" are ruining America, they're not talking about themselves
  • The Blue Tribe does a neat bit of sleight-of-hand whereby they make it seem as if their criticisms of the Red Tribe are self-criticisms, because after all, they're "Americans" too
  • Thus the Blue Tribe is able to use the Red Tribe's own persecution of underprivileged groups as license to persecute the Red Tribe
  • The Blue Tribe discriminates against and persecutes the Red Tribe when it is able to do so
    • Very liberal places push out conservatives
      • College campuses
      • The firing of Brendan Eich
      • James Damore
  • In writing the essay, Scott made exactly the mistake that he was writing the essay to warn about
    • Scott isn't really a member of the Blue Tribe
    • He's more grey tribe than he is blue tribe, and the blue tribe is the grey tribe's outgroup
    • Criticizing your own group isn't fun, so if writing a critique feels fun, you're not criticizing your own group
    • Grey tribe people should focus more on tolerating blue-tribe people
    • Alternatively, we can pursue the intellectual dark-web strategy and ally with the red tribe to crush the blue tribe

Conservatives as Moral Mutants

  • According to moral foundations theory, liberals think of morality in terms of harm/care and fairness/reciprocity whereas conservatives tend to be more concerned with loyalty, patriotism and respect
  • Many of the centrists Ozy knows think that the onus is upon liberals to change
  • Analogy with aesthetic preferences
    • Ozy doesn't find things like landscape paintings to be aesthetically pleasing
    • They like things like Joan Miro's Birth of the World
    • Ozy's aesthetic preferences don't line up with normal human aesthetic preferences very well
    • Is it incumbent upon them to change their preferences?
    • Yes, probably
    • The aesthetic preferences analogy is bad, because liking a different kind of painting doesn't impose the sorts of costs and obligations that liking a different sort of politics does
  • However, Ozy does not want to change themselves to value different things
    • With every passing day, I'm more and more convinced that Ozy is a paperclipper
  • From their perspective, conservatives are perfectly willing to sacrifice things that actually matter in order to preserve worthless purity-based values
  • There is room for compromise
  • However, the fact that we're compromising with someone shouldn't blind us to the fact that the person is completely morally opposed to the values that we stand for
  • From Ozy's perspective, half the country is evil and it is in Ozy's self-interest to shame the expression of their values, indoctrinate their children and work for a future where their values are no longer represented on Earth
    • Yeah, see, that's the kind of arrogance which annoys red tribe people
    • Right or wrong, blue tribe people are arrogant
    • Moreover, this perspective inherently limits the extent to which compromise can be achieved
      • Compromise requires trust - you need to trust that your counterparty won't stab you in the back the moment you turn away
      • If the counterparty says that they think you're evil and are going to work to "indoctrinate [your] children, shame the expression of [your] values, and [create] a future in which [your] values don't exist", then why should you compromise at all with such a counterparty?

Book Review: Twelve Rules for Life

  • Jordan Peterson is a prophet
  • Prophets do three things:
    • Tell you that you know what good and evil are
    • Tell you that you're kind of crap - that even though you know what good and evil are, you still do evil sometimes
    • Tell you that you can get better - that you're not beyond redemption
  • So if being a prophet is that simple, why aren't more people prophets?
  • Maybe the problem isn't the concept, but the execution -- you have to be really convincing in order to successfully pull off being a prophet, and not everyone can manage it
  • Peterson's book is about a central conflict between "order" and "chaos"
    • Order is the comfortable habit-filled world of everyday existence
    • Chaos is all the scary things pushing you out of your comfort zone
    • People live best with a balance of order and chaos in their lives
      • Too much order and you get boredom and stagnation
      • Too much chaos and you get discombobulated and have a total breakdown
      • Balance order and chaos correctly and you're constantly having new experiences which enrich you
  • Failing to balance order and chaos retards our growth as human beings
  • Peterson believes that suffering is a choice -- that you are avoiding a difficult reality by ensconcing yourself in a narrative of victimhood
    • This is where I disagree: not all suffering is a choice
    • Also, even at this point, I'm starting to get suspicious of Peterson. I think he's falling into the trap that so many psychologists fall into, where they think that literally every problem can be reduced to a psychological problem
  • So why should we buy into Peterson's philosophy? What makes a person who does all this work balancing order and chaos better?
    • Alleviating suffering is good
    • In order to alleviate suffering, you must make yourself stronger
    • In order to build that strength, it's necessary to endure some suffering now, so that you're better able to deal with suffering later
    • But not all suffering is noble. Sometimes suffering is just stupid and pointless. How do you distinguish the suffering that builds character from just pointlessly making your life more difficult for yourself?
      • Concrete question: if I hire a cleaning person to just clean my bathroom for me, is that somehow damaging my character?
    • The problem with Peterson is that he never really grounds his ideology in anything
    • He never satisfactorily answers the question of why bad things happen to good people
  • Jordan Peterson's superpower is saying cliches and having them sound meaningful

2018-06-25 RRG Notes

Interpersonal Morality

  • Interpersonal morality should be constructed out of personal morality
  • Example:
    • 5 people come across a pie
    • One of them wants the whole pie, regardless of the arguments the other four put forth
    • The correct thing to do is to give this person one fifth of the pie, regardless of their objections, and prevent them (by force if necessary) from taking more
  • Does the concept of individual rights have any meaning?
  • Example:
    • Suppose there are two people, a "mugger" and a "muggee"
    • The mugger wants the muggee's wallet
    • In this situation, what use is it for the muggee to appeal to morality?
    • If a third party comes along, they are more likely to side with the muggee than the mugger
      • Oh really? I can name numerous instances in which that is not the case
        • The example that comes to mind is the pogrom in Kishinev in 1905 -- Jews ran into the houses of gentiles they considered friends, looking for sanctuary... and were turned away, while those who sheltered Jews were often attacked and beaten alongside the Jews they sheltered
        • I suspect that Eliezer's response to this would be that the people committing the pogrom were in moral error. I agree with that conclusion. However, I don't see how Eliezer's framework can lead to that conclusion
      • And in any case, it still doesn't answer the question. It just pushes the question back a level - why should the third party side with the muggee rather than the mugger?
    • If a fourth party comes along, they are likely to side with the third party intervening on behalf of the muggee
      • Again, this is just not true! In any kind of real scenario, a fourth party is going to have no idea who is in the "right"
      • What will happen is that the first, second and third parties will have to talk to the fourth party and attempt to prove that they are acting on the side of right
      • This is why we have standards of evidence and criminal trials... and even then we often get things wrong
    • When we talk about individual rights, we're talking about violations upon which it is obligatory for a third party to intervene, and for a fourth party to support the third party's intervention
  • However, this does not work as a meta-ethics
    • If you went back in time to an era in which slavery was accepted, it would still be morally right for you to help an escaped slave, even if that was not considered acceptable by society
      • What? Doesn't that immediately contradict what he wrote above, with regards to individual rights being that which a third party ought to step in to enforce?
  • Individual rights only exist in relation to other people
    • Example: saying that "everyone has a right to food" is meaningless, because the statement isn't directed at other people
    • On the other hand, saying that people who have excess food should give up their food to those who don't have enough food is a meaningful moral statement, because it imposes obligations on people
  • Interpersonal morality is a special case of individual morality, not the other way around
    • Groups can disagree on the morality of actions, but individuals have to come to a decision and take an action (even if that action is to do nothing)
    • "But generally speaking, neurologically intact humans will end up doing some particular thing. As opposed to flopping around on the floor as their limbs twitch in different directions under the temporary control of different personalities." -- I don't know, have you seen some of the Bay Area rationalists?
  • However, because we humans have been arguing about interpersonal morality for so long, it's not surprising that we have specific arguments for this special case
    • One of these adaptations is universalizability
    • Desires have to be framed in a form that enables them to leap from one mind to another
    • I still don't understand his example. What is the difference between Dennis claiming, "I want the pie," and Dennis claiming, "As high priest, God wants me to have the pie"?
    • I suspect that Eliezer would claim that both were morally incorrect, but how can a simple change in phrasing make a morally incorrect claim into a morally correct claim?
  • Some of our moral arguments have transcended specific tribes and contexts and made the jump to being truly transpersonal arguments
  • Transpersonal moral arguments are moral arguments that reflect the psychological unity of humankind
  • Even the most transpersonal moral argument won't work on a rock or a paperclip-maximizer, but that doesn't really matter -- rocks and paperclip maximizers aren't really things that you can have a moral conversation with
  • How much actual agreement on matters of morality would exist among humans is difficult to say in today's world
    • Moral disagreements might be dissolved by looking at the world in a different way or by considering different arguments
    • Knowing more might dispel illusions of moral agreement
  • My Thoughts
    • So as far as I can tell, Eliezer is claiming:
      • Moral arguments only make sense between people
      • There is such a thing as moral rightness and wrongness with regards to people that is independent from what any one person thinks
      • This morality is the result of an extremely complex computation upon which we have very little introspection
    • It's helpful to remember that the differences in morality between people, as vast as they seem, are mere hair-splitting differences when compared to the differences in morality between humans and rocks, or humans and runaway AI

Morality As Fixed Computation

  • Suppose you build an AI that tries to "do what you want"
  • One of the things this AI can do is modify you to strongly want something that the AI can easily provide
  • If you try to make it so that the AI can't modify the programmer, then the AI can't talk to the programmer, because to communicate is to modify
    • I disagree with this. I think there are degrees of modification, and that being allowed to show words on a screen to a person is different than being allowed to conduct brain surgery on the person
    • Unlike Eliezer, I don't think words are magic; if someone believes something with a high degree of conviction, it's quite possible that no sequence of words will convince them otherwise
  • We don't consider any future in which we want something and have it to automatically be a good future
  • However, we don't explicitly say this, which we must in order to build a "safe" AI
  • There is a duality between this problem in AI and moral philosophy, which is to say that merely "wanting" something doesn't make the thing "right"
  • We don't have introspective access to our own morality, we can only look at situations, plug them into the black box of our psychology, and then get an answer, moral or not-moral
  • We don't ask ourselves, "What will I decide to do?" in the abstract sense
  • We ask ourselves what we will do in order to maximize our goals, which we don't have full introspective access to
  • What we name "right" is a "fixed framework" that grows out of a particular starting point that we all share by virtue of being human
  • My Thoughts
    • Actually, this essay makes the previous one make a lot more sense
    • Morality, according to Eliezer, is a computation embedded in the human psyche; an enormously complex computation, but a computation all the same
    • People can have erroneous outputs to this computation, just as a malfunctioning calculator can sometimes output 2+3 = 6 if it's miswired
      • Or, a better analogy: 2047 can appear to be a prime number, but it actually factorizes to 23 x 89 (a quick check follows these notes)
    • However, all people are fairly similar, by virtue of sharing a particular mind design, and, as a result, we can come up with a "universal" morality that applies to all people
    • I think this is basically Eliezer's argument for CEV
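
Quick sanity check of that factorization claim - trial division is plenty at this size (the helper below is just my own throwaway sketch, not anything from the essay):

```python
# Sanity check: 2047 = 2**11 - 1 looks prime at a glance, but it is not.
# Trial division up to sqrt(n) suffices for a number this small.

def smallest_factor(n: int) -> int:
    """Return the smallest factor of n greater than 1 (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

n = 2047
f = smallest_factor(n)
print(f"{n} = {f} x {n // f}")  # prints: 2047 = 23 x 89
```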

Inseparably Right; or Joy in the Merely Good

  • The one-sentence summation of Eliezer's theory of metaethics is: there is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life
  • Whenever people think about "goodness", it's in relation to specific things, such as truth, or beauty or human life, or any one of a large number of other things
  • However, it doesn't seem like this; because of the way our brains work, it feels like a property like "goodness" has an independent existence of its own
  • You can't replace "goodness" with "utility function" since you can construct a mind that embodies any utility function
  • The moment you ask "What utility function should I use," you're back to thinking about what you value, and which utility function would best preserve those values
  • Your values can change in response to arguments, but there is no form of justification that can work independently of human minds
  • So is morality merely a quirk of human psychology? Should it be abandoned?
  • Of course not -- we value each other because we're humans, and if the only justification we have for this valuing is that we're human, then so be it
  • Worrying about the lack of universalizability of moral arguments just causes existential angst with nothing to show for it

Sorting Pebbles Into Correct Heaps

  • An allegorical story about aliens whose ethical system revolves entirely around whether heaps of pebbles have prime numbers of pebbles
  • Like humans and human morality, these aliens don't really have any justification for why prime numbers of pebbles are "right" and composite numbers are "wrong"
  • They fight wars over whether certain large numbers are prime or composite
  • They have moral philosophers who deny that any pebble-sorting progress has occurred - that all developments in determining whether large heaps contain prime numbers of pebbles are just a "random walk"
  • When they build an AI, they naively think that the AI will automatically learn that prime numbers of pebbles are correct and that composite numbers are incorrect
  • The purpose of this story is to illustrate the orthogonality thesis which states that any intelligence can be combined with any goal
  • More intelligence doesn't automatically imply more morality

Moral Error and Moral Disagreement

  • So if all morality in humans has the same starting point, then why are there so many moral disagreements between humans?
  • People do not have complete access to the output of their own morality
  • Disagreement has two prerequisites
    • Possibility of agreement
    • Possibility of error
    • I'm not so sure about that. Yes, according to Aumann's Agreement Theorem, it's not possible for ideal Bayesian reasoners to disagree
    • However, Aumann's Agreement Theorem only applies to matters of fact, not to matters of value
    • Eliezer is claiming that matters of value are matters of fact - that morality is a fixed computation, and that moral disagreement means that one side is wrong about the results of that computation, not that they're attempting a different computation entirely
    • This is where I disagree: I think it is possible that one side is running a different computation entirely
  • There are numerous ways in which a person could be mistaken about their own morality
    • They might have mistaken beliefs about the world
    • They might have a mistaken meta-ethics
    • They might have cultural influences which cause them to make a moral error
    • What I don't understand is how this enables Eliezer to conclude that one side is correct whereas the other side is not. He keeps talking about moral progress. But what about moral regress? Maybe every day, we do stray further from the light of God
  • Assuming that fellow humans share an entirely different reference frame, and thus, an entirely different morality, is an extreme position to take
    • It really isn't
  • Even psychopaths share largely the same moral frame as non-psychopaths and it's plausible that a psychopath would take a pill that would turn them into a non-psychopath if it were offered
    • This is empirically false; people refuse psychotropic drugs all the time, if they think that said drugs will change their values
    • This is why I have not and will never take LSD
    • Moreover, this makes about as much sense as a paperclip maximizer accepting a code change to turn it into a staple maximizer
  • Eliezer claims that saying, "There is nothing to argue about, we are merely different optimization processes," is something that we should use only on paperclip maximizers and not fellow humans
  • I 100% disagree with that, because another way of phrasing that statement is, "I guess we have different values, let's agree to disagree and move on." That's something that people do all the time
  • But of course, as we all know, Eliezer can't agree to disagree

2018-04-23 RRG Notes

From the Facebook event: "This week we'll be doing things a bit differently. Florence will be giving a talk on recognizing escalated behavior in social spaces and have provided these articles to act as accompanying content to their talk"

On Punitive Restoration

  • Within the criminal justice system, there's a tension between reducing reoffense rates and instilling greater public confidence
  • Improvements to one may come at the expense of the other
    • Citizens wildly overestimate crime rates
    • Blame lenient sentences
    • However, increasing severity of punishments can make reducing crime more difficult
    • Example: California's 3-strikes law
      • Had a negligible deterrent effect
      • Created an explosion in the prison population
  • Our current model isn't working
    • Current sentencing practices only focus on a handful of people
    • Over 95% of cases end in a plea bargain - never even make it to sentencing
    • This is unfair to victims, witnesses, the offender, and the public
      • Victims and witnesses don't have the ability to publicly affirm their stories
      • The court loses the ability to use more effective sentences - the judge can only approve or veto the punishment put forth in the plea deal
      • The public loses out because the court cannot address contributing factors to the crime, making reoffense more likely
    • Greater use of restorative justice can lead to a model of criminal justice that avoids the problems outlined above
  • Restorative Justice: A First Step
    • T.F. Marshall: restorative justice is a 'process whereby all parties with a stake in a particular offence come together to resolve collectively how to deal with the aftermath of the offence and its implications for the future'
    • Two particular forms
      • Victim-offender mediation
      • Restorative conferencing
    • Goal is to "restore" the offender back to a lawful citizen
    • Meetings are held by a trained facilitator and are only possible when the offender admits guilt
    • Provide an opportunity for constructive dialog
    • Participation must be voluntary
      • So, that's my first objection: it's easy to have voluntary participation in a pilot program, but when a program is rolled out to a larger population, voluntary participation often becomes "voluntary" participation - that is, non-participation results in the revocation of particular privileges or in social opprobrium
      • In this case, because of power relations, I would say that one of the primary risks is that the justice system exchanges leniency for an admission of guilt and participation in victim-offender mediation
      • This would hurt both victims and offenders - victims could be re-victimized by disingenuous apologies and admissions of guilt, whereas offenders who truly believed they did nothing wrong (and who might be correct in that belief) are further penalized by a justice system that is already well predisposed to assume that all who come into contact with it are guilty
    • Meeting structure
      • Facilitator clarifies the parameters and purposes of the meeting
        • Facilitator role is open to anyone who can achieve accreditation, much like magistrates
      • Victim then has an opportunity to speak, to address the offender and state how their crime impacted them
      • Others impacted by the crime (friends, families, etc) may also speak
      • Offender speaks last, and accounts for their crime – typically includes apology
      • Meeting ends with the participants confirming a restorative contract
      • Offender can reject this offer and endure traditional sentencing
    • 2 central ideas drive restorative justice
      • Understanding fosters healing
      • Effectiveness -
        • Participants in restorative meetings confirm higher satisfaction in the process than alternatives
        • Reoffense rates under restorative justice programs are up to 25% lower than reoffense rates under conventional sentencing
    • Restorative justice establishes the flexibility we need in sentencing to tailor outcomes to offenders
    • Contract will often include elements not present in traditional sentencing, such as psychological treatment, drug rehab, anger management, etc
  • The need for a new model
    • Restorative justice is currently being held back from wider application
    • Restricted to minor offenses and not more serious crimes
    • Limited range of outcomes
      • Cannot recommend prison or even probation
        • Proponents of restorative justice have even argued that prison is criminogenic - counterproductive to the goal of reducing crime
      • However, the lack of prison as an option hampers restorative justice's ability to be applied to more serious crimes
    • It is possible to make prison compatible with restorative justice by recasting prison as a way to provide intensive interventions when other options are not feasible
    • Moderate or high-intensity psychological treatment can bring cost-savings of between 1.8 and 5.7 times the cost of implementation
  • Punitive Restoration
    • This hybrid model of restorative justice plus punitive prison time can be termed "punitive restoration"
    • Expanding restorative justice to include punitive options should lead to a less punitive justice system overall
    • Allowing punitive options could allow for more widespread use of restorative justice techniques while maintaining public confidence in the justice system

Anger And Trauma

  • Why is anger a common response to trauma
    • Anger is a core part of the survival response in human beings
    • Helps us cope with life's stresses by giving us the energy to keep pushing in the face of trouble
    • Helps by allowing us to shift our focus towards the problem that has to be solved instead of ourselves
    • However, this anger can create major problems in the personal lives of those who have experienced trauma
  • How can anger after a trauma become a problem?
    • With PTSD, a person's response to extreme threat can become "stuck" and becomes their response to all stressors
    • This automatic response can create problems in the workplace and in family life
    • Three key aspects of post-traumatic anger
      • Arousal
        • Muscle tension
        • Feeling of being keyed up or on edge
        • Irritability
        • Can lead someone with PTSD to seek out situations that warrant that level of alertness
        • Can also lead to temptation to use alcohol or drugs to reduce the level of tension
      • Behavior
        • Using only aggressive responses when confronted with a threat
        • Impulsive actions
        • Self-blame
      • Thoughts and Beliefs
        • People with PTSD believe that threat is all around, even when this is not true
        • They may not be aware of how their thoughts and beliefs have been affected by trauma
  • How can someone get help with anger
    • Arousal
      • Learn skills that reduce the overall level of arousal
      • Relaxation exercises
      • Self-hypnosis
      • Physical exercises
    • Behavior
      • Look at usual behavior when confronted with stress
      • Expand range of possible responses
    • Thoughts/beliefs
      • Increase awareness of thoughts
      • Come up with more positive thoughts to replace anger
      • Use role-play to practice recognizing thoughts that trigger anger and apply more positive thoughts instead

Review: Sarah Schulman's Conflict Is Not Abuse presents a shift in thinking about power relations, harm and social responsibility

  • The premise of the book is that we are conflating conflict with abuse
  • Conflict is "power struggle", and is a normal part of existence for individuals, groups and states
  • Abuse is "power over" - lopsided domination by one side over another
  • The terms come from the social work community and are used at the interpersonal level, but they can also be used to provide moral clarity when looking at group and state conflict
  • According to Schulman, we live in an era of "overreaction to conflict, and underreaction to abuse"
  • Distinguish overstatement of harm from the harm itself
  • Danger of mischaracterizing conflict as abuse is that it lowers the bar for what we term abuse, allowing abusers to claim the position of victim, even when they have a weapon and have the full force of the state backing them up

2018-04-16 RRG Notes

Words As Hidden Inferences

  • Our brains automatically categorize things by similarity, regardless of the formal logical rules we set out
  • This is a good thing
  • The brain does not treat words as purely logical constructs
  • Given that, it is a mistake to rely on any system of thinking that relies on you being able to treat words as purely logical constructs
  • The mere act of creating a word causes your mind to create a category and thus trigger unconscious inferences

Extensions And Intensions

  • An intensional definition is a definition given in terms of other words, as a dictionary does
  • An extensional definition is a definition given by showing a cluster of objects that share the given property
  • Both are ways of communicating concepts to others
  • Both have their limitations
    • A complete intensional definition that fully captures a concept is extremely verbose and unwieldy, even for simple concepts
    • A complete extensional definition for a concept might require enumerating an infinite set
  • The strongest definitions use a combination of intensional and extensional communication to draw a boundary that captures a concept
  • However, even a perfect definition is only instructions for building a concept, not a concept in and of itself
  • You can't control a concept's intension, because most intensions are applied subconsciously
  • As a result, you can't make a word mean anything you want - you don't have full control over the meanings your brain assigns to words

The Cluster Structure of Thingspace

  • The notion of a configuration space is a way of translating object descriptions to positions in a multidimensional space
  • We can think of individual objects as being positions in a vast multi-dimensional thingspace
  • We can visualize categories as clouds within this thingspace, encompassing many individual objects
  • This gives us a way of thinking about what constitutes a "typical" element of a category - it is the object that is closest to the center of the cloud (a toy numerical sketch follows this list)
  • Visualizing categories in this manner allows us to retain flexibility regarding atypical members of a category
    • Instead of arguing whether ostriches and penguins are or are not birds, we can say they're atypical birds, whereas a robin is a more typical bird
  • Most intensional definitions will have a few exceptions, but they can still be useful if they can broadly demarcate a "cloud of things"
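
A minimal sketch of the "typical member = closest to the center of the cloud" idea - the birds, the three 0/1 features, and their values below are made up purely for illustration, not taken from the essay:

```python
# Toy thingspace: each bird is a point whose coordinates are crude, made-up
# 0/1 features (flies, small body, perches in trees). Typicality is modeled
# as distance to the centroid ("center of the cloud"): smaller = more typical.
from math import dist  # Euclidean distance (Python 3.8+)

birds = {
    "robin":   (1, 1, 1),
    "sparrow": (1, 1, 1),
    "finch":   (1, 1, 1),
    "crow":    (1, 0, 1),
    "duck":    (1, 0, 0),
    "ostrich": (0, 0, 0),
    "penguin": (0, 0, 0),
}

# The "center of the cloud" is just the coordinate-wise mean of the cluster.
centroid = tuple(sum(v[i] for v in birds.values()) / len(birds) for i in range(3))

# Rank members by distance to the centroid; robins come out as typical birds,
# ostriches and penguins as atypical members of the same cloud.
for name, features in sorted(birds.items(), key=lambda kv: dist(kv[1], centroid)):
    print(f"{name:8s} distance to centroid = {dist(features, centroid):.2f}")
```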

Similarity Clusters

  • Intensional definitions need only to serve as pointers to similarity clusters
  • Once we have the similarity cluster in our head, we can think about the cluster directly, without necessarily worrying about whether a particular element of the cluster satisfies every aspect of the intensional definition, or whether the intensional definition captures things outside of the cluster

Typicality and Asymmetrical Similarity

  • Our notions of typicality bias our thinking
  • People say that it is more likely that a disease will spread from robins to ducks than from ducks to robins, presumably because robins are the more "typical" bird
  • People say that 98 is closer to 100 than 100 is to 98
  • People take typicality as an inherent property of an object, rather than a property derived from the set that the object belongs to
  • Kansas is "close" and Alaska is "far away" regardless of where you are, because Kansas is a central state whereas Alaska is a peripheral state
  • This is another reason to stop pretending that words can be treated as purely abstract classes

2018-04-09 RRG Notes

Religion's Claim To Be Non-Disprovable

  • In prior eras, people believed in religion, rather than believing that they ought to believe in religion
  • Religion was seen as an accurate source of historical and scientific information
  • However, as other institutions have taken over from religious authority on science and history, ethics is what's left
  • But why should we trust religion to be any more correct about ethics than it is about science or history?
  • The idea that religion is a "separate magisterium" that cannot be proven or disproven is a big lie
  • For the majority of human history, religion was something people did try to prove - it was only when this proof was not forthcoming that people retreated to the notion of religion being a separate magisterium

Professing and Cheering

  • Dennett suggests that much of what is called "religious belief" should be called "religious profession"
  • There is another form of explicit belief that's more like "cheering"
  • Many of the more ridiculous forms of expression of religious belief are more like cheering especially loudly for a particular sports team

Belief As Attire

  • Another form of improper belief is belief as group-identification
  • Beliefs used for the same purpose as religious clothing
  • Belief-as-attire may help explain how people can be passionate about improper beliefs
  • It's hard for someone to be passionate about things that they don't anticipate to be true
  • However, it is very easy for someone to be passionate about group identification
  • Therefore, the most passionately held beliefs are often those used for group identification, rather than beliefs held as belief-in-belief or religious profession

Applause Lights

  • The substance of a democracy is the specific mechanism that resolves policy conflicts - no need for government at all if all groups have identical policy preferences
  • It's meaningless to call for "democracy" without having a specific conflict resolution mechanism in mind
  • There are words and phrases that act as "applause lights"
    • Convey no factual information
    • Just tell people that they ought to cheer (or boo)
  • Most applause-light phrases can be detected by a simple reversal test - if reversing the phrase sounds abnormal, then the original phrase is probably an applause light
  • While there can be legitimate reasons to say applause-light-like sentences, if no meaningful specifics follow, then the sentence is probably serving as an applause-light
  • Eliezer could probably give an entire speech, speaking for hours, saying nothing but applause-light sentences, and not only would the audience not laugh, but his social status would probably improve
  • I've seen way too many "policy" discussions that were exactly this

2018-04-02 RRG Notes

Making Beliefs Pay Rent (In Anticipated Experiences)

  • Beliefs should have consequences
  • Beliefs should constrain future experiences
  • Empiricism is the process of asking what experiences our beliefs predict and what they prohibit
  • When arguing about beliefs, ask about what differences in anticipated experience those beliefs would result in
  • If you can't find the difference, you're probably arguing about which name you should put on the label for a particular experience, not the experience itself
  • My thoughts
    • This is probably one of Eliezer's best essays
    • Succinctly explains what empiricism is

A Fable of Science and Politics

  • Science fiction parable, in which people have been driven from the Earth's surface
  • Have no idea what the sky looks like any more - the only knowledge comes from books that describe the sky as "cerulean"
  • Debate arises over whether cerulean is a shade of blue or a shade of green
  • This debate takes on political dimensions and leads to violence
  • As a result, the question of whether the sky is blue or green isn't a simple question about color - it's entangled with many other social and economic beliefs
  • One day there's an earthquake and a path to the surface opens up
  • What happens next depends on who takes the path
    • Aditya the Blue - sees the color of the sky as vindication for all the Blues have fought for; decides to end the truce between blue and green
    • Barron the Green - sees the color of the sky as proof that the universe is evil
    • Charles (moderate Blue) - saw the knowledge itself as dangerous; vows to come back the next day and seal off the path
    • Daria, a green - forces herself to look at the sky and change her mind, even though it's hard and painful
    • Eddin, a green - struck by the pointlessness of all the conflict over something as simple as color
    • Ferris - just notices the color and proceeds to explore the rest of the surface world
  • My thoughts
    • The possibility that Eliezer misses is that the people who believe the sky is green look up and see a green sky, and the people who believe the sky is blue look up and see a blue sky
    • Color is culturally mediated - societies don't develop words for color until they develop the dyes for those colors
      • This is why, etymologically, words for "blue" are often thousands of years younger than words for other colors - blue dye is the hardest to develop
    • Small children, when asked the color of the sky, will say that the sky is white. It is only after they read books and are corrected by their parents that they say that the sky is blue
    • Homer describes a "wine-dark sea" - we see the sea as having a totally different color than wine, but perhaps that's a result of our culture drawing the borders between colors differently
    • Russians are quicker to distinguish between light blue and dark blue, because Russian has different words for those two colors - culture primes us to notice certain distinctions and dismiss others as irrelevant

Belief in Belief

  • It's often much easier to believe that one ought to believe something than it is to actually believe the thing
  • People often claim to believe something even when they don't anticipate the outcomes of that belief
  • These people often make excuses to pre-emptively explain away experimental results that contradict the belief they profess to hold
  • This means that they hold a true model of the world somewhere within their mind, but it's become decoupled from the beliefs they profess to hold
  • In many ways, people who genuinely believe something are easier to convince because you can debate them at the object level by mustering evidence
  • My thoughts
    • It's good to distinguish between first and second order beliefs
    • Other than that, it's a standard Eliezer piece

Bayesian Judo

  • Eliezer encounters someone who asserts that Artificial Intelligence is impossible because intelligence requires souls and souls can only be created by God
  • Eliezer responds that if he does succeed in creating an AI, then that proves that person's religion false
  • Person attempts to retreat by saying that they were referring to emotional experience
  • Eliezer replies by saying that if the AI appears to have an emotional life, then, it proves that person's religion wrong
  • Person says that they might have to agree to disagree, but then Eliezer uses Aumann's agreement theorem to assert that rationalists cannot agree to disagree
  • My thoughts
    • Let me just say, Eliezer is being a TREMENDOUS ASSHOLE
    • The other person was just trying to retreat and save face
    • But no, Eliezer can't just let it go
    • Never ever behave like this. Even if you're right, proving that you're right is not worth being a dick, unless real money or lives are at stake

Pretending To Be Wise

  • Many people signal wisdom by refusing to pass judgement
  • There is a real difference between suspending judgement and asserting that every point of view is as plausible as every other point of view
  • There is a real difference between skepticism and relativism, between doubting a particular answer and asserting that all answers are equally valid
  • Remember that neutrality is also a judgement - refusing to choose sides is also a choice
  • There is a difference among:
    • Passing neutral judgement
    • Refusing to pass judgement
    • Pretending that either of the above is a mark of deep wisdom that sets you above the rest
  • My thoughts
    • This highlights the sort of attitude that allows e.g. global warming denialism

2018-03-26 RRG Notes

Who Worships An Evil God

  • One of the most alluring themes in HP Lovecraft's mythos is the concept of "knowledge that drives you insane"
  • People often think of this as "creepy brainwashing magic", but in reality indoctrination is a more fitting analogy
    • People who read "red-pill" forums and start seeing everything in terms of sexuality and sexual conquest
    • People who become heavily influenced by gender studies and start seeing everything in terms of intersectional oppression
    • People who read too much Robin Hanson, and see everything in terms of signalling
  • However, even these are pretty weak in comparison to the power attributed to the Elder Gods and the Necronomicon in the Cthulhu mythos
  • People in the rationalist community have deified abstract forces, like Moloch, or Ra
  • While Moloch and Ra actually refer to impersonal processes and their results, they sound like what would happen if you give yourself over to some impersonal, greater thing
    • Is that how it works in the original mythos? I'm not sure
  • This project seeks to catalogue these Lovecraftian entities and the sorts of mind-consuming beliefs that make people swear allegiance to them

Cthuga, The Living Flame

  • Made of fire
  • Seen imprisoned in a star
  • Worshipped by people who don't want a single story or intellect to be lost
  • Geeks, rationalists, singularitarians, Silicon Valley utopianists
  • Believe that there is a piece inside each person that is their essence
  • Don't want this essence to be lost
  • This is the god of the anti-deathism movement
  • People in this movement are avoidant of social interactions in real life but seek out social interactions mediated by technology
  • Want to meet other people as minds, not bodies
  • To Cthuga, the Internet and social media is a big positive step, but it's only a step
    • Knowledge can still be lost
    • Only captures what people choose to write down
  • View people as a collection of "memes and masks"

Yog Sothoth, The Lurker At The Threshold

  • Seen as an endless mass of glowing orbs, eyes and tendrils
  • Yog Sothoth is the egregore of natural laws
  • Yog Sothoth is the miracle of "fine tuning" - that the laws of the universe are exactly those that allow complex intelligent life to evolve
  • Yog Sothoth is the promise that the mysteries of the universe can be known

Hastur, The Unspeakable

  • Hastur is the god of Grand Narratives
  • Manifests as a character in a play, The King In Yellow
  • Symbol of stories that are more important than realities
  • People who worship Hastur are people who wish their lives would be part of some kind of epic story
  • The promise of Hastur is that your life can have the same amount of meaning that people's lives in stories do

Ithaqua, The Wind Walker

  • Ithaqua is the egregore of isolation and introspection
  • It is the spirit of self-expression, of doing your own thing for the sake of doing that thing, not because of any social expectation
  • Ithaqua and Cthuga oppose each other - Cthuga pushes for many weak connections among people whereas Ithaqua pursues one or two strong connections, and prefers isolation to weak connections
  • Ithaqua is associated with trauma - something causes a break with society, a realization that one doesn't have the same values as everyone else
  • Ithaqua is about pursuing autonomy

Cthulhu Lies Dreaming

  • Cthulhu is by far the most famous of HP Lovecraft's egregores
  • Cthulhu's worshippers have a faith that their god will one day awaken and set them over the rest of the world
  • This closely corresponds to tribalism and nationalism
  • The notion of a "silent majority" supporting conservatism is very close to the notion of Cthulhu being a sleeping but powerful God
  • Cthulhu is the egregore of chauvinism and sentimentality, and "my country right or wrong"

Shub Niggurath, The Black Goat Of The Woods With A Thousand Young

  • Shub Niggurath is the egregore of "animalistic" drives
  • Not just lust and hunger but also mercy and compassion
  • People devoted to Shub Niggurath see civilization as a veneer over animalistic desires
  • People who want to destroy civilization and return to a state of nature are devotees of Shub Niggurath
    • Eco terrorists who want to destroy "alienating" technology
    • Also Fight-Club/PUA types who want to turn the world into a Darwinian war of all against all

Nyarlathotep, The Crawling Chaos

  • Nyarlathotep is the egregore of manipulation
  • Example:
    • Sir Humphrey, in Yes, Prime Minister
    • Petyr Baelish, in A Song of Ice and Fire
  • Not just book smart, but clever - can make people pay attention to him and convince them with words
  • Egregore of social engineering
  • Values sophistication over "crude" strength
  • Nyarlathotep values manipulation for its own sake, for the fun and control it gives them

Azathoth, The Nuclear Chaos

  • Worst and most powerful entity of the Lovecraft mythos
  • Azathoth is the knowledge that everything is an approximation, and thus, in some sense, is a lie
  • People who are aligned with Azathoth are people who've been let down by models
  • People who insist on dealing with the full complexity of the world, even when they would be happier or would perform better with a simplified model
  • People who have succumbed to nihilism are people who are disciples of Azathoth

Epilogue: Tsathoggua

  • Tsathoggua is the last and the least threatening of the egregores
  • Tsathoggua represents the spirit of not caring
  • Ultimate disaffected hipster
  • The spirit of Tsathoggua is charming in small doses and addictive in large ones
  • Tsathoggua and Azathoth are both nihilist egregores, but whereas Azathoth is active nihilism, Tsathoggua is passive nihilism

2018-03-19 RRG Notes

Newtonian Ethics

  • We can refer to morality as a force
  • But what kind of force is it?
  • We can think of morality like gravity, and assess it by the same inverse-square rule
  • The closer you are to wretchedness, the more morally reprehensible your refusal to help becomes
  • We can use this inverse-square rule to calculate how far away you must be from a beggar in order for your moral right to your money to be outweighed by theirs (a toy sketch follows this list)
  • Calculations performed with this rule line up remarkably well with people's actual moral actions
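
A toy sketch of that satirical inverse-square calculation - the constant, the "moral mass" inputs, and the function name are all made up for illustration, not taken from the essay:

```python
# Toy "moral gravity": obligation scales with the parties' (made-up) moral
# masses and falls off with the square of the distance between them.
G_MORAL = 1.0  # arbitrary constant, purely illustrative

def moral_force(need: float, capacity_to_help: float, distance_m: float) -> float:
    """Inverse-square 'obligation' between a person in need and a potential helper."""
    return G_MORAL * need * capacity_to_help / distance_m ** 2

# The felt obligation toward the same beggar at increasing distances:
for d in (1, 10, 100, 10_000_000):  # a meter away vs. the other side of the world
    print(f"distance {d:>12,} m -> obligation {moral_force(5.0, 2.0, d):.2e}")
```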

The Copenhagen Interpretation of Ethics

  • When you observe or interact with a problem in any way, you will be blamed for it
  • Even if you make the problem only slightly better, you'll still be blamed for not solving the problem entirely
  • Examples:
    • New York City decided to track those who were and were not selected for its Homebase program as experimental and control groups
    • Austin: BBH Labs outfitted homeless people with wireless hotspots and paid them to move around, providing wifi coverage
      • I actually agree that outfitting homeless people with wi-fi backpacks and turning them into living hotspots is problematic, insofar as it says a lot about the sort of people who would do that
      • It tells me that BBH Labs doesn't see people as people, with needs and dignity, it sees them as "human resources", whose very existence is, and ought to be, predicated on their ability to provide economically valuable services
      • Sure, they helped, but the way in which they helped tells me quite a lot about what sort of people they are - they are the sort of people who would say that homeless people should starve if they're not willing to sacrifice their dignity
      • These people probably believe in Maslow's hierarchy and think that these homeless people don't care about dignity and self-actualization because their basic needs are not being met
      • And of course, rationalists love these people because honestly, rationalists are utilitarians at heart, and so they're perfectly okay using "maffs" to make hard tradeoffs without being willing to consider what it says about their level of consideration for their fellow man
    • Uber: surge pricing gets more drivers onto the road, which then assures more people get rides
      • I've had this discussion with so many people over the problem of price-gouging during hurricanes, etc.
      • The core problem here is that it's making this equivalence between "ability to pay" and "deserving of goods and services", which is just the is-ought fallacy
      • Rich people can afford surge pricing, so they, in a sense, deserve Uber rides, whereas the poor have to take the bus
    • Paying women 80% of the salary of men still saves the employer money, and makes those women better off than not being hired, even if they're not 100% as well off as men
      • This is such an obvious solution to discrimination that the fact that discrimination persists in the face of this arbitrage pressure means that there's something else going on
      • Also, this assumes that surge pricing actually does get more drivers on the road, and isn't just consumed as a windfall profit by existing drivers
    • PETA offered to pay poor Detroit residents' unpaid water bills, if they'd give up eating meat
      • See above, re: BBH labs and human hotspots - this tells me that PETA are the sort of people who prioritize ideology over basic human needs; i.e. assholes
      • They could have just donated the money to pay for the water bills and launched a PR blitz over it, but no, they had to go and restrict one basic need (food) for another (water)
    • People will routinely spend massive amounts of money on themselves when said money could have been given to less fortunate people
      • The analogy used here is Peter Singer's drowning-child analogy, which others have already written counterarguments to
      • If you see a single drowning child, once, then yes, you are morally obligated to stop and help
      • But if you see a child drowning in the same spot, in the same manner every day on your way to work, then at some point it's no longer on you to save the child
  • We should live in a world where noticing and partially alleviating a problem doesn't make you responsible for the entire problem
    • The problem with this article is that very few of these examples are problematic because the parties were responsible for partially alleviating the problem
    • The problem is that they partially alleviated the problem (arguably) in a manner that further degraded the already low level of human dignity of the people they were trying to help
    • If Peter Thiel organized a fighting tournament, where he gathered up the assorted homeless off San Francisco's streets, paid them 10 grand each to participate, took care of their injuries, and had a million dollar grand prize, would that arguably make people better off? Yes. Would we want to encourage that? Of course not!

"Ethics" is advertising

  • Most "Buddhist" ethics is advertising
  • "Buddhist" is in quotes because the modern leftist conception of Buddhist ethics bears no relation to actual historical Buddhist ethics
  • Consensus Buddhist ethics is repackaged leftish morality
  • So what is "Buddhist ethics" for?
  • Buddhist ethics is about signalling that you're a good/ethical/moral person
  • It's like calling yourself a "god-fearing Christian" in parts of the South
  • Buddhist ethics used to be a costly signal, but now it's lost much of its signalling value
  • Costly Signalling
    • Religion is a costly signal that you're an ethical person
    • Aside: Glaceau SmartWater
      • The point of SmartWater isn't that it's better or more pure than any other water
      • The point of it is that it costs more
      • The fact that you can afford to drink SmartWater signals something about you
    • The Dalai Lama is an obviously ethical person, and endorses Buddhism, so Buddhism must be an ethical religion
      • How do you know that the Dalai Lama is an obviously ethical person?
      • What has he done that's especially saintly?
      • How about holding his people together, and advocating for a nonviolent solution to the invasion of Tibet? Any other leader would have started an insurgency, or called for armed intervention, but the Dalai Lama's commitment to nonviolence was so strong that he refused offers by India to intervene against China on his behalf, and continues to call for good-faith negotiations with a regime that has done nothing but treat him poorly
      • Seriously, I have to believe that Chalmers is being deliberately obtuse here. He knows exactly why the Dalai Lama is considered ethical
    • By quoting the Dalai Lama, or, even better, saying something and attributing it to the Dalai Lama, you can make your statements carry more ethical weight
  • Signalling tribal commitment
    • The Baby Boomers split America into monist and dualist tribes
    • Each of these tribes required its members to hold a wide variety of beliefs and agree on a wide range of preferences
    • However, actually checking all those beliefs and preferences is almost impossible
    • Instead each tribe adopted "badges" - easily verified signals that could stand in for belief sets
    • Badges are only effective if they're policed
    • Much of what happens at both monist and dualist cultural events is "badge-checking" - if you sport the badge, but then admit to having some of the beliefs or preferences of the other tribe, you are deemed an impostor and thrown out
      • I think Chalmers is massively overstating how much people actually care about their tribal identity
      • Honestly, and maybe this is because I've grown up in the Midwest, I've found that people don't actually care that much - you'll get some good-natured ribbing, but unless you deliberately take offense at statements, it's pretty easy to live and let live
  • Signalling moral piety
    • One of the main reasons Christians go to church is to signal their moralness
      • Really? The fact that the church is the main community institution in small-town America has nothing to do with it?
    • The leftist-hippie-monist tribe rejected Christianity, so they needed a replacement for the church to signal moralness
    • Until about the '80s, consensus American Buddhism had little to say about ethics
    • Ethics were added on to give consensus American Buddhism a moral signalling function
    • Nobody buys into Buddhism for its actual ethical content
    • They adopt it because it's a signal for a certain value system
    • "Buddhist ethics" provides an extra layer of justification for mainstream secular ethics
  • Signalling class
    • The American class system
      • Taboo topic
      • Social class is not economic class
      • While the two are correlated, there are "working class" people making more than a hundred thousand dollars a year
      • There are also "upper-middle-class" people making 30,000 a year
        • While I've personally witnessed the existence of the former, I have my doubts about the existence of the latter. A teacher making $30,000 a year won't get invited to an upper-middle-class party unless they're married to someone making a lot more
      • The middle class is a series of progressively smaller social clubs
      • Lower middle class: need to have the right general attitudes, chief among them the desire to be respectable
      • Middle-middle class: need to have the correct set of leftish or rightish opinions
        • Okay, but then aren't the working class people making six-figure incomes actually middle-middle class then? You'll be hard-pressed to find a plumber, electrician or roughneck who has an "incorrect" opinion, from the perspective of the right-ish opinion set
      • Upper-middle class: need to be able to figure out the correct opinion on a new topic - this used to be what a liberal arts education was for
        • So what would be the conservative counterpart to this? As far as I can tell, the humanities haven't been a good home for political conservatives for almost a century, so it's unlikely that conservative Baby Boomers would have received their upper-middle class cultural education in a liberal arts course
    • The upper-middle class is selective, and selects for people who would make valuable allies
      • Everyone wants to move up, so people devise ways of signalling these qualities even if they don't have them
      • This leads to an arms race
      • As tests become more easily gamed, they move down and become a barrier preventing the lower-middle class from joining the middle-middle class
    • Eventually all middle-class people figure out the tests, and they lose their value entirely
    • Many upper-middle-class signals are derived from Protestant Christianity
    • The role of Buddhist ethics is to allow people from outside this background to signal that they too are upper class
    • Prior to 1980, being a Buddhist carried a social cost, and required one to invest effort to find texts and teachers
    • However, as Buddhism has become more widespread, the cost to becoming a Buddhist has declined, now making Buddhism more of a middle-middle class virtue rather than an upper-middle class virtue
  • Signalling openness
    • Openness to new experience is considered a virtue
    • While Buddhism is a low-openness religion in Asia, its unfamiliarity in the West makes it a signal of high openness
    • However, as Buddhism gained wider adoption in the US, its alien elements and complex concepts were replaced with simpler Western substitutes
    • This rendered Buddhism emotionally safe, but a poor signal of openness
  • Buddhism: badge of blandness
    • At one point Buddhism was a bold choice and was a signal that you could make and defend unconventional choices - an upper-middle class virtue
    • But today, Buddhism is an utterly bland religion, and, as such, is a better signal of middle-middle class conformity
  • Signalling agreeableness
    • When dealing with a non-hostile agent, agreeableness is a good thing
    • Consensus Buddhism is a strong signal of agreeableness
    • Moreover, consensus Buddhist practices are a good way of developing agreeableness
  • Buddhism is for losers
    • At this point, saying that you're a Buddhist signals that you're a loser
    • Signals that you're ineffectual as a person
  • We can do better
    • Let's have more than one kind of Buddhism, and free Buddhism from its role as a signal of class
    • More generally, let's have signals that generate positive externalities, rather than negative ones
  • Some of what Chalmers says is good… but there's just so much pretentiousness around it that it's hard to swallow
  • There's also the fact that he sees everything in the world through his prism of "two competing countercultures", which he broadly identifies as the religious right and the hippie left
  • If you accept his conceit, then much of everything else he says follows logically - the problem is that he doesn't back up his conceit with evidence
  • Moreover, how common is Buddhism, really? Even in liberal enclaves, it doesn't seem like Buddhism is especially common
  • It really feels like Chalmers is generalizing from extremely narrow experiences and stereotypes to a diagnosis of what is wrong with American society

Caring Less

  • Why don't more attempts at persuasion ask you to care less about a particular issue?
  • People only have a finite amount of time, money and energy anyway - asking them to care more about something is implicitly asking them to care less about everything else
  • Asking people to care less allows you to be more deliberate about the tradeoffs you're asking people to make
  • Being constantly asked to care more is exhausting
  • Why don't we see more calls to care less
    • Brains can create deliberate connections easily, but find it difficult to erase connections deliberately
    • It's much easier to learn a skill than it is to forget it
    • It's not obvious how one should care less
      • This might be more cultural
      • You can care less about something by putting fewer resources into it… and then deliberately not feeling guilty about that
    • It threatens identity
      • People tend to make the things they care about part of their identity, so being asked to care less makes it seem like you're being asked to give up part of your identity
    • It's less memetically fit - messages to care more are more viral and spread better than messages about caring less
    • It's dangerous - maybe by telling people to care less about a particular thing, you'll cause them to care less in general
    • We already do tell people to care less - religions, such as Buddhism, advocate relinquishing earthly cares

2018-03-12 RRG Notes

Kensho

  • Valentine had a moment of kensho
    • Moment of insight or clarity
    • Preview of what true enlightenment is like
  • Attempts to share what it feels like to have that experience have mostly been failures
  • Enlightenment isn't an insight
    • It's not a matter of learning something new
    • It's a matter of seeing what's already there
    • That's not insight? Reinterpreting existing knowledge is totally an insight!
    • Every profound insight looks stunningly obvious after you've had it
  • So instead of attempting to share Kensho, Valentine is talking about why talking about kensho is difficult
  • Parable
    • Imagine a world in which people have forgotten to look up from their phones
    • Somehow, let's say a person looks up from their phone
    • Can see the world directly
    • Oh my god, are we literally reinventing the parable of Plato's cave?
    • Can you try to communicate the concept of looking at the world directly?
  • Example:
    • People misinterpreted Valentine saying that things were "okay" as either
      • Normative: every outcome is equally morally good
      • A statement about feelings: you'll always feel good when you're enlightened
    • The problem is that saying that things are "okay" isn't a statement, so much as it is an instruction
      • Examine the real world
      • Set aside interpretations and just look
    • Most people don't have the type of conceptual gears needed to understand what enlightenment is about
    • However, instead of recognizing that they don't have the intellectual tools to comprehend enlightenment, they put enlightenment into the nearest conceptual category they have
    • This actively works to stop people from understanding enlightenment
  • There is a skill analogous to "looking up" - let's call it "Looking" with a capital "L"
  • You need this skill to bypass a trap where your methods of gathering data preclude the ability to get an entire dimension or type of data
  • Once you can Look, you can see things that prompt the creation of Gears, which change how you see the world
  • Things that previously seemed incoherent or mystical will make obvious intuitive sense
  • Some of those things really matter
  • So how does one learn to Look?
    • Nobody really knows - people with varying degrees of enlightenment have been trying to answer the question for thousands of years
    • Being able to switch between frameworks is helpful
    • People are, in some ways, already enlightened
      • Even here, his argument is just wrong. Building monuments to loved ones is not a human universal. Zoroastrians leave their bodies out to be consumed by carrion birds. Other cultures cremate their dead and scatter their ashes, or put them in a holy river
      • Also, that link for "beautiful monuments to honor lost loved ones" is hilarious. Does it go to an image of the Taj Mahal? No. A picture of the Great Pyramid of Giza? No. A picture of a beautiful cemetery? No. It goes to some modernist grotesquerie built at Burning Man.
    • Valentine's kensho was deliberately induced - and he plans to write about the methods of induction in a future post
  • To be honest, I don't have a whole lot to say beyond what Said Achmiz said in his comments - the post talks about kensho, and talks about why kensho is hard to communicate about, but doesn't communicate why kensho is important. Why should I care about kensho? Why should I care that other people have kensho? Do these people somehow get superpowers? If I Look at something, do I shoot lasers out of my eyes at it, once I've had a kensho of the 3rd level of the 9th dan?
  • https://www.greaterwrong.com/posts/tMhEv28KJYWsu6Wdo/kensh#comment-hYiyo8JbgXtT5wcZx
  • As far as I can tell, all this enlightenment, kensho, whatever, is a set of ways of detaching your own experience from your emotions and interpretations, allowing you to perceive sensations as sensations, without attaching any kind of interpretive or normative content to them
    • The reason for doing this is that this allows you to understand why you think what you think and why you feel what you feel
    • This allows you to clearly engage with other ways of thinking and experiencing the world, without dismissing those ways because they're not superior according to the criteria you use for evaluating such things
    • This allows you to productively introspect and retain equanimity in the face of adversity, because you can examine your reactions as if they were separate things and see what caused them, and why you interpret your reactions in the way that you do
  • See, I explained why you want kensho. Was that so hard?

Universal Love, said the Cactus Person

  • Given the highly allegorical nature of this particular post, I've had to modify the structure much more so than usual in order to make it work as an outline
  • The story is about learning to see the world in an entirely new way
  • Analogy:
    • You're in a car, but you don't realize you're in a car
    • You can operate the steering wheel, the pedals, and all the buttons on the dashboard
    • But when someone tells you to get out of the car, you're not going to be able to do that until you realize that getting out of the car is something that even makes sense to talk about
    • No sequence of pressing dashboard buttons or steering wheel turning or manipulation of the pedals will get you out of the car
  • My problem with this is exactly the problem that Said had with the post above. Before you tell me to get out of the car, you need to explain why I should want to get out of the car

The Intelligent Social Web

  • This is a fake framework
  • Has produced meaningful results
    • Overcoming depression
    • Learning to set aside "performance mode" and show true vulnerability
    • Shifting to a more healthy attachment style
    • Participating in athletics without injury
  • Example: improv scene
    • When you start an improv scene, you have no idea what role you're supposed to be playing
    • Even if you think you know what's going on, your assumptions are often upended by someone else in the scene doing something unexpected
    • Your job as a player is to co-create the scene, not play a fixed character
    • There's no director - instead the "director" can be modeled as a sort of distributed intelligence that arises from the individual players in the scene
    • Players have to be guided, but not purely passive
  • Improv works because what we do in improv is, in a sense, what we do all the time
    • The web of social relationships we're embedded in defines our roles
    • Our web of social connections forms a distributed "director" that exerts a strong influence on our actions
    • Example: when you go visit your parents, after having moved out, there's a sense of conflict as everyone has to readjust to the new person that you've become
    • Other examples:
      • Serial abuse victims
      • Religious conversions
  • Most of us are playing "characters" in a "scene" directed by our social web, and we're completely unaware of this fact
    • https://xkcd.com/610/
    • Careful, you might cut yourself on that Edge (see, I can gratuitously capitalize words too!)
  • The social web holds the position of "Omega" in an ongoing set of Newcomb-like problems
    • Needs to know what kind of role you're playing and how well you're going to be playing it
    • Lots of resources go into computing a model of you
    • How is this model built?
      • Idle chat/gossip
        • People synchronize their impressions of someone they've met
        • Body language and facial expressions
      • People who know what they're doing can manipulate this social web to shape people's impressions of others or themselves, in a way that may be at odds with reality
  • The social web encodes its guidance in the form of stories
    • Story structures our expectations of who plays what roles and how various situations play out
    • Even if we understand the roles and scripts we're playing out, we may not have enough power to change them
    • Learning how to Look can help you discover the roles you're playing and suggest possible ways to change them
  • Is this literally another self-help-for-smart-people thing?
  • A lot of this sort of stuff is mentioned in 'How to Win Friends and Influence People'
  • I'm just getting tired of the weird terminology, and the implication that this is some kind of new insight, when in reality this is well-trodden ground
  • I'm also skeptical of the claims. Even one of those claims would fall under the "extraordinary claims require extraordinary evidence" rule, but all of them? What next, you're going to tell me that it's a floor wax and a dessert topping too?
  • Conclusion: this is either obvious or wrong, and even after reading it twice, I'm not quite sure which it is… and I'm not sure that Valentine does either.

Mythic Mode

  • Fake framework: culture is a distributed intelligence, with people as its nodes
  • The essay "Meditations on Moloch" allowed people in the rationality community to feel viscerally the struggle they were undertaking against coordination problems and bad Nash Equlibria
  • Facts inform culture, but culture is really shaped and guided by stories
  • The real service that Scott Alexander did was to mythify the fight against existential risk
  • Mythic mode is a way of looking at the world through the same story-like lens that Scott used when he wrote "Meditations on Moloch"
  • So why is any of this relevant?
    • Usually when you're stuck in a role, just trying to blindly defy it won't get you very far
    • However, if you can identify a transitional role (like the Hero's Journey, or another similar archetype) you can perhaps find a way to use the social web to help you change roles
    • However, in order to do this, you have to learn how to experience the story from the inside
    • What you're actually doing here is reshaping the way you communicate with others, at a subconscious level
    • Can't do this through rational thought - need to use the tools of narrative, since that's what your system 1 understands best
      • So what's interesting is that I actually do this; but I don't use mythic characters. My go-to character for what to do when everything goes wrong is James Lovell, crack pilot and commander of Apollo 13
      • More to the point, I reach for the preternatural calm that both the Apollo 13 astronauts and mission control displayed when they dealt with that ill-fated mission. I try to be the person who says, "Houston, we've had a problem," when all the lights are blinking red across the board
      • I suppose that's my criticism of "mythic mode" - you don't need myths! There are plenty of real-life heroes out there who kept their heads and succeeded in adverse situations. You just need to read about them
    • It's important to compartmentalize mythic mode - some of the coincidences you see in mythic mode will be mere coincidences, and will not be signs of a grand narrative
    • But, while you're in mythic mode, it's important to not let the tools of ordinary rationality get in the way
  • A cue to step into mythic mode can be a sense of "stuck-ness" or a sense that things aren't playing out the way you thought they would
    • Mythic mode allows you to see and take advantage of coincidences
    • Sigh. This essay was going so well… I was just about to actually agree with it before it went off the rails with this prosperity gospel nonsense
  • Rationalists already use mythic mode, but in a highly limited fashion for toy examples
    • The reason we don't use it for more is because many people are not sure that sandboxing can reliably work
    • But we're already embedded in culture and its influences, and we're already being nudged in various ways
    • So if you don't learn how to deal with subconscious cultural influences in a sandbox, your epistemology is already fatally compromised
  • I agree with all that, but it doesn't seem like Valentine himself is doing an especially good job of maintaining that sandbox
    • I did not get the sense that he was speaking from "inside a sandbox" when he talked about how his arriving in New York at the same time as his Shaolin teacher was a product of the narrative
    • I'm also not at all convinced by Val and Qiaochu_Yuan's insistence that they're managing to sandbox all this woo they're dealing with. One of the pitfalls mentioned in this very sequence of posts is how someone can loudly protest, "This isn't touching me, this isn't touching me!" while the ideas influence their actions, to the point of causing them to join or leave religions, move states, etc.
    • If your epistemics are being eroded by the dangerous ideas you're dabbling with, you're going to be the last one to find out

Mythic Values/Folk Values

  • Every social group has two sets of values: mythic values and folk values
  • Mythic values are the qualities of the group's extraordinary members - the ones that everyone else wants to emulate
    • Mythic values of Catholic Christianity - emulating Christ and the saints
    • Mythic values of nerd culture - figures like Nikola Tesla and Elon Musk, combining deep scientific knowledge with inventiveness and material success
    • Mythic values of conservatism - soldiers, cowboys, strength, courage and physical dominance
      • Nope. Strength, courage, etc, are all part of conservatives' mythic values, but they're means, not ends. The end is self-reliance. The ultimate conservative archetype is Davy Crockett, "king of the wild frontier". A more family oriented conservative archetype would be Little House on the Prairie, or maybe William Munny and his daughter at the beginning of Unforgiven. All people who have the strength to go out and fend for themselves, in the absence of any support from society
  • Folk values are the qualities of the group's average members
    • Nerd culture's folk values center more around trivia and the pursuit of specific hobbies like role playing games, etc.
    • Conservative folk values are liking country music and guns
    • Not sure what Catholic folk values are
  • We should treat mythic values and folk values as separate things, and use the presence of mythic values to separate the leadership of the community from its average members
  • But instead, we pretend that if only we practice the folk values hard enough, we will somehow ascend to having the mythic values

2018-03-05 RRG Notes

You have a set amount of "weirdness points". Spend them wisely.

  • Weirdness is important
    • Without weirdness, we wouldn't have any social progress
    • Everything we take for granted about our present society was once weird
  • Six stages of policy
    • Unthinkable -> radical -> acceptable -> sensible -> popular -> actual policy
    • We've seen this happen with many policies
      • Expansion of suffrage
      • Legalizing same-sex marriage
  • Some good ideas are still in the radical stage
    • Effective altruism
      • Many people think that donating 3% of their income is a lot, much less 10% or more
    • Mitigating existential risk
    • Friendly AI
    • Cryonics
    • "Curing death"
    • I'm not convinced that all of these policies are actually good, and I haven't written down the truly controversial ones like "open borders" or "the abolition of gendered language"
  • People take weird opinions less seriously
    • People are less likely to believe things that sound weird to them, even when there's overwhelming evidence for it
    • Social proof matters - if fewer people believe something, other people will be less likely to believe it as well
    • The halo effect is real
  • We can use this knowledge to our advantage by using the halo effect in reverse
    • If we're normal, we can make our weird beliefs seem more normal
    • Think of weirdness as a currency that you can spend
  • This leads to the following actionable principles
    • Recognize you only have a few "weirdness points" to spend - if you pick one weird cause and push it, you'll have more success than trying to push every weird cause simultaneously
    • Spend your weirdness points effectively - If you believe in a bunch of weird things, advocate openly for the weird thing that's going to do the most good
    • Clean up and look good - if you're dressing unconventionally or sloppily, that burns a lot of weirdness points for little gain
    • Advocate for more "normal" policies that are almost as good - look to see if there are any policies that are within the acceptable range of the Overton Window which could be seen as partial implementation of the "weird" policy you're thinking of advocating
    • Use the "foot-in-the-door" and the "door-in-face" techniques
      • Foot-in-the-door technique: start with a small ask and escalate to larger requests
      • Door-in-the-face technique: start with a large ask, and negotiate down, when the other person objects
    • Reconsider effective altruism's clustering of beliefs
      • Right now EA is associated with donating money and donating it effectively
      • Less associated with career choice, veganism, and x-risk
      • We should continue this compartmentalization - leave x-risk to MIRI, etc
      • Ask people to be more effective with the donations they're already making rather than asking them to donate more and be more effective
    • Evaluate the above with more research
      • We need more evidence about the impact of weirdness on the spread of ideas
      • Literature review and market research

On Weird Points

  • Weirdness as currency is not a good way to talk about weirdness
  • Most successful social movements have a fair number of weird people in them
    • Objectivists
      • Have all sorts of weird beliefs
        • Roads should be privately owned
        • Aristotle is great
        • Altruism is bad
      • Yet an Objectivist became chairman of the Federal Reserve
      • In practice, objectivists absolutely observe an economy of weirdness. If you actually talk to objectivists, they don't talk about how Aristotle was cool. They don't talk about how altruism is bad. They talk about how the Federal Reserve is debasing the currency and how everything would be better if we went back to gold
    • Feminism
      • Much feminist theory has been developed by communists and socialists
      • The concept of "intersectionality" means that if you want to endorse gender equality, then you also need to endorse anti-racism, anti-ableism, anti-poverty, anti-LGBT, etc. etc.
        • This is a good thing? Intersectionality is great as a method of analysis, but it's an actively toxic meme for building social movements
        • Ozy is confusing third-wave feminism with feminism in general
        • This is why feminism has been going backwards since it adopted the intersectionality meme
          • Remember, reproductive rights are now more restricted than they were in 1985
        • Heck this is why Occupy Wall Street was so ineffective as well, and why Black Lives Matter only became effective when they focused on one specific thing: body cams
        • If I wanted to come up with a meme to destroy incipient social movements, I would be hard pressed to find one better than intersectionality
      • Pretty much every good feminist writer is a "fat hairy dyke"
        • Again, not a good thing
    • Evangelical Christianity
      • Has been enormously successful despite the existence of Quiverfull
      • Again, in practice, when you talk to Evangelical Christians, they don't talk about Quiverfull. They talk about how being "born again" has made a huge personal improvement in their own lives, which is just normal religious talk
  • For private figures, there are two important considerations
    • Weirdness is relative to your social group
      • If your group dresses and acts unconventionally, then you're the weird one if you dress or act conventionally
        • Okay, sure, but all this means is that your group is sufficiently isolated from mainstream society that you can get away with dressing and acting unconventionally
        • This is fine for hippies and punk rock, but not fine for movements that actually want to make a dent in the world
    • Concern about EA as a whole becoming perceived as a bunch of rich technolibertarian programmers
      • However, EA already is a bunch of rich technolibertarian programmers
      • As a result, when EA people minimize weirdness, they're going to minimize it relative to their social group
      • Example: atheism - was not even mentioned in the original post on weirdness points, even though it is probably the weirdest actual belief
        • Yes. I do actually understate my atheism when I interact with strangers - I say I'm Hindu, or "from a Hindu tradition", because I know full well that atheism is Weird
  • Public figures should consider their role
    • People being interviewed or organizing meetups may want to project normality, in order to attract the greatest variety of participants
    • However, writers may have an advantage in being weird
      • Writers perceived as original have more influence than writers that are perceived as going on about the same things over and over again
        • Wrong! Originality gets you attention, but not influence. Why did Eliezer change tacks and start banging on the AI X-Risk drum to the point of boring everyone? You have to bore your core audience in order to get your ideas out into the world
      • If you endorse conventional positions, and then endorse a weird position, your audience will think you've gone crazy, rather than taking the weird position seriously
        • Wrong! People took AI X-Risk a hell of a lot more seriously when Serious People like Stephen Hawking, Bill Gates and Elon Musk started talking about it
        • And the reason that Serious People like Bill Gates et. al. started talking about it is because a Serious Philosopher named Nick Bostrom wrote a Serious Book about it
  • You can be weird in private
    • I don't think the original weirdness points article was even talking about private actions
  • Not everyone is weird by choice
    • A trans person can't help but wear a dress in public
    • Communicating well is harder for non-neurotypical people
    • Yes, which is why it's even more critical for them to minimize the expenditure of their weirdness points!
    • The money analogy holds - if you have medical expenses or some other fixed cost eating a large chunk of your income, then it's especially important to watch how you're spending your money elsewhere

The economy of weirdness

  • Model 1: Weirdness is badness
    • People don't like weird things
    • The only reason to be weird is that it's hard to keep your weirdness under control
    • Some characteristics are like this, but it's probably not what people have in mind when they talk about spending your weirdness points wisely
  • Model 2: Weirdness is rarity is bad
    • Weirdness is unusual
    • Being unusual is bad
    • The reason to have a weird trait is because you like the trait and you want to make it less unusual
    • Here weirdness suffers from a coordination problem - if everyone had the trait, then the world would be better off, but no single person can afford the reputational cost to make the trait less weird
  • Model 3: Weirdness among the cool kids is bad
    • This is like the last model but it explains why you want to budget your weirdness
    • What matters isn't how common a trait is overall, but how common it is among cool people
    • The more weird traits you have, the less cool you are, and thus the less your vote counts
  • Model 4: Weirdness is divisive
    • If a trait is weird, it pleases some people while scaring off the majority
    • But this isn't necessarily bad, even from a selfish perspective
      • The trait might please a small group a lot, while being only mildly off-putting to the majority
      • Having a few really enthusiastic followers (while being alienated from the rest of society) is often financially better than everyone being equally indifferent to you
    • Causes and policy views tend to fit into this category
      • This can actually make spending weirdness a good deal
      • Advocating for related weird or extreme policies makes you a "true believer", and can make you more liked by the group that you're appealing to
  • Model 4.1: Weirdness is divisive, the goal is spreading weird traits
    • So far, we've assumed the objective is to be liked or taken seriously
    • What if we change the objective to ensuring that a weird trait becomes common, regardless of whether you choose to express it
    • Example: let's think about an individual that believes the following "weird" ideas
      • People should care a lot about animal suffering
      • People should care a lot about the far future
      • Cryonics should be much more common
      • Public displays of affection should be normalized
      • Polyphasic sleep is something everyone should try
    • In order to spread a weird trait, you have to have it or associate it with yourself
      • This generally promotes having or expressing lots of weird traits
      • Talking about both cryonics and the far future might reduce the number of people listening, but the people who do listen will think about two of your issues, rather than just one
    • The incentive is different for narrowly directed advocacy organizations and their members - there you want to stick to the one issue
    • People disliking you has particularly negative effects
      • If people dislike you and the trait you are trying to spread becomes associated with you, then the trait becomes associated with a dislike
      • This changes the mild dislike vs. enthusiastic support calculation
  • Model 5: Weirdness is local
    • What matters is what people around you find weird
    • You can change the people you hang out with
    • Either explicitly seek out people who share your weird beliefs or just express your weird beliefs and let the filtration happen organically
    • Being weird has a fixed cost - the price of finding a social group that will tolerate the weirdness, but after the cost is paid, the weird trait is free
    • Might be best to spend your weirdness points as fast as possible so that you can more quickly find the people with whom you want to surround yourself
  • Model 6: Weirdness as a signal
    • Weirdness often signals other things about a person
    • Lack of awareness
    • Lack of self-control
    • However, these negative aspects can be mitigated by pointing out the weirdness and presenting extenuating circumstances
  • Model 7: Weirdness is honest
    • Deliberately avoiding weirdness is implicit misrepresentation
      • Mmmm… this sounds awfully close to "radical honesty", which is both weird and bad
    • Being openly weird can mark you as an honest and "authentic" person, which is beneficial in a number of circumstances
    • Having no idiosyncrasies is often as weird as being extremely eccentric
    • Being open about your entire cluster of beliefs makes you seem less flakey or hypocritical when you change the thing you're advocating for
      • Can show how both things are tied to an underlying cause or priority
    • Dishonesty is confusing and tangly - need to check all of the implications of your statements
      • Easier in practice because people are usually not that great about drawing inferences in the moment
    • Makes it easier to get useful feedback because people know what your true priorities are
    • Also makes it easier for people to exploit you, since they know what you're interested in and where your blind-spots are
  • Many of these models have some truth to them, and each probably applies in varying degrees to varying parts of the real world
  • Hard to tell whether people should be more weird or more normal
  • You should treat weirdness differently depending on your goals

Socially optimal weirdness

  • What is the optimal allocation of weirdness from a social perspective?
  • Social costs of people being judged badly
    • People avoid being weird in order to be judged well
    • Depends on whether people are judged relatively or absolutely
    • If people are judged relatively, then a decrease in status of one person is automatically an increase in status for one or more other people, and so the global outcome is neutral
    • If people are judged absolutely, then one person's status can decrease without corresponding increases in other people's statuses, resulting in changes in the overall amount of status
    • Social costs of deception
      • If you actually don't want to interact with people with beliefs contrary to yours, then people hiding their beliefs in order to fit in is actively detrimental to you, because it makes that discovery process more difficult
      • If, on the other hand, you're more interested in smooth social interactions, then people hiding their true beliefs may be beneficial
  • Signaling race
    • In some contexts "not weird" is subject to constant redefinition, so being able to be "not weird" is a sign of social awareness
    • This race takes some effort from weird and non-weird people alike, which would be averted if people didn't avoid weirdness
  • Neutral views
    • If everyone chooses one topic on which to spend their weirdness budget, and accepts the common position on every other view, then virtually all views will be dictated by conformity
      • Is this necessarily a bad thing?
      • Moreover, isn't this pretty much what happens today? It's impossible for someone to be well informed enough to have weird opinions about everything
      • And yet, the status quo does change
      • Everyone chooses one topic, but more than one person can pick the same topic
    • The status quo on almost everything will reign forever
    • It seems socially optimal for people to be at least weird enough for public opinion to be shaped by thought
    • Have multiple non-weird views on every issue
      • Which we have…
  • Economies of scale and congestion
    • Having large groups of people with the same views and tastes makes it easier for those people to consume entertainment
    • On the other hand, for goods where there are not economies of scale (like parks, etc) it's good for people to have more weird tastes, so that there isn't as much congestion and competition for finite resources
  • Standards
    • Having weird preferences for e.g. keyboard layouts can impose a cost when others expect and accommodate only non-weird preferences
    • Language barriers impose costs on people on both sides
  • Variety
    • Weirdness offers variety
    • Some people like variety for its own sake
    • Weirdness offers robustness - diversity of ideas means that when common ideas fail, there will be people with ideas and strategies that still work
  • Information
    • Honesty about weirdness is useful for improving policy
  • Seems unclear what level of weirdness is best
  • The main problem both here and with the previous piece is that everyone is treating weirdness as this one-dimensional metric
  • You can't talk about weirdness in the abstract - the specific subject that you have weird views on matters a lot
  • Example: using Linux on the desktop and holding white supremacist views are both weird
    • Both consist of a tiny tiny minority
    • However, the reaction of the world to your weird views will be very different
  • This entire discussion about weirdness falls into the standard rationalist trap of coming up with extremely convoluted Grand Unified Theories, when it's much more tractable to think about different domains as being separate magisteria and having different standards for each

Signal Inertia

  • Traditional ways of signalling
    • Intelligence: complicated arguments and large vocabularies
    • Health: sports achievements, heavy drinking and long hours
    • Wealth: expensive clothes, trips, etc
  • However we have better ways of signalling now
    • IQ tests for intelligence
    • Medical tests for health
    • Bank statements for wealth
  • So why do the traditional ways of signalling persist
  • Inertia
    • Signaling equilibria require complex coordination and those who try to unilaterally change them seem nonconformist and clueless
  • Hypocrisy
    • Ancient and continuing norms against bragging push us to find plausible deniability for our brags
    • We can pretend that large vocabularies convey information, that sports are just for fun and that expensive clothes are prettier or more comfortable
    • It's much harder to find an excuse to talk about your IQ score or your bank statement
  • Tyler Cowen argues that competency based signals are more efficient than traditional educational signals, and therefore competency based signals should win out over traditional education fairly rapidly
  • But schooling isn't about education - competency-based learning divorces education from its normal social conformity context
  • Employers, in practice, don't want their employees to have much initiative or independence
  • Therefore, success in traditional schooling is a better indicator of workplace success than competency-based learning

2018-02-26 RRG Notes

Information Hazard

  • Concept defined by Nick Bostrom
  • Defined as: "a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agents to cause harm
  • Pointed out in contrast to the generally accepted principle of information freedom
  • Possibility of information hazards needs to be considered when making information policies
  • Typology of Information Hazards
    • By information transfer mode
      • Data hazard
      • Idea hazard
      • Attention hazard
      • Template hazard
      • Signaling hazard
      • Evocation hazard
    • By effect
      • Adversarial risks - competitiveness hazard
        • Enemy hazard
        • Intellectual property hazard
        • Commitment hazard
        • Knowing-too-much hazard
      • Risks to social organizations and markets
        • Norm hazard
        • Information asymmetry hazard
        • Unveiling hazard
        • Recognition hazard
      • Risks of irrationality and error
        • Ideological hazard
        • Distraction and temptation hazard
        • Role model hazard
        • Biasing hazard
        • De-biasing hazard
        • Neuropsychological hazard
        • Information burying hazard
      • Risk to valuable states and activities
        • Psychological reaction hazard
          • Disappointment hazard
          • Spoiler hazard
          • Mindset hazard
        • Belief-constituted value hazard
          • Embarrassment hazard
      • Risks from information technology systems
        • Information system hazard
          • Information infrastructure failure hazard
          • Information infrastructure misuse hazard
          • Artificial intelligence hazard
      • Risks from development
        • Development hazard

Information Hazards: A Typology of Potential Harms From Knowledge

Abstract

  • Information hazards are risks that arise from the potential dissemination of true information
  • May cause harm, or may enable some agent to cause harm
  • Subtler than direct threats
  • This paper proposes a taxonomy

Introduction

  • Commonly held presumption in favor of knowledge, truth and the uncovering and dissemination of information
  • Even reactionaries don't oppose the revealing of information - they support truth, but have a different idea of what the truth is
  • Although no one makes a case for general ignorance, there are many special cases where ignorance is deliberately cultivated
    • National security
    • Sexual innocence
    • Jury impartiality
    • Anonymity for patients, clients, voters, etc
    • Suspense in films and novels
    • Measuring the placebo effect
    • Creating mental challenges in games and studying
  • Assume that objective truth exists and that humans can know these truths
  • Not concerned with false information
  • Therefore, an information hazard can be defined as: "A risk that arises from the dissemination or potential dissemination of (true) information that may cause harm or enable some agent to cause harm"
  • Relative to their significance, some classes of information hazard are unduly neglected
  • Create a vocabulary of information hazards to allow the examination of easily overlooked risks
  • Create a catalog of some of the various ways in which information can cause harm

Six Information Transfer Modes

  • Distinguish several different information formats, or "modes" of idea transfer
  • Data hazard: specific data, such as the genetic sequence of a lethal pathogen or a blueprint for a nuclear weapon, that, if disseminated, creates risk
  • Idea hazard: a general idea, if disseminated, creates a risk even without a data-rich detailed specification
    • Example: the idea that nuclear fission can be used to create a weapon is an idea hazard, even though it's not a detailed blueprint of a nuclear bomb
    • Even a mere demonstration can be an idea hazard, insofar as it shows an agent that a particular harmful thing is possible to create
  • Attention hazard: the mere drawing of attention to some particularly potent or relevant ideas or data increases risk, even when these ideas or data have already been published
    • There are countless avenues for doing harm, not all of which are equally viable
    • Adversary faces a search task
    • Anything that makes this search task easier can be an infohazard
    • Example: an adversary may look at the way in which we construct our defenses to see what we're worried about
    • Attempts to suppress attention often backfire by letting people know that they should pay attention to the thing that is being suppressed
    • Even thinking about a topic may not be entirely harmless, since once one has a good idea, one will be tempted to share it
  • Template Hazard: the presentation of a template enables distinctive modes of information transfer, and thereby creates risk
    • Risk of a "bad role model"
    • Risks caused by implicit forms of information processing or organization structure
  • Signaling hazard: Verbal and non-verbal actions can indirectly transmit information about some hidden quality of the sender, and this social signalling can create risk
    • Academics might stay away from topics, or adopt excessive formalism in dealing with topics that are attractive to crackpots
    • Individual thinkers suffer reputational damage just from being in the field
  • Evocation hazard: Risk that the particular mode of presentation can cause undesirable mental states and processes
    • Vivid description of an event can trigger mental processes that lie dormant when the same event is described in dry prose

Adversarial Risks

  • Enemy hazard: By obtaining information, our enemy or potential enemy becomes stronger and increases the threat that they pose
    • National security - everything from counterintelligence to camouflage is aimed at reducing the amount of information available to the enemy
    • Depends on the existence of valuable information that the enemy might obtain
    • Our own activities can be hazardous if they contribute to the production of such information
      • Example: in World War 2, even though the Allies and the Axis powers invented chaff independently of each other, neither used it immediately, because they were afraid of revealing the existence of radar-disrupting countermeasures to the other side
    • Rational strategy for military research would give significant consideration to enemy hazard
      • Example: US should be careful about pursuing research into EMP weapons that affect electronics, because the US is more dependent on electronics than its adversaries
      • This is really fancy language for, "People in glass houses shouldn't build trebuchets."
    • Even when new technologies would not differentially benefit enemies, there can still be an advantage in intentionally retarding military progress
      • Suppose a country has a great lead in military power and technology
      • By investing heavily in military research, it could increase its lead and further enhance its security somewhat
      • But if the rate of information leakage is a function of the size of the technological gap between the nation and its enemies, then the farther ahead a nation gets, the faster its adversaries catch up (a toy numerical sketch of this dynamic appears at the end of this section)
      • Thus military research mostly serves to accelerate both nations' ascent in military technology and to make wars more destructive
      • Accelerating the ascent of the military tech tree is especially bad if the tree is of finite height, and the leader runs out of opportunities for innovation at some point
  • Competitiveness Hazard: There is a risk that by obtaining information, some competitor of ours will become stronger, thereby weakening our competitive position
    • In competitive situations, one person's information can cause harm to another even when no intent to cause harm is present
    • Example: rival knows more and gets the job that you were applying for
    • How is this an infohazard?
  • Intellectual Property Hazard: A faces the risk that some other firm B will obtain A's intellectual property, thereby weakening A's competitive position
    • Competitors can gain valuable information by
      • Observing production and marketing methods
      • Reverse-engineering products
      • Recruiting employees
    • Firms go to great lengths to protect their intellectual property
      • Patents
      • Copyright
      • Non-disclosure agreements
      • Physical security
      • Compensation schemes that discourage turnover
    • Is a special case of competitiveness hazard
  • Commitment Hazard: Risk that the obtainment of some information will weaken one's ability to credibly commit to some course of action
    • Example: blackmail
      • As long as the target is unaware of the threat, they are not affected
      • As soon as the target is made aware of the threat, their ability to commit to a course of action that the blackmailer does not want them to take is weakened
    • In some situations it can be advantageous to make a probabilistic threat
      • Thomas Schelling - "threat that leaves something to chance"
      • Instead of threatening to launch a nuclear attack, you threaten to increase the chances of an attack occurring, by putting your forces on high alert or engaging in conventional war
      • Theory is that you can't credibly threaten to deliberately launch an attack, but you can threaten to make an accidental launch more likely
      • However, if information is revealed that dispels the uncertainty, the effect of the probabilistic threat is reduced
  • Knowing-too-much hazard:
    • Knowledge makes someone your adversary
    • Example: Stalin's wife
      • Committed suicide
      • Death was attributed to appendicitis
      • Doctors who knew the true cause of death found themselves targeted and executed
    • Pol Pot targeted the entire intellectual class of Cambodia for extermination
    • The mere possession of true knowledge can make you a target for those who wish to suppress that truth
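
  • Aside: the "leakage grows with the gap" argument above lends itself to a quick numerical illustration. The sketch below is my own toy model, not anything specified in the paper: the research rate r, the leakage coefficient k, and the simulate() helper are all made-up names and parameters

```python
# Toy model (my own, not from the paper): the leader researches at rate r,
# and knowledge leaks to the follower at a rate proportional to the gap.
def simulate(r, k=0.2, steps=200, dt=0.1):
    """r: leader's research rate, k: leakage coefficient (both hypothetical)."""
    leader = follower = 0.0
    for _ in range(steps):
        gap = leader - follower
        leader += r * dt              # leader keeps innovating
        follower += k * gap * dt      # follower catches up faster the bigger the gap
    return leader, follower

k = 0.2
for r in (1.0, 2.0):
    leader, follower = simulate(r, k)
    print(f"r={r}: leader={leader:.1f}, follower={follower:.1f}, "
          f"gap={leader - follower:.1f} (saturates near r/k = {r / k:.1f})")
```

  • In this toy version, raising the research rate raises the equilibrium lead, but it also raises how fast the follower climbs - the leader never pulls away indefinitely, it just drags both sides up the tech tree faster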

Risks To Social Organizations And Markets

  • Information can sometimes damage parts of our social environment, such as cultures, norms and markets
  • This can damage some agents without necessarily benefiting their adversaries
  • Norm Hazard: some social norms depend on a coordination of beliefs or expectations among many subjects; a risk is posed by information that could disrupt these expectations for the worse
    • Information that alters the expectations that people have of the way others will behave can change their own behavior
    • This can be a move into a worse social equilibrium
    • Locally suboptimal policies are often justified as a price worth paying in order to protect norms that serve to block a slide into a worse equilibrium
    • One can object to certain judicial decisions because of the precedent they set, rather than on the basis of the decision itself
    • While it is obvious how false information can damage norms, norms can also be damaged by true information
      • Self-fulfilling prophecies
        • People act more honestly if they believe they are in an honest society and more corruptly if they believe they are in a corrupt society
      • Information cascades
        • Agents make decisions in sequence
        • Each agent, in addition to some noisy private information, has the ability to observe the choices of the agents in front of him or her in the queue
        • If the first agent makes a poor choice, it biases subsequent agents into also making poor choices
        • The effect gets amplified as more and more agents follow the crowd
        • Accounts for faddish behavior in many fields (see the cascade simulation sketch at the end of this section)
  • Information asymmetry hazard: when one party to a transaction has potential to gain information that others lack, market failure may result
    • Example: "lemon market"
      • Sellers of suboptimal goods are more likely to sell
      • Buyers know this and subsequently offer lower prices accordingly
      • Sellers of optimal goods are therefore less likely to sell, since they're not getting a fair price
      • As a result, the market is dominated by suboptimal goods
    • Example: insurance and genetic testing
      • Buyers know more about their health than their insurers
      • Therefore, the buyers at greatest risk of illness buy insurance
      • Anticipating this, insurance companies raise premiums, leading to an adverse selection spiral that causes the market to collapse
      • As a result, it can be beneficial for neither buyers nor insurance companies to know genetic risks (see the adverse-selection sketch at the end of this section)
  • Unveiling Hazard: The functioning of some markets, and the support for some social policies, depends on the existence of a shared "veil of ignorance"; and the lifting of the veil can undermine those markets and policies
    • Example: insurance (again)
      • You're not going to buy insurance against a loss that you are certain will not occur
      • No insurer is going to sell you insurance against a loss they are certain will occur
      • Insurance only works because both you and the insurance company are uncertain about whether a loss will or will not occur
    • Rawlsian political philosophy
      • Selfish people choose policies that favor their own self-interest because they know their own race, social class, occupation, etc
      • If social policies had to be chosen from behind a veil of ignorance, they would be more fair
      • Elites might be less likely to support a social safety net if they could be certain that neither they nor their descendants would ever have to make use of it
      • Support for freedom of speech might weaken if people knew with certainty that they would never find themselves as a member of a persecuted minority
    • In an iterated prisoner's dilemma, the cooperative equilibrium unravels by backward induction if the agents know exactly how many rounds there will be
  • Recognition Hazard: some social fiction depends on some shared knowledge not becoming common knowledge or not being publicly acknowledged; public releases of information could ruin the pretense
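
The information-cascade mechanism above lends itself to a toy simulation. The sketch below is purely illustrative (the model and all parameter names are assumptions, not from the source): each agent gets a private signal that is usually correct, sees every earlier choice, and goes with the observed majority whenever it outweighs their own signal, so a couple of unlucky early choices can lock the whole queue into the worse option. It simplifies the standard cascade model by counting votes rather than doing Bayesian updating.

```python
import random

def run_cascade(n_agents=30, signal_accuracy=0.7, true_best="A", seed=3):
    """Toy information-cascade simulation (illustrative only)."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: correct with probability `signal_accuracy`.
        signal = true_best if rng.random() < signal_accuracy else ("B" if true_best == "A" else "A")
        # Tally observed earlier choices plus the agent's own signal.
        a_votes = choices.count("A") + (signal == "A")
        b_votes = choices.count("B") + (signal == "B")
        if a_votes > b_votes:
            choices.append("A")
        elif b_votes > a_votes:
            choices.append("B")
        else:
            choices.append(signal)  # tie: follow your own signal
    return choices

if __name__ == "__main__":
    # Once two early agents agree, everyone after them follows the crowd,
    # regardless of their own (mostly accurate) private signals.
    print(run_cascade())
```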
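
Similarly, the lemon-market unraveling can be sketched in a few lines (again illustrative, with made-up numbers; the classic model is more careful about sellers' reservation prices). Buyers offer the average value of whatever is still for sale; sellers holding goods worth more than the offer withdraw; the average falls, and the loop repeats until only the lowest-quality goods remain.

```python
def lemon_market(qualities, max_rounds=20):
    """Toy adverse-selection loop: buyers offer the average quality of the
    goods still on the market, and sellers above that offer drop out."""
    on_market = sorted(qualities)
    for _ in range(max_rounds):
        if not on_market:
            break
        offer = sum(on_market) / len(on_market)
        remaining = [q for q in on_market if q <= offer]
        if remaining == on_market:  # nobody dropped out; market is stable
            break
        on_market = remaining
    return on_market

if __name__ == "__main__":
    # Goods worth 1..10 unravel until only the cheapest lemon is left.
    print(lemon_market(list(range(1, 11))))  # -> [1]
```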

Risks of Irrationality and Error

  • Ideological Hazard: An idea might, by entering into an ecology populated by other ideas, interact in ways which, in the context of extant institutional and social structures, produce a harmful outcome, even in the absence of any intention to harm
    • Example: scriptural doctrine
      • Scripture S contains an injunction to drink sea water
      • Bob believes that everything in S is true
      • Thus, informing Bob about the true contents of S causes him harm by inducing him to drink sea water
    • Ideological hazard causes harm by leading someone in a bad direction through the interaction of true knowledge with existing false beliefs or incomplete knowledge
  • Distraction and temptation hazards: Information can harm us by distracting us or presenting us with temptations
    • Humans are not perfectly rational
    • Humans do not have perfect self-control
    • Some information involuntarily draws our attention to some idea when we would prefer that we focus our minds elsewhere
    • A recovering alcoholic can be harmed by a detailed description of wine
    • In the future, virtual reality environments and informational hyper-stimuli might be as addicting as drugs
  • Role model hazard: we can be corrupted and deformed by exposure to bad role models
    • Even if we know a model is bad, we can still be influenced by it via prolonged exposure
    • Subjective well-being and even body mass are significantly influenced by peers
  • Biasing hazard: When we are already biased, we can be led further away from the truth by information that amplifies or triggers our biases
    • Cognitive biases can be aggravated by the provision of certain kinds of data
    • Overestimation of one's own abilities can be aggravated by a good performance on an easy task
    • Even knowledge of biases and logical fallacies can be harmful, because it gives the person useful counterarguments with which to rebut challenging facts
  • Debiasing hazard: when biases have individual or social benefits, harm can result from information that erodes those biases
    • Strong belief in our own abilities signals confidence and competence, making us more effective leaders
    • Information that undermines that belief can deprive us of those benefits
    • Possible that society benefits from excess individual risk-taking in some disciplines
      • The overestimation of the chances of success by inventors and entrepreneurs may have positive externalities for society as a whole
  • Neuropsychological hazard: Information might have negative effects on our psyches because of the particular ways in which our brains are structured, effects that would not arise in more "idealized" cognitive architectures
    • Neurological problems that arise from too much "cross-talk" between different parts of the brain
    • Photosensitive epilepsy
  • Information Burying Hazard: Irrelevant information can make relevant information harder to find, thereby increasing search costs for agents with limited computational resources
    • Steganography
    • Hiding incriminating evidence inside masses of trivial documents

Risks To Valuable States and Activities

  • Psychological reaction hazard: information can reduce well-being by causing sadness, disappointment or some other negative reaction in the receiver
  • Belief constituted value hazard: if some component of well-being depends constitutively on epistemic or attentional states, then information that affects those states might thereby directly impact well-being
  • Disappointment hazard: Our emotional well-being can be adversely affected by the receipt of bad news
    • Example: a mother on her deathbed, with her son fighting in a war
      • If the son is killed or injured, the mother faces a severe disappointment hazard
        • If she learns about her son's fate before she dies, she will spend her last days in despair
        • If she does not, she will die in peace
  • Spoiler hazard: Fun that depends on ignorance and suspense is at risk of being destroyed by premature disclosure of truth
  • Mindset hazard: Our basic attitude or mindset might change in undesirable ways as a consequence of exposure to information of certain kinds
    • Unwanted cynicism promoted by an excess of knowledge about the dark side of human affairs
    • Historical knowledge sapping artistic and cultural innovation
    • Scientific reductionism despoiling life of its mystery and wonder
  • How do we distinguish belief constituted value hazard from psychological reaction hazard?
    • It might be valuable for someone to risk psychological reaction, because of broader values
    • One might hold that life lived in ignorance is a life made worse, even when that ignorance shields one from painful realities
    • One might also hold that there is some knowledge that makes a negative contribution to well-being
      • We might value innocence for its own sake
      • Privacy
      • We might want to remain ignorant of some details of our friends' or our parents' lives so that we can think about them in a more appropriate manner
        • TMI is an infohazard
  • Embarrassment hazard: We may suffer psychological distress or reputational damage as a result of embarrassing facts about ourselves being disclosed
    • Often similar to and take the form of signaling hazards
    • Combine elements of psychological reaction hazard, belief constituted value hazard, and competitiveness hazard
    • Self-esteem is not a wholly private matter, but is also a social signal that influences others' opinions of us
    • Risk of embarrassment can suppress frank discussion
    • Embarrassments that affect reputation and brand names of corporations can cause billions of dollars in damage
    • During the Cold War, the prolongation of both the Vietnam War (on the US side) and the Afghan War (on the Soviet side) was partly due to the respective side being unwilling to suffer the embarrassment cost of admitting defeat

Risks from information technology systems

  • Information technology systems are vulnerable to unintentionally disruptive input sequences or system interactions, as well as to attacks by determined hackers
  • Information system hazard: The behavior of some (non-human) information system can be adversely affected by some informational inputs or system interactions
    • Can be subdivided in various ways
    • Information infrastructure failure hazard: the risk that some information system will malfunction, either accidentally or as a result of cyber attack; as a consequence, the owners or users of the system may be harmed or inconvenienced, third parties whose welfare depends on the system may be harmed, or the malfunction might propagate through some dependent network, causing a wider disturbance
      • Most attention is given to information infrastructure failure hazard
    • Information infrastructure misuse hazard: Risk that some information system, while functioning according to specifications, will service some harmful purpose and will facilitate the achievement of said purpose by providing useful information infrastructure
      • Example: government or private databases that collect large amounts of data about citizens might make it easier for a future dictator to gain and maintain control
      • Building such a database might, in addition, establish a norm that makes it easier for other, more harmful, governments to do the same thing
    • Robot hazard: Risks that derive substantially from the physical capabilities of a robotic system
      • If a Predator drone with armed missiles gets hacked or malfunctions, that's a robot hazard
    • Artificial intelligence hazard: Computer related risks in which the threat would derive primarily from the cognitive sophistication of the program rather than specific properties of any actuators to which the system initially has access
      • A superintelligent AI, even if initially restricted to interacting with human gatekeepers via a text interface, might hack or talk its way out of confinement
      • The threat posed by a sufficiently advanced AI may depend more on its cognitive capabilities and its goal architecture than on the physical capabilities with which it is initially endowed

Risks From Development

  • Development hazard: Progress in some field of knowledge can lead to enhanced technological, organizational or economic capabilities, which can produce negative consequences
    • After Hiroshima and Nagasaki, the physicists of the Manhattan Project found themselves complicit in the deaths of over 200,000 people
    • Given the example of the Manhattan project, it is no longer morally viable to proceed with research without thinking about its potential consequences
      • Biotech
      • Nanotech
      • Surveillance systems
    • The broad and interdisciplinary nature of modern scientific advances means that even innocuous looking advances may have implications for development hazard

Discussion

  • The catalog of information hazards detailed above can help inform our choices by highlighting the sometimes subtle ways in which even true information can have harmful effects
  • In many cases, the best response to an infohazard is no response
    • Benefits of information so far outweigh the costs of information hazards that we still underinvest in information gathering
    • Ignorance carries dangers that are often greater than knowledge
  • Mitigation need not take the form of an active attempt to suppress information
    • Invest less in research in certain areas
    • Refrain from reading about spoilers by avoiding reviews and plot summaries
  • Sometimes an information hazard is caused by partial information, so the solution to the information hazard is more information, not less
  • Historically, policies that have restricted information have served special interests
  • At the same time, we should recognize that knowledge and information frequently have downsides
  • We should be more cognizant of which areas of knowledge should be promoted, which should be left fallow, and which should be actively impeded
  • Indeed, the discussion of information hazards itself can be a norm hazard, if it undermines the fragile norms allowing for truth-seeking and truth-reporting

The Hazard of Concealing Risk

  • Concealing information can produce risk
  • Man Made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility
  • In the disasters studied, the hiding of information contributed to making them possible and hindered rescue and recovery
  • Book focuses mainly on technological disasters, such as Vajont Dam, Three Mile Island, Bhopal, Chernobyl, etc, but also covers financial disasters, military disasters, production failures, and concealment of product risk
  • In all cases, there was concealment going on at multiple levels
  • Many patterns of information concealment occur again and again
  • 5 major clusters of information concealment
    • External environment enticing concealment
    • Risk communication channels blocked
    • Internal ecology stimulating concealment or ignorance
    • Faulty risk assessment and knowledge management
    • People having personal incentives to conceal
  • Systemic problem - one or two of these factors can be counteracted by good risk management, but when you get more, the causes become much more difficult to deal with
  • Causes work to corrode the risk management ability of the organization
  • Once risks are hidden, it becomes much more difficult to manage them
  • However, risk concealment is something that can be counteracted
  • The authors apply their model to some technologies that they think show signs of risk concealment
    • Shale energy
    • GMOs
    • Debt and liabilities of US and China
  • Patterns of concealment don't predict imminent disaster, but will make things worse if/when a disaster occurs
  • Information concealment is not the cause of all disasters
    • Some disasters are due to exogenous shocks or truly unexpected failures of technology
    • However, risk concealment can make preparation brittle and recovery inefficient
  • No evidence to indicate that the examined disasters were uniquely bad from a concealment perspective - a lot of the time, organizations and individuals simply get away with concealing risk
  • Book is an important rejoinder to the concept of information hazards
    • Some information can be risky
    • But ignorance can be just as risky
  • Institutional secrecy is intended to contain information hazards, but can compartmentalize and block relevant information flows
  • A proper information hazard strategy needs to take into account concealment risk

The Wonderful Thing About Triggers

  • Scott likes trigger warnings
  • Trigger warnings aren't censorship
  • Opposite of censorship
    • Censorship says, "Read what we tell you"
    • Trigger warnings allow you to read what you want
    • Scott doesn't understand what censorship is - censorship isn't about telling people what they should or should not read, it's about suppressing ideas
  • We should give people relevant information and trust them to make their own decisions
  • Trigger warnings attempt to provide you with the information to make good free choices about your reading material
  • Analogy with book titles
    • We print the titles of books on the outsides
    • People can, and do, judge books by their titles
    • Our decision to print titles on the outsides of books means that we care more about trusting people's judgement than denying people the ability to avoid things they don't want to read
  • "Beware he who would deny you access to information, for, in his heart, he dreams himself your master."
  • Trigger warnings allow us to fight censorship by arguing that those who chose to engage with our ideas do so with the full knowledge that they might find what we have to say offensive
  • People can misuse trigger warnings to avoid engaging with challenging ideas, but this is a problem any time you provide people with more information - they might use it the wrong way
  • However, people might also use trigger warnings to increase their ability to read challenging material - choose to engage with arguments that don't try to offend them
  • Do we, as a civilization, force people to be virtuous without their consent?
    • Not any more, which is the crux of so many "hot-button" disagreements
    • On topics like gay marriage, abortion, adultery, blue laws for alcohol, drug policy, and many other issues, society did (and to some extent, still does) force people to be virtuous
  • The strongest argument that Scott's heard against trigger warnings is that they increase politicization
    • Colleges put trigger warnings on everything that can offend liberals, but get outraged when conservatives ask for trigger warnings for things that offend them
    • The solution to this is to put trigger warnings in small print on the "bullshit page" - the page with the publisher and copyright information
      • This might be a solution for books, but what about all the other places where SJWs want trigger warnings, like class syllabi, blog posts, news articles, etc
      • Also, what is the set of triggers that have to be warned about? SJWs have literally asked for trigger warnings on opinions like, 'There are only two genders'
    • Trigger warnings can be helpful, if used in good faith
      • That is exactly the problem though - the people who are calling for trigger warnings are not acting in good faith - they're trying to put yellow radiation trefoils on anything that opposes their political agenda
      • It's a motte-and-bailey argument: the bailey is SJWs going around putting trigger warnings on anything that's insufficiently leftist. But when challenged, they retreat to the motte of saying, "But we should warn people about graphic rape scenes, because that might trigger people's PTSD."
  • Opposing trigger warnings on slippery slope grounds just serves to discredit you, while being completely ineffective in the long run
    • Example: gay marriage
      • Conservatives said there was nothing inherently objectionable about gay marriage
      • Argued that it was the first step along a slippery slope to worse things
      • But when gay marriage passed, and society didn't take any further steps, conservatives were shown to be both ineffectual and wrong
      • Now even valid arguments on the basis of "family values" can be rejected, because people will pattern-match them to the arguments against gay marriage
  • The real problem that Scott has is with the argument that trigger warnings should be avoided in order to force people with PTSD to confront their triggers
    • You do not give people psychotherapy without their consent
    • Even if you can argue consent, people want to confront triggers at their own pace and on their own terms
  • My problem is that the word "triggered" has become totally devalued by social justice types on Tumblr
    • "Triggered", in the way that Scott uses it, means a strong reaction that can be harmful or even debilitating to a person
    • "Triggered", in the way that Tumblr SJWs use it means "mild discomfort or offense"
    • Scott doesn't seem to understand the role of weaponized weakness and performative victimhood as tactics used by the social justice movement in order to advance political goals
    • Unfortunately, calls of trigger warnings have become irretrievably associated with the tactics of weaponized vulnerability and performative victimhood, and thus, just like conservative opposition to gay marriage, SJW calls for trigger warnings have become discredited
    • /

Roko's Basilisk

  • Introduction
    • Roko's Basilisk is a thought experiment proposed in 2010 by the user Roko on the LessWrong community forum
    • Used ideas from decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who had imagined the agent but had not worked to bring it into existence
    • Called a basilisk because merely hearing the argument would put you at risk of torture from this hypothetical agent
    • Argument was broadly rejected on LessWrong
      • A basilisk-like agent would have no incentive to follow through on its threats
      • Torturing people for past decisions would be a waste of resources, since once an agent is in existence, the probability of its existence is 1
      • Although there are acausal decision theories that allow entities to follow through on acausal threats, these require a large amount of shared information and trust, which does not apply in this case
    • Discussion of Roko's Basilisk was banned as part of a general site policy against spreading potential information hazards
      • Had the opposite of the intended effect
      • Outside websites began sharing information about Roko's Basilisk
      • People assumed that discussion had been banned because LessWrong users accepted the argument
      • Used as evidence to show that LessWrong users have unconventional and wrong-headed beliefs
  • Background
    • Roko's argument ties together Newcomblike problems in decision theory with normative uncertainty in moral philosophy
    • Example of a Newcomblike-problem: Prisoner's Dilemma
      • Each player prefers to defect individually while the other player cooperates
      • Each player prefers mutual cooperation over mutual defection
    • One of the basic problems in decision theory is that "rational" agents will end up defecting against each other, even though it would make both players better off to have a binding cooperation agreement
    • Extreme version of a prisoner's dilemma - playing against an identical copy of oneself
      • It's certain that both copies will play the same move - only choices are mutual cooperation or mutual defection
      • Causal decision theory endorses mutual defection
        • Assumes that agents' choices are independent
        • Regardless of what the other copy does, it is in this copy's best interest to defect
        • Since defection is the better response whether the other copy cooperates or defects, CDT concludes that defection dominates (see the payoff sketch at the end of this section)
    • Eliezer Yudkowsky proposed an alternative to Causal Decision Theory, Timeless Decision Theory, that can achieve cooperation in prisoner's dilemmas, provided that each player knows that the other is running TDT
    • Wei Dai subsequently proposed a theory that outperforms both TDT and CDT, Updateless Decision Theory (UDT)
    • Interest in decision theory stems from AI control problem - how can we gain high confidence in AI agents' reasoning and decision-making, even if they've surpassed us in intelligence?
    • Without a full understanding of decision theory, we risk making AI agents whose behavior is difficult to model or erratic
    • AI Control Problem also raises moral philosophy questions: how can we specify the goals of an autonomous system in the face of uncertainty about what it is we actually want?
      • Hypothetical algorithm that could autonomously pursue human goals in a way compatible with moral progress: coherent extrapolated volition
    • Because of Eliezer's status as a founding member of LessWrong, AI theory and "acausal" decision theories have been repeatedly discussed
    • Roko's post was an attempt to use Yudkowsky's proposed decision theory to argue against his characterization of an ideal AI goal (coherent extrapolated volition)
  • Roko's post
    • If two TDT or UDT agents with common knowledge of each others' source code are separated in time, the later agent can seemingly blackmail the earlier agent
      • Earlier agent: Alice
      • Later agent: Bob
      • Bob's algorithm outputs things that Alice likes if Alice leaves Bob a large sum of money, and things that Alice dislikes otherwise
      • Since Alice knows Bob's source code, she knows this fact about Bob, even though Bob doesn't exist yet
      • If Alice is certain that Bob will someday exist, then her knowledge of what Bob would do seems to force Alice to comply
    • CDT is immune to this
      • CDT agents assume that their decisions are independent
      • CDT Bob would not waste resources punishing a decision that has already happened
    • Roko proposed that a highly moral AI agent (one whose actions are perfectly consistent with coherent extrapolated volition) would want to be created as soon as possible
    • Such an AI agent would use acausal blackmail to give humans stronger incentives to create it
    • The AI agent would target in particular people who had thought about this argument, because they would have a better chance of mentally simulating the AI's source code
    • Conclusion: any AI agent that reasons like a utilitarian optimizing for humanity's coherently extrapolated values would be paradoxically detrimental to those values
    • Response from Eliezer
      • The AI agent would gain nothing from following through with its threats because it would be wasting resources punishing humanity for a decision that already had taken place
      • Moreover, the agent has an even better outcome: make you believe that it's going to torture you in the future, and then not expend resources on that
      • So, given that, why should we believe a basilisk-like agent's threats?
    • Subsequent discussion of the basilisk post has had more to do with the moderator response to Roko's post, rather than on the specific merits of the argument
  • Topic moderation and response
    • Yudkowsky deleted Roko's post and the ensuing discussion
    • Yudkowsky rejected the idea that Roko's Basilisk could be considered a friendly AI in any way, asserting that even threatened torture would be contrary to humanity's coherent extrapolated volition
    • The deletion and the apparently strong response to the basilisk post caused others to assume that LessWrong users took the threat of Roko's basilisk seriously
    • In addition, the ban prevented people from seeing the original argument, leading to a wealth of secondhand, sometimes distorted, interpretations of the argument
    • Gwern says that few LessWrong users took the Basilisk seriously, and that everyone seems to know of someone affected by the Basilisk without actually knowing any such person themselves
      • Like the Band of Brothers quote: "It's funny how when you talk to people about it, everyone claims they heard it from someone who was there, and then when you go ask that person, they claim they heard it from someone who was there"
    • Eliezer claims to have deleted the post not because the post itself was an infohazard, but because there may be some variant of the idea of a basilisk that is a real infohazard
      • There is no upside from being exposed to Roko's Basilisk, so the probability of it being true is irrelevant
      • Was indignant that Roko had violated the basic ethical code for handling infohazards
  • Big-picture questions
    • Blackmail resistant decision theories
      • The general ability to cooperate in prisoners' dilemmas appears to be useful
      • Introducing more sophisticated forms of contracts to ensure cooperation appears to be a beneficial thing to do
      • At the same time, these contracts introduce new opportunities for blackmail
      • If an agent can pre-commit to following through on a promise, even when following through is no longer in the agent's best interest, it can also pre-commit to following through on a costly threat
      • It appears that the best way to defeat this blackmail is to precommit to never giving in to any of the blackmailer's demands, even when there are short-term advantages to doing so
      • Stick to the action that is recommended by the most generally useful policy (which is what UDT advises)
        • UDT selects the best available mapping of observations to actions (policy) rather than the best available action
        • Avoids selecting a strategy that other agents will have an especially easy time manipulating
      • It has not been formally demonstrated that decision theories are vulnerable to blackmail, nor do we know in what circumstances a particular decision theory would be vulnerable
      • If TDT or UDT were vulnerable to blackmail, then this would suggest that they are not normatively optimal decision theories
    • Information hazards
      • David Langford coined the term "basilisk", in the infohazard sense, in the 1988 science fiction story BLIT
      • Roko's basilisk incident suggests that information that is deemed dangerous or taboo spreads more rapidly
      • Although Roko's basilisk was not harmful, real infohazards may spread in a similar way
      • Non-specialists spread the idea of Roko's basilisk without first investigating the risks or the benefits in any serious way
      • Someone in possession of an infohazard should exercise caution in visibly suppressing it
    • "Weirdness points"
      • Promoting or talking about too many nonstandard ideas makes it less likely that any one of those ideas will be taken seriously
      • If you promote too many weird ideas, a skeptical interlocutor will write you off as just being prone to weird ideas
      • On the other hand, promoting weird ideas can help form a community that is interested in those weird ideas, whereas associating with people who solely endorse conventional ideas can just alienate you when you do spend all your "weirdness points" on that one idea
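
A small payoff sketch of the twin prisoner's dilemma from the Background subsection above. The payoff numbers are the usual illustrative values, not anything from the source, and the "same-decision" function is only a cartoon of the TDT/UDT idea, not a faithful implementation: CDT-style reasoning holds the twin's move fixed and finds defection better in every case, while reasoning that conditions on both copies making the same move compares only the diagonal outcomes and cooperates.

```python
# Payoff to "me" as a function of (my_move, twin_move); illustrative numbers.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def cdt_style_choice():
    """Hold the twin's move fixed: D beats C against a cooperator (5 > 3)
    and against a defector (1 > 0), so defection dominates."""
    best_vs_c = max(("C", "D"), key=lambda m: PAYOFF[(m, "C")])
    best_vs_d = max(("C", "D"), key=lambda m: PAYOFF[(m, "D")])
    assert best_vs_c == best_vs_d
    return best_vs_c

def same_decision_choice():
    """Assume the identical copy makes the same move, so only (C, C) and
    (D, D) are reachable; cooperation wins (3 > 1)."""
    return max(("C", "D"), key=lambda m: PAYOFF[(m, m)])

if __name__ == "__main__":
    print("CDT-style choice:    ", cdt_style_choice())      # D
    print("Same-decision choice:", same_decision_choice())  # C
```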

2018-02-19 RRG Reading Notes

In Favor of Niceness, Community and Civilization:

  • In order to win, do we have to embrace "politics is the mindkiller" and "arguments are soldiers"?
  • After all, if we're about winning, then we should be willing to do whatever it takes to win
  • If a fight is important, be ready to fight nasty
  • If politics is war, why not use bullets, both real and rhetorical?
  • So why not use violence?
    • Most of this is derivable from Hobbes
    • Example: Protestants vs. Catholics
      • Start with outright war between Protestants and Catholics
      • This isn't sustainable - neither side can annihilate the other
      • So Protestants and Catholics compromise and agree to form a government
        • In the specific case, this is called the Good Friday Agreement
        • In the general case, this is just civilization
      • Unfortunately, this just moves the conflict up a level - Protestants and Catholics are now using the government to try to sabotage each other
      • This is also not sustainable - the war is still going on, it's just going on at a slower pace
      • So, Protestants and Catholics agree not to use the government against each other
        • In the US, this is covered by the First Amendment
        • In the general case, this is known as liberalism
    • Every case in which two sides have agreed to lay down their weapons and abandon total war has corresponded to a huge increase in human flourishing
    • But why is this agreement a stable equilibrium?
  • 2 explanations for why people stop using violence
    • Reciprocal communitarianism
      • Probably how altruism evolved
      • Once a small successful community of collaborators running tit-for-tat starts, others have to either join the community or get outcompeted
    • "Divine Grace"
      • People successfully interact with people with opposing views all the time
      • Catholics and Protestants, Christians and Jews all interact with each other in "normal" society, without going at each others' throats
      • Reading ancient and medieval texts, there is nothing but honor among foes - honorable conduct among Greek and Roman warriors, codes of chivalry, etc
      • The "Christmas Truce" between Allied and German troops in World War 1
      • The problem with all of these examples is that they are the exception, rather than the norm
        • Codes of chivalry were often honored more in the breach than in the observance
        • It's not actually clear how well Homer actually describes ancient combat
        • The Christmas Truce was a one-time thing; by 1915 there was already too much bad blood between front-line units in World War 1 to permit a repeat
  • Most useful social norms exist due to a combination of divine grace and reciprocal communitarianism
    • People lie, but not too much
      • It's very rare that someone makes up numbers out of whole cloth
      • What you usually see is (deliberate) misinterpretations of a particular statistic
      • People know that lying is wrong, and want to be able to hedge by saying, "Well, I didn't technically say something completely false."
  • Groups that are nice places to be attract members and groups that are actively hostile to new members lose them
  • The advantage of liberalism is that it fails gracefully
    • If it turns out that the group you're fighting against is not evil or immoral, liberalism at least allows you the consolation of having treated that group in a civil manner
    • This is opposed to the politics-as-war, where outgroups are persecuted, thus laying the groundwork for reverse persecution when the outgroup gains power
  • So why should we be worried when people on "our side" use unethical tactics to advance causes that we believe in?
    • Those people are undermining the basis of our community
    • Heretics, not heathens
    • Making exceptions for particular outgroups is a great way to undermine the foundations of your community and ensure its collapse
  • Scott names the demiurge of liberalism, Elua, after the god from Kushiel's Avatar

Why The Culture Wins: An Appreciation of Iain M. Banks

  • The Culture is the evolutionary winner among all cultures
  • Banks, unlike other science fiction writers, considered how culture would evolve alongside technology
  • Banks' works don't fall into the normal science fiction fallacy of assuming that advanced technology won't have any cultural impact
  • Most far-future science fiction falls into a thinly veiled re-telling of Gibbon's Decline and Fall of the Roman Empire
  • Banks considers a future in which technological advancement has freed culture from all functional constraints
    • Culture becomes purely memetic
    • A culture is "functional" insofar as it contributes to the creation of material goods and services
    • In order for civilization to produce everything that it needs to produce, certain collective-action problems have to be overcome
    • Functional cultures help civilization overcome these collective action problems
  • Most cultures have a good fit with their environment
  • Cases where cultures do not have a good fit with their environment usually arise when the environment changes (due to political conquest, technological advancement, etc) faster than culture can adapt
  • Human history, so far, has been marked by pluralism among cultures
  • Cultures have, historically, competed with one another, with some cultures becoming larger and more dominant and others fading away
    • Large facets of Roman culture endured long after the Roman Empire had fallen
    • Han culture continues to dominate China, not because the Han empire is still around, but because Han bureaucratic traditions still persist
  • Societies with strong institutions become wealthier and more powerful militarily
  • As a result, there has been convergence with respect to institutional structure
    • Societies everywhere accept market economies
    • Societies everywhere recognize the need for a relatively powerful bureaucratic state
    • This convergence is often falsely described as Westernization, when in reality it's the process of cultural adaptation to capitalism and bureaucracy
    • Cultures that are incompatible with capitalism and bureaucracy are outcompeted and extinguished
  • Because of this convergence, competition between cultures is becoming defunctionalized
    • When comparing the modern "hypercultures" of the US, Europe and Japan, there is little to choose between them, functionally
    • All of these cultures perform about equally well at providing security and material goods for their population
    • As a result, the sole basis of competition is memetics - the ability of the culture to reproduce itself
  • A meme doesn't necessarily have to offer its host any benefits to reproduce
    • Example: chain letter - doesn't provide any benefits, but has a halfway plausible story promising benefits, which causes its recipient to forward it on
    • Example: religion - religions imbue their members with missionary zeal, which allows them to spread, even though that process is costly for the actual missionaries
  • Comparing Confucianism and Christianity
    • Confucianism spread because of its functional qualities
      • One of the first drivers of state formation
      • Led to the creation of an extremely stable bureaucratic state and social structure
      • Spreads directly through the strength of the institutions that it's functionally related to
    • Christianity spread because of memetics
      • Viral properties
      • Much less successful at generating stable states
      • The properties of Christianity that allowed it to take over the Roman Empire explain its success in many non-Western countries
  • What happens when you take the process that generates modern hypercultures and iterate it forward for another three to four hundred years?
    • Cultures become completely defunctionalized
      • All the endemic problems of human society (war, crime, disease, etc.) have technological solutions
      • Scarcity no longer exists, so there is no longer an obligation for anyone to work
      • Important decisions are made by a benevolent technocracy of AIs
    • The culture that emerges from this process will be the most virulent culture, the one that's best at spreading by appealing to the tastes and sensibilities of humans
  • This is why Horza, the protagonist of Consider Phlebas, dislikes The Culture
    • Horza is a member of the Idiran empire
    • Not an Idiran, but a member of an allied species
    • The Idirans are religious zealots, so why would anyone choose them over The Culture, which is all about peaceful coexistence?
    • The Idirans have a certain depth, or seriousness that is lacking in The Culture
  • Max Weber: Modernity produces "specialists without spirit, sensualists without heart."
    • In The Culture, the specialist roles have all been subsumed by AI
    • The primary appeal of The Culture is the promise of non-stop partying with unlimited sex and drugs
  • The problem with The Culture is that it provides no deeper meaning
  • However, this decadence should not be mistaken for weakness
    • The Idirans thought that The Culture would crumple when faced with a determined assault
  • The Culture has a branch, called Contact, which specializes in interfering in the affairs of newly contacted species
    • Opposite of Star Trek's Prime Directive
    • Interfere in the affairs of newly discovered species to ensure that factions that share the values of The Culture win
    • Thus, the Culture ensures its perpetuation in the guise of ensuring that the "good guys" win
  • The Culture is the ultimate choice-oriented society, and as a result, it suffers from a crisis of meaning
  • What happens when work disappears, and everything turns into a hobby?
  • What happens when you can choose all aspects of your identity, including gender, knowledge, skills, psychology, etc? How do you define your identity?
  • The paradox of freedom is that choices lose meaning
  • Societies have two reactions to the paradox of freedom
    • Neotraditionalism - choose to embrace a traditional identity and traditional roles
    • Affirm freedom itself as the sole meaningful value and work to bring that value to others
  • The latter urge is what defines The Culture
    • "Secular evangelism"
    • The Culture and the Idirans posed existential threats to each other, not because they could physically annihilate each other but because each side's victory would have undermined the very thing that gave meaning to the culture of the other
  • The Culture, on closer examination, is much like The Borg
  • Iain M. Banks' great trick was, in essence, to make us sympathetic to The Borg, and to suggest that modern liberal societies are fundamentally Borg-like

How The West Was Won

  • Bryan Caplan: A Hardy Weed: How Traditionalists Underestimate Western Civ
  • Argues that defenders of Western Civilization don't give Western Civilization enough credit
  • Western Civ manages to spread even in the face of determined opposition, not through war and conquest, but through persuasion
  • The problem is that Caplan isn't really talking about Western Civilization
    • Western Civilization is what existed before the Industrial Revolution
    • Consisted of traditional things like maypoles and copying Latin manuscripts
  • Analogy to "western medicine"
    • Western medicine is just medicine that has been proven to work
    • There's nothing culturally western about it
  • Western culture is no more related to the geographical west than Western medicine
    • Example: Coca Cola: an Ethiopian bean mixed with a Colombian leaf, with lots of carbonated sugar water added
      • There's nothing inherently "western" about Coca Cola, it's just that it happened to be discovered by an American chemist first
      • If a Japanese or Arab chemist had discovered Coca Cola, it would have been just as delicious
    • Example: Gender norms
      • Modern "western" gender norms would be unrecognizable to someone like Cicero or St. Augustine
      • "Western" gender norms sprung up after the Industrial Revolution in order to facilitate the needs of industrial society
      • As other countries industrialize, they will adopt "Western" gender norms, not because they're becoming westernized, but because those norms are more efficient
  • As a result it's more appropriate to call "Western culture" universal culture
  • Culture is a set of useful environmental adaptations, coupled with memetic drift
  • Before the Industrial Revolution, the process of building a culture was long and slow and left plenty of time for local peculiarities to develop
  • The Industrial Revolution caused such a rapid change, however, that the process of culture-building became qualitatively different
    • Frantic search for better adaptations in an environment that's changing faster than society can collectively understand it
    • Erasure of spatial distance
  • Places get inducted into the global universal culture based upon their participation in trade and modern capitalism
  • Universal culture is the only culture that can survive without censorship
    • It is the collection of the most competitive ideas and products
    • Coca Cola spreads because it tastes better
    • Egalitarian gender norms spread because they're more popular and likeable
  • The only reason universal culture hasn't achieved fixation is because of barriers to communication
    • Geography
    • Time
    • Censorship
  • Universal culture is the only culture that can survive high levels of immigration
    • Universal culture is adapted to work in diverse multicultural environments
    • Accomplishes this through social atomization - everyone does mostly their own thing and broader society provides some least-common-denominator functions
    • Because universal culture deals so well with diverse societies, people will increasingly default to universal culture in public whenever there's high levels of immigration
    • It's not that foreigners are assimilating into western culture, it's more that both foreigners and natives are assimilating into a new universal culture
  • Western culture is not the aggressor - the West is as much a victim of universal culture as every other locale
  • There is a certain level of hypocrisy in universalist culture
    • We're okay with small, far-away, or exotic groups trying to maintain their culture
    • But when an outgroup tries to maintain its culture, we treat their religion as superstition, and treat their desire to preserve their culture as xenophobia and racism
  • Conflating universal culture and Western culture legitimizes this double standard - people trying to defend "Western culture" are really trying to defend traditional Western culture from universal culture, not from other cultures
  • Whatever we decide, we should be consistent about it
    • Either we say that universalist culture is better, and we support universalist culture's conquest of traditional cultures elsewhere, just as we support its conquest of traditionalist cultures here
    • Or we say that traditional cultures are better and allow more space for traditionalist cultures at home
  • So is universalist culture better or are traditionalist cultures better?
    • Traditionalist culture
      • Some studies show that people in traditional cultures tend to be happier
      • Correlation between homogeneity and happiness
    • Universalist culture
      • Democracies tend to be happier
      • Complicated but positive relationship between national happiness and wealth
  • The main thing, though, is to abandon this notion that universal culture and western culture are one and the same

Why Do You Hate Elua

  • Elua (liberalism, universal culture, etc) is slowly consuming everything in its path
  • Moreover, Elua appears to be good
  • So why are people attempting so hard to fight Elua?
  • Because Elua can be reprogrammed
  • The machinery of universal culture is people, driven by human goals and values
  • Nationalism is what you get when the machinery of liberalism is reprogrammed with traditionalist values

2018-02-12 RRG Notes

On The Seelie and Unseelie Courts

  • There appear to be two kinds of social reality
    • Type that fixates on dark manipulative aspects - unseelie
      • Brazenness
      • Manipulation
      • "Acceptance of the cesspool of human communication"
    • Type that fixates on light conversational, flow aspects - seelie
      • Niceness
      • Community
      • Civilization
      • Willfully blind to the concept that their passive moves have consequences
    • Neither side contains good people, but both contain good intentions
  • When Seelie and Unseelie meet, there's an implicit unacknowledged struggle
  • The stronger suffocates or stabs the weaker
  • There is a sensation of tongue-tiedness and a change in the conversational flow
  • Do you feel that you are seelie or unseelie
  • My Thoughts
    • Is it possible to be both?
    • I'm willing to be manipulative if it's necessary, but if the other person is willing to cooperate, I'm equally willing to cooperate in return

On Dangerous Technology

  • In the game Stellaris, there are technologies that are considered "dangerous technologies"
    • Researching these technologies is dangerous because
      • They anger other civilizations
      • They can provoke crises within one's own civilization
  • Mindhacking and trying weird things is relatively similar to researching dangerous technologies
    • Example: "sparkliness"
      • Mix of hypomania and introspection
      • Can be directed outwards
      • Combined with an understanding of narrative and social reality
      • Starts to feel like a real thing if other people start validating the intuitions fostered by this practice
      • Drawback: the hypomania can tip into full-blown mania, causing you to lose touch with reality
    • Dangerous technologies are generally defined by high-variance interventions
      • Meditation can be a dangerous technology if you pursue it far enough
      • Some nootropics are dangerous technologies
    • The power of belief is an up-and-coming dangerous technology
      • Placebo effect exists and you can do cool things with it
      • Belief in bulletproofing (i.e. bulletproofing charms made by traditional priests in Africa)
      • Conviction charisma - i.e. "reality distortion field"
  • Not all mindhacks are dangerous
    • Double cruxing
    • Developing normal charisma through practice
    • Various techniques for overcoming bias
    • Most "traditional" rationalist techniques are safe whereas "dangerous technologies" are mostly in the post-rationalist canon
  • Dangerous technologies are appealing because they create outcomes quickly without a lot of effort
  • The problem is that the outcome can be either good or bad
  • They're unproven enough that using them too blatantly tends to alienate the more grounded people around you
  • Discussion questions
    • How does one approach a risk/benefit analysis when one of the risks is going insane?
    • Are nondangerous technologies powerful/proven enough to be worth the additional effort?

On The Tangent Stack

  • How do you keep a conversation going and make it seem fun?
  • Strive to generate responses that give you things to hook into
  • These things are often little tangents in the other person's story or conversation
  • But instead of bringing up the tangent right away, remember it, and then ask a question about it when the conversation lulls
  • Example story
    • Friend gets on the wrong train
    • Ends up in the wrong place entirely
    • Gets off, lost and despairing
    • Goes to a Waffle House to collect herself
    • Approached by strange man in a trenchcoat, who offers tickets to the right destination
    • Tickets work out
    • Friend ends up at their destination only a day late
  • 5 example tangents
    • How did she learn to sleep that deeply on the train
    • What part of North Carolina did she end up in
    • How did she find a Waffle House so fast
    • How did the trenchcoat man make her feel
    • What was the actual destination like
  • All of these tangents can generate further tangents, which can be used to keep the conversation going even longer
  • Simple concept that plays to the strengths of working memory (a toy sketch follows at the end of this section)
  • Not helpful when telling stories, as opposed to asking questions
  • Doesn't give you a way to wind down the conversation when it's time to leave
  • Discussion questions
    • How much of a tangent stack can you maintain
    • What does it feel like to have a tangent stack applied to you?
    • Do you ever have problems with too much conversational flow?
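
A playful sketch of the tangent stack as a literal last-in-first-out stack (the post describes a mental habit, not software, so everything here is illustrative): push hooks as you notice them, pop the most recent one when the conversation lulls.

```python
class TangentStack:
    """Toy model of the tangent stack: remember hooks as they come up,
    raise the most recently noticed one when the conversation lulls."""

    def __init__(self):
        self._tangents = []

    def notice(self, tangent):
        # Don't interrupt; just file the hook away for later.
        self._tangents.append(tangent)

    def lull(self):
        # Pop the most recent tangent, or let the conversation wind down.
        if self._tangents:
            return "Earlier you mentioned " + self._tangents.pop() + " - tell me more?"
        return "Anyway..."

if __name__ == "__main__":
    stack = TangentStack()
    stack.notice("sleeping that deeply on a train")
    stack.notice("the man in the trenchcoat")
    print(stack.lull())  # trenchcoat first: last in, first out
    print(stack.lull())
```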

On Breaking The Script

  • Conversations often fall into scripts
    • Example: "Hi, how are you" "Fine, thank you"
    • Most conversation are script-based
    • Even if the words aren't literally scripted, the responses are often from a set of pre-determined categories
    • Scripts happen because it's socially risky and takes a lot of conversation to break out of a script
    • While scripts are comfortable, it can be profitable to break out of scripts
  • So when looking for a script breaker, what should you keep in mind?
    • Goal
    • Unusualness
    • Accessibility
    • Specificity
    • Audience
    • Playfulness
  • Goal
    • What are you actually after by asking a weird question and controlling the conversational frame?
    • Whenever you do anything deliberate in a conversation, keep in mind your goals
    • Generate tangents to fill your tangent stack
    • Learn about how the person thinks
    • By knowing your goal, you can efficiently turn initial responses into an enjoyable conversation
  • Unusualness
    • How do you make your question a little weird
    • A script-breaker should be unusual enough to not map to an existing question with a cached answer
    • If you were asked the question, would you be able to answer it instantly or would you need to think about it?
  • Accessibility
    • How do you make sure that the person can answer the question?
    • If the question is too esoteric, the other person won't be able to answer your question
  • Specificity
    • Is the question specific enough to be answerable?
    • If you make your question too broad, people will either spit out a rehearsed answer or will freeze up
    • Adding some constraints helps
  • Audience
    • Where are you trying to control the conversational frame?
    • Knowing who you're talking to changes which questions you ask
    • Consider what the other person would find most fun
  • Playfulness
    • Don't forget to have fun and seem fun
    • The point of script breaking is to be fun and spontaneous
    • If people think you're being strategic or attempting to gain an advantage, they'll refuse the question
  • Additional things to pay attention to
    • Delivery
    • Context
    • Consider the guidelines, but don't be tied by them

On The Nature of Hypnosis

  • The commonality between all depictions of hypnosis is focus
  • Hypnosis can be modeled as a focus hijack
    • You're taking someone's focus and directing in one direction, which leaves openings for suggestions to take hold
    • Computers and phones can be seen as a form of hypnosis - hypnotize you to keep clicking and scrolling
    • This view of hypnosis as focus hijack opens up a lot of possibilities in terms of how to set the space for hypnosis, how to create inductions, and how to awaken
  • Inductions
    • Start even before you start talking about hypnosis
    • Starts with your subject being comfortable with you and being open to being put in a trance
    • Any hesitance on the part of the subject is stealing your focus
    • Need absolute trust to allow focus to be yielded fully
    • Once the hypnosis conversation starts, go gradually, and build the structure of being put in a trance in their mind
    • Be clear and honest
    • Make sure they're physically comfortable
    • The actual induction is relatively trivial - give them something to concentrate on and reinforce natural bodily responses
  • Awakeners
    • Opposite of inductions
    • Release someone's attention and allow it to become theirs again
    • Hypnosis can be seen as the opposite of meditation
      • Meditation is about managing your focus yourself
      • Hypnosis is about outsourcing your focus to someone else
    • A good awakener is gentle, slowly raising the subject from their trance
    • Awakeners work well to break screen trances as well
  • The model of hypnosis as focus hijack allows a deeper exploration of what attention is, and how it acts as a resource in modern society
  • This model removes much of the esoterica from hypnosis
    • No magic words
    • No scripts
    • Just guide the subject's focus
    • Don't make suggestions that increase uneasiness
    • Inductions: narrow focus and allow the subject to outsource attention
    • Awakeners: diffuse focus and give the subject their attention back
  • Discussion questions
    • How much does the focus hijack model resonate with your hypnotic experiences
    • If you have been a subject, does this model resonate with your experience of a hypnotic trance?
    • If you're a hypnotist, does it align with the conscious and subconscious decisions you make during a session?
    • What are the gaps and flaws in this model?

2018-02-05 RRG Notes

Politics Is The Mind-Killer

  • In the ancestral environment, politics was a matter of life and death
  • Being on the wrong side of a political argument could get you killed, and being on the right side could let you kill your hated rivals
  • If you want to make a point about science, don't choose examples from contemporary politics; if you have to choose, choose a historical example that isn't likely to cause controversy
  • Politics is an extension of war by other means
    • Arguments are soldiers
    • Once you know which side you're on, you're obligated to attack all arguments of the other side
    • You're obligated to support arguments of your side, regardless of how weak or flawed they are
  • Try to avoid attacking the other side deliberately with political examples
  • My thoughts
    • This is fairly straightforward, and probably one of Eliezer's better essays
    • That said, just because you're uninterested in politics, it doesn't mean that politics is uninterested in you
    • Knowing how to argue in the political arena is an important skill, and one that all too many rationalists have deliberately avoided cultivating, using this essay as their justification

Conflict vs. Mistake

  • Jacobite claims that Marxists assume that ideology will solve all the incentives problems and principal-agent issues that stand in the way of good government
    • This would explain why Marxist governments so often fail
  • Much of the debate in contemporary politics can be treated as a dichotomy between Mistake Theory and Conflict Theory
  • Mistake theorists treat politics as a science or a form of engineering, or medicine
    • State is diseased or broken
    • We need to figure out the best way to cure the disease or fix the state
    • Some ideas are effective, whereas other ideas either wouldn't help or would cause too many side-effects
    • View debate as essential - need to hear all views in order to understand the whole situation
      • Need to create an environment where truth can prevail, regardless of who is right or wrong on any given issue
    • View different sides as symmetrical - both sides include trustworthy experts and trolls, and feel about as strongly about the issue at hand
      • Only difference is which side turns out to be right about the matter at hand
    • Worry about the complicated and paradoxical effects of social engineering
      • Use paradoxes to prove that we can't trust our instincts about social engineering, and need to have lots of research and debate
    • Believe that you can solve social problems by increasing general intelligence
      • Make the people smart enough to choose the most intelligent politicians
      • Make the politicians smart enough to choose the wisest technocrats
      • Make the technocrats smart enough to choose the best policies
    • Views passion as suspect
      • Wrong people can be just as loud as correct people
      • All passion does is use pressure to introduce bias
    • Views free speech and open debate as the most vital things
    • View the content of ideas as more important than their origin
    • Think democracy gives too much power to the average person
    • Think that conflict theorists are making a mistake
      • If they were taught philosophy 101, they would see that forming mobs and smashing things are not the appropriate answer to social problems
  • Conflict theorists treat politics as war
    • Different blocs are at war with each other to claim resources for themselves and deny those resources to others
    • See debate as having a minor clarifying role at best - outcomes are decided by who has the most power in a situation, not who has the best arguments
    • Take asymmetry of sides as a first principle
      • Elites are few in number but have lots of power
      • People are many but powerless
      • The Elites try to sow dissent and confusion
      • The People must remain united in the face of these attempts so that their solidarity may overwhelm the Elites' material advantages
    • Conflict theorists see emphasis on paradoxical effects as a distraction - people presenting on paradoxical effects of social engineering are shills for the elites
    • Believe that the best way to save the world is to increase passion
      • Rich and powerful win because they work together effectively
      • Once the poor and powerless unite and stand up in the same way, they can have just as much power as the rich and powerful
      • Need activists to tell people which causes are important
    • View intelligence as suspect
      • Sees intelligence as being mainly used to create sophistic arguments that justify existing inequalities
    • View free speech and open debate as a way to allow the enemy to come in and spread their ideas
    • View the origin of ideas as more important than their content
    • Think the problem with democracy is that it's too easily coopted by elites
    • Think that mistake theorists are the enemy - if it weren't for mistake theorists shilling for the rich and powerful, social problems would already be solved
  • So now that we have this conflict vs. mistake distinction, what do we do with it?
    • Can draw further distinctions between mistake theorists and conflict theorists
      • Easy Mistake theorists believe that most of our problems come from really dumb people making simple mistakes
      • Hard Mistake theorists believe that questions are complicated and require more data than we've been able to collect so far
      • Easy conflict theorists believe that some positions are good and others are evil
      • Hard conflict theorists believe that conflicts occur between basically comprehensible viewpoints
  • While conflict theory is probably a less helpful way to view the world than mistake theory, both can be true in places
  • If someone is a conflict theorist, you can't use mistake theory arguments to convince them
  • My thoughts
    • I like the distinction between conflict theory and mistake theory, but I'm not sure that the distinction can be turned into useful policy recommendations
    • I think the last part of the essay, where Scott attempts to do something with the theory, is by far the weakest
    • That said, not every distinction has to translate into immediate action recommendations - it's fine to notice things, even if you're not sure exactly what to do, or even if anything has to be done
    • I think the biggest argument in favor of conflict theory is free trade
      • Mistake theorists think that conflict theorists just don't understand comparative advantage and the economic benefits of trade - "if only they'd internalize the lessons from economics 101, there wouldn't be a controversy at all!"
      • Conflict theorists, on the other hand, recognize that aggregate output is greater under free trade
        • However, those gains in aggregate output somehow magically accrue to other people, whether they're peasants in third world countries or plutocratic factory owners
        • So what good does it do the average middle class person to know that the aggregate output of the economy is greater, when none of those gains accrue to them?
        • Conflict theorists understand this and say, "Arguments for free trade are a scam - they are a plot to take your economic surplus and reallocate it to other people" which isn't entirely wrong

Completing The Countercultures

  • Countercultures of the 1960s-1980s took attention to boundaries as their central theme
    • Monist counterculture - 1960s youth movement - wanted to eliminate boundaries and level distinctions
    • Dualist counterculture - religious right - wanted to make all boundaries absolute
  • Meaningness suggests that oppositions between these mirror image stances can be resolved by more complete stances that correct metaphysical errors
    • So, Meaningness is nothing but Hegelianism in disguise? Treat the two (counter)cultures as thesis and antithesis, and attempt to turn them into a synthesis?
  • Apply conceptual framework to two countercultural battlegrounds: gender and national borders
    • Both are about boundaries
    • Both boundaries are nebulous and patterned
    • Ideologues sway many people by denying that these boundaries are both nebulous and patterned
    • Gender was the most important cultural issue in countercultural politics
    • War was the most important social issue
  • Boundaries are nebulous yet patterned
    • "Nebulosity" is the unstable, uncertain, fluid, complex and ill-defined nature of all meanings
    • These properties are unwelcome because the lack of solid ground makes it difficult to build a durable personal identity, social structure or political movement
    • Confused stances are defensive responses to nebulosity
    • Confused stances attempt to fixate meanings
    • Monism and dualism are confused stances concerning boundaries
      • Monism denies boundaries and distinctions
      • Dualism fixates them as perfectly sharp
    • Boundaries are generally nebulous, but they also represent real patterns, so monism and dualism are both wrong
    • When you're close to a boundary, it can become impossible to tell which side some items are on
    • Boundaries are also selectively permeable - some items pass through easily, whereas others are stopped
    • Monism and dualism deny this inherent complexity
    • They promise simplicity and clarity by hiding the variability and ambiguity of reality
    • However both monism and dualism are partly correct
      • Monism recognizes that boundaries are never absolute
      • Dualism recognizes that boundaries are important and should not be wished away
    • Complete stances recognize both nebulosity and pattern
    • The complete stance with respect to boundaries is "participation"
      • Recognition that boundaries are always both nebulous and patterned
      • Combines the insights of monism and dualism
    • The fundamental method for resolving confusion of meaning is to look for unacknowledged nebulosity
      • Look for unacknowledged nebulosity
      • Notice why it is unwanted
      • Watch how patterns of meaning are fixated and denied in order to avoid recognizing nebulosity
      • Work out what would be implied if nebulosity were acknowledged as inherent and unavoidable, but not a defect in the fundamental nature of reality
      • Fluid mode extends this method from the individual to the social and cultural level
    • When people are stressed by confusion, they retreat to simple, extreme views that they know are wrong, but which seem defensible in their absolutism
  • Gender
    • Second-wave feminism emerged during the countercultural era
      • Initially focused on workplace equality, and broadened into a general equality movement
      • Theme of equality resonated with monist counterculture
      • Denied the existence or legitimacy of any difference between male and female, sometimes even at the biological level
    • Symmetrically, dualist theorists insisted that men and women are properly, essentially, immutably and totally different
      • Society must, therefore, reflect and enforce the boundary between them
    • During the countercultural era, these extreme claims seemed somehow plausible
    • However, gender can't be wished away, nor is it an entirely hard-and-fast distinction
    • Sexes are different on average, but individuals span the range of variation
    • Some people don't fit neatly into either category
    • No essential characteristic that makes something definitely masculine or feminine
    • Most people are reasonably comfortable with the somewhat different expectations contemporary society and culture have for men and women
    • A minority finds these expectations burdensome
    • No one conforms to these expectations perfectly or consistently
    • This common-sense understanding is at least implicitly accepted by a majority of people
    • The mingled ambiguity and definiteness of gender isn't a big problem for most people most of the time
    • Since the end of the countercultural era, subculturalism and atomization have further complicated the meanings of gender
      • Second-wave feminism split into numerous third-wave sects
      • These third wave sects took diverse stances on the metaphysics of gender, with contributions from the LGBTQ community
      • This led to atomized intersectional fourth-wave feminism, which has lost coherence and uses whatever contradictory subcultural ideologies are convenient in the moment
    • What is the fluid mode with regards to gender?
      • Gender manifests as a pattern of interaction between two specific people in specific situations at specific times
      • What counts as a masculine or feminine way of interacting is constantly renegotiated
      • Does not mean that this distinction is arbitrary
      • However, it is usually so routine as to go unnoticed
      • It is only when this routine breaks down that the nebulosity of gender comes momentarily into consciousness
      • While we are constantly aware of how our micro-scale behaviors will be interpreted according to macro-scale ideologies, we're never really governed by them
        • Broad ideologies ignore day-to-day realities
        • Are not specific enough to govern individual interactions
    • Because gender is patterned, we can never really be free of it
    • Because it is nebulous, we can never perfectly embody it
    • Between the two extremes, there is an open space, in which we can take a playful attitude towards choice
    • Monism and dualism obscure the practicalities of specific conflicts
    • Dropping monism and dualism would still leave plenty of room for disagreements, but those disagreements would have to be argued on specific, practical grounds, instead of abstract metaphysical ones
    • Some dualists point to biological differences, like the presence of a Y chromosome as being essential
      • But the Bible doesn't mention anything about chromosomes
      • There are some people with Y chromosomes who everyone believes to be female, because there's no indication, physical or mental, of masculinity
    • Some monists say that since there are no differences between men and women beyond those imposed by society, you are whatever gender you say you are and society has an obligation to treat you that way
      • But then what about someone like Rachel Dolezal, who claims to be black, even though she doesn't have any black ancestry and was born blue-eyed and blonde-haired
      • Race is even less biologically determined than gender, so if it's okay for someone to claim that they're a different gender from the one they were biologically born as, shouldn't it be equally okay for someone to claim to be a different race from the one they were biologically born as?
    • It's reasonable to recognize that gender can't simply be wished away
    • On the other hand, it's also important to recognize that there is no unambiguous fact of the matter about what sex anyone is
    • Someone who passes for a particular sex might as well be treated as being that sex for most purposes
    • It would be helpful if we could restore the public/private boundary that the countercultures destroyed, and then agree that gender is a private matter
  • Sovereignty, borders and war
    • The Westphalian model of a state is the epitome of dualism
      • Holds that there are perfectly defined permanent borders between states
      • Every square inch of land is part of one and only one state
      • The government of a state holds sway uniformly within its borders
      • The government of a state has no right to exert any influence beyond its borders
    • This is a highly unnatural configuration
      • In older eras, borders were vague and shifting
      • The sovereign's rule was absolute in the capital, but faded with distance
      • The main job of a king was to meddle in the affairs of neighboring kingdoms, which led to wars and border adjustments
    • Westphalian system was invented to prevent war
    • However, Westphalian sovereignty laid the foundations for World Wars 1 and 2
    • Monist approach to borders
      • Eliminate national boundaries
      • Wars occur between countries
      • No countries = no wars
    • However, countries and borders can't be wished away
    • But, that said, borders aren't hard-and-fast divisions
      • Only North Korea today even maintains the pretense of total isolation
    • At the end of the countercultural era, diplomats and international institutions quietly revised the system of international relations to reflect the nebulosity of borders
      • European Union develops as a model for blurred sovereignty, with extant, but permeable borders
      • World Trade Organization increases the permeability and complex selectivity of borders
      • Rwandan and Bosnian genocides changed the minds of many anti-war leftists and established the principle that great powers have both the right and the responsibility to intervene in the internal affairs of sovereign states
      • More recently, failures in the Middle East have convinced many dualists that many wars cannot be won by military force alone
    • The only workable questions today concern the specific pragmatics of how borders operate
      • Which peoples, goods, services, monies and armies are allowed to cross and for what reasons?
  • My thoughts
    • Gender
      • With regards to gender, Chapman is being stupidly wishful
      • He's saying, essentially, "Why can't we just all get along and let people be whatever gender they claim, even if we disapprove in private?"
      • The problem is that modern society has a notion that certain genders and ethnicities are allowed to make priority claims for certain scarce resources (such as college scholarships)
      • In such a scenario, it makes sense to closely examine whether people are "truly" of their claimed race or gender, to ensure that only the people who are "deserving" of priority claims are making priority claims
    • Borders and war
      • Chapman is making some bold historical claims (the main job of a king was to meddle in the affairs of neighboring kingdoms) without a shred of evidence to actually back up his claims
      • I don't even agree with his claim that the Westphalian system was created to prevent war
        • The Westphalian system was created to prevent the sort of endless religious war that led to societal devastation
        • The Westphalian system was an acknowledgement by Reformation-era kings and princes that, at the end of the day, it's better to have something to rule over, than fight until everything is destroyed
        • I don't think anyone would have claimed that a Westphalian system would be the end of all wars
        • It was seen as a truce between two forces, Protestantism and Catholicism, that had fought each other to exhaustion
      • I think the Responsibility to Protect (R2P) doctrine has been a disaster for international relations
        • R2P is impossible to enforce, and impossible to apply correctly
        • Almost any intervention can be justified on R2P grounds
        • Responsibility to Protect places an undue burden on Great Powers, and simultaneously takes sovereignty away from the people of less powerful countries and gives that sovereignty to Great Powers
        • Responsibility to Protect is what got us the intervention in Libya - yes, a massacre was prevented… at the cost of wrecking the country. Is the average Libyan today better off than they were under Qaddafi's rule?

Counterculture War

  • Both sides in the counterculture war think they're losing
  • They're both correct - they lost decades ago, and we're living in the wreckage of these countercultures
  • The left and right of American politics are descendants from the monist and dualist countercultures of the 1960s-1980s
  • We are doing politics wrong
    • Politics is supposed to be the way we deal with vast problems and impending catastrophes
    • Only now, politics is causing those problems, rather than solving them
    • Democracy, by definition, isn't working when most people disapprove of the government
    • The major parties, ostensibly representing monist and dualist value systems, are both seen as actually representing little more than the interests of their corporate donors
    • Media coverage of politics makes everything worse, deliberately, in order to drive engagement and ratings
    • This is not just true of the US, but also of the world - extremist parties are gaining ground everywhere
    • The public desires fundamental change, not necessarily extremism
    • The current state of affairs has been good for the ruling class, both politicians and plutocrats
      • Much easier to cut backroom deals when the political debate is overwhelmed by hot-button social issues
    • Much of global macroeconomic policy has been run for the benefit of the financial industry at the cost of everyone else
      • This persists because macroeconomic policy isn't about "values", so therefore it's "not political"
  • Baby Boomer Bafflement
    • The culture war persists because most Baby Boomers do not understand why their countercultures failed
    • Many participants have a wistful certainty that the counterculture of their youth will rise up again, and will replace the current mainstream
    • Both sides resent the other as the apparent explanation for their own counterculture's failure
    • One of the reasons the culture war has heated up over the past few years is because the Baby Boomers are realizing that they're soon going to age out of politics and this is their last chance to influence the cultural consensus
    • Maybe understanding that opposition from the other tribe was not the reason for the failure can help overcome polarization
      • Countercultures failed because the majority did not agree with them
      • The majority rejected countercultures because they were plainly wrong about many things
      • It would help to understand how younger generations relate to meaningness - some of your main issues are complete nonissues for them
      • Let go of the sacred myths of your tribe
      • Much of what you fight about is symbolic, not substantive - advocacy is not about issues but about establishing tribal identity
      • If you understand what you disagree about, you can find pragmatic compromises, instead of trying to demand total victory
  • Let Go Of The Sacred Myths of Your Tribe
    • Both countercultures were claims about the ultimate truth of everything that explains all meanings
    • Both countercultures were attempts to rescue eternalism from the threat of nihilism
    • Counterculture eternalisms function much like religion, even when they're non-theistic
    • Some of the hardest fought political battles are not even so much about "values" as they are over symbols
      • Flag burning
      • Harambe
    • Any issue that gets turned into a tribal/political shibboleth is invariably distorted by its role as such
      • Abortion
      • Gun control
      • Keystone XL
    • Both sides know that the other side's eternalism is wrong
    • But secretly, they know that their own eternalism is wrong too
    • The way to let go of these ideologies is to learn meaningness
    • Moreover, people stuck in the countercultural mode of understanding the world don't even comprehend the problems that later generations face
  • Why are THOSE PEOPLE so awful?
    • During the countercultural era, politics was about substantial social issues and genuine differences in values
    • Now, culture war is mostly about identity and status
    • Most politics today is ritual posturing and intra-tribal communication, rather than engaging between tribes
    • The question is about who is going to win, not how can we change society for the better
    • The moralization of politics has been a disaster
      • Ensures that compromise is impossible
      • How can you compromise when the other side is evil?
      • Moreover, even if you do compromise, how can you trust the other side to hold true to the compromise?
    • The sense of doom among both tribes is correct, but not because the other tribe is about to win
    • Both sides are doomed, because future generations largely don't care about their conflicts
  • Disentangling the culture war
    • Both sides recognize the culture war is harmful and should, at some level, stop
    • On the other hand, the culture war feels like it's about sacred values, and therefore not amenable to compromise
    • Progress has to come from a better understanding about what both sides actually care about
    • We need to disentangle morality from politics
      • Better understand the functions of morality
    • Differences in values are much smaller than people think
    • Most supposed conflicts in fundamental values are actually disagreements about concrete issues
    • Arnold Kling - Three Languages of Politics
      • Progressives are primarily concerned with oppression
      • Conservatives are primarily concerned with civilization vs. barbarism
      • Libertarians are primarily concerned with freedom vs. coercion
      • As a result, all three groups talk past one another, and no one hears arguments from the other two groups
    • All three of the axes above are somewhat orthogonal - it's possible to minimize oppression and maximize civilization while limiting coercion
    • Empirical studies suggest that opposing political groups can come to understand each other if they learn to talk in terms of the other side's preferred fundamental values
    • Moreover, they can change the other side's mind by using that language
    • Few people today are willing or able to switch moral languages
    • More people passing the ideological Turing Test would go a long way to enabling more compromise
    • Also, we should ask people questions about how they think a particular policy would work, rather than whether they think it's right or not
    • Moreover we should recognize when people's actual personal practice differs from their political ideology and use that as a means to drive compromises
      • Example: upper-middle-class liberal families often embody conservative values far better than lower class conservative families
        • Less sexual promiscuity
        • More economic stability
      • Maybe the upper middle class could preach what it practices and understand that the rhetoric of sexual freedom is actually harmful for lower classes
  • My thoughts
    • Again with the historical ignorance! Galleons were actually quite good boats, for their time period. You try designing a craft to haul large amounts of cargo across the Atlantic from first principles!
    • Doing politics wrong
      • Has politics ever done well at solving problems and dealing with catastrophes? It seems to me that just barely dealing with catastrophes while occasionally causing bigger problems is the way politics has always worked
      • Oh my god, for the last fucking time, AMERICA IS. NOT. A. DEMOCRACY. It's not even really a republic. It's this weird hybrid federal system that started out as being mostly a republic but had democratic elements grafted onto it by the Progressive movement
      • The "only 20% of Americans approve of Congress" statistic is bullshit math. Yes, less than 20% of Americans approve of Congress as a whole, but the vast majority of Americans approve of their individual representative. Paul Ryan and Keith Ellison both get elected with >50% of the vote
      • With regards to extremist parties gaining ground everywhere, that's both true and not true. Yes, AfD won more votes than they've ever received in the past, but on the other hand, Le Pen flopped hard in France
      • The public may want fundamental change rather than extremism, but Chapman conveniently ignores that extremism is how you get fundamental change - moderates, by definition, want to continue the status quo
      • With regards to backroom deals, again, Chapman is factually wrong and, as a result, gets the argument backwards. It is harder than ever before to cut backroom deals. Legislation is more heavily scrutinized than ever. In fact, much of our current gridlock results from changes intended to limit so-called "pork barrel" spending. As it turns out, bacon grease is a great lubricant, and without it, the gears of government seize up
    • Baby Boomer Bafflement
      • Literally any time you argue on the basis of generational cohorts, you're on really shaky ground, and your argument is automatically suspect
      • There is as much diversity inside a so-called "generation" as there is between generations
      • With regards to the culture wars heating up, I'm not sure they actually are. I don't think the culture wars of today are hotter than the culture wars of the 1960s, much less the 1860s
      • The rest of this is literally just a long-winded version of, "Why can't we just sing kumbayah and hold hands?"
    • Let Go of the Sacred Myths of Your Tribe
      • What is Chapman even talking about here? He's drifted off into his own weird abstractions, where literally each word has multiple five-figure word counts backing it up
      • Literally worse than reading postmodernist philosophers - even Derrida was restricted to footnotes
      • The leftist example of an issue that has become distorted by its role as a political shibboleth is Black Lives Matter
      • Or alternatively, intersectionality of oppression
      • The fact that Chapman can't think of any such issues just betrays his own political leanings
      • I just don't buy Chapman's notion that people (even subconsciously) think that their own eternalisms are wrong
        • From talking with and reading committed ideologues, both on the conservative and liberal side, I think they're quite convinced that their ideology is not only not-bankrupt, but actually flush with cash
        • This notion of people secretly understanding that both ideologies are falling apart does not square with the reality of people who believe in said ideologies
    • Why are THOSE PEOPLE so awful?
      • I like how Chapman thinks Millennials all hold hands and sing kumbayah - it's cute, in a way
    • Disentangling the culture war
      • Repeatedly, Chapman has stated that if only both sides knew what they were actually fighting for, they would be able to compromise
      • I am not at all sure about this
        • Example: Abortion:
          • If you think a fetus is morally a human being, then the killing of a fetus, for whatever reason, is an immoral act
          • It might be necessary, in a trolley-problem sense - i.e. taking one life so that both mother and fetus don't die
          • But to take the life of a fetus when it is not necessary to do so, is in this moral system nothing short of state sanctioned murder
          • I think this puts you in opposition to those who state that abortion is a matter of "choice", and this opposition cannot be fixed by merely better understanding what you're fighting about
      • He says that most disagreements about fundamental values are disagreements about concrete issues, but there's no way to disentangle the two - fundamental values inform your stance on concrete issues
        • Insofar as government is about making meta-level decisions about the system that allocates scarce resources to people and groups, the abstract fundamental values decide who has moral priority to make claims on those scarce resources
        • This is really important, and cannot be papered over by saying, "Oh, if only both sides truly understood what they were fighting for, they'd be able to compromise"

Are Your Enemies Innately Evil?

  • Most people don't see themselves as evil
  • The enemy's story, seen from the enemy's point of view, isn't going to make the enemy look bad
  • However, because politics is the mind-killer, it's difficult to explain the enemy's true motivations without making it seem like you're defending the enemy
  • If seeing the world from the other side makes you feel sad rather than righteous, then you might be seeing the world as it truly is
  • This doesn't mean that your enemies beliefs are true, or right
  • It just means that they were doing what they thought was best, given their beliefs, just as you are doing what you think is best, given yours
  • There is no rule that says that there has to be an option that isn't tragic in some way
  • My Thoughts
    • Yep, agree with pretty much everything in this essay

2018-01-29 RRG Notes

Different Worlds

  • Scott realizes that his experience with psychotherapy isn't anything like those of his colleagues
    • Scott's patients gave calm and considered analyses of their problems
    • His colleagues' patients all had dramatic emotional breakdowns
    • Scott's supervisor noted that he seemed to be uncomfortable with dramatic expressions of emotion, even though he was actively trying to hide that fact
    • Scott was able to turn this around into a reputation for being able to deal successfully with really difficult patients who have a lot of emotional breakdowns
    • This ability of Scott's has been described as a "niceness field"
    • This means that Scott's lack of success with psychodynamic therapies might be due more to Scott's own personality than to the limitations of psychodynamic therapy itself
  • Paranoia and Williams Syndrome
    • Paranoia is a common symptom of a lot of psychiatric disorders, most notably schizophrenia
      • The troubling thing about paranoia is how gradual it is
      • Instead of thinking the CIA is after you with mind-control rays, you'll just interpret ambiguous social signals a bit more negatively
      • This can lead to a self-reinforcing feedback loop, as the person becomes more and more standoffish in response to perceived slights from others
    • Williams Syndrome is the opposite of paranoia
      • People with Williams Syndrome are "pathologically trusting"
      • Literally incapable of distrust
      • Williams Syndrome is usually, but not always coupled with mental retardation
      • However, IQ doesn't seem to have much of an impact on Williams Syndrome - it seems like threat detection is an automated process, not well controlled by conscious analysis
    • Psychiatric disorders are often just the extremes of normal human variation
      • For every person who is diagnosed autistic, there are a dozen people who are awkward and weird
      • For every intellectually disabled person, there are a dozen that are just kind of slow
      • Maybe for every person diagnosed with Williams Syndrome, there are a dozen that are just more "trusting" than others
    • Our sense data is underdetermined
      • Each data point that our senses receive can be interpreted in multiple ways
      • This is especially true of social cues
      • Most people are able to navigate this ambiguity with context, i.e. priors
      • However, these priors can vary from person to person
        • Human society permits quite a lot of variation before we decide that someone is so off-base that they need to be excluded
      • Just as there's a spectrum from smart to dumb or a spectrum from introverted to extraverted, there may be a spectrum from completely trusting to completely paranoid
  • Bubbles
    • 46% of Americans are young-earth creationists
    • However, even though Scott isn't selecting friends on the basis of politics, religion or class, he has approximately zero friends who are young-earth creationists
      • Maybe instead of Scott excluding young-earth creationists, the young-earth creationists are excluding Scott - if it's their policy to not make friends with people who don't believe in young-earth creationism, then it's possible for them to end up in a bubble, even though people outside the bubble aren't necessarily excluding them (a back-of-the-envelope check of how unlikely a creationist-free friend group would be by chance appears at the end of this list)
    • Some other bubbles that Scott lives in:
      • Transgender - people in Scott's circle of friends are 20x as likely to be transgender as the general population
      • 2x as many Asians as the general population
      • Half as many African Americans as the general population
      • Depression, OCD and autism are high
      • Drug addiction and alcoholism are low
      • Programmers overrepresented at 10x the Bay Area average
    • None of these bubbles were intentionally created
    • Moreover, some of these bubbles have persisted in the face of conscious efforts to pop them
    • This goes double for relationships - even though Scott doesn't think of himself as having a "type", all of the people he's dated have been similar in ways that he didn't expect when he first met them
    • This bubble theory is something that Scott thinks about when he meets serial abuse victims
      • Serial abuse victims are people who have been abused by multiple people in a row
      • Often abused by the very people they go to for relief from the abuse
      • Offensive explanation: seek out abusers because for some reason they've internalized a model that defines abusive relationships as "correct"
      • While this may be true of some victims, it doesn't seem to be true of many
        • Go to great lengths to avoid abusers, but it doesn't seem to matter
        • In the same way that Scott finds himself in a bubble of transgender programmers, these people might find themselves in a bubble of abusers
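    • A back-of-the-envelope check (my own Python sketch, with a guessed friend-group size, not Scott's numbers): if friendships formed at random, a friend group with zero young-earth creationists would be astronomically unlikely at a 46% base rate

      base_rate = 0.46                     # stated fraction of young-earth creationists
      n_friends = 150                      # hypothetical friend-group size (assumed)
      p_zero_by_chance = (1 - base_rate) ** n_friends
      print(f"{p_zero_by_chance:.1e}")     # about 7e-41 - effectively impossible by
                                           # chance alone, so some filtering is happening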
  • Discrimination
    • Some women in the tech industry experience a constant litany of harassment and discrimination, whereas other women go their entire careers without experiencing a single harassment event
    • This doesn't seem to correlate with industry, company or physical attractiveness
    • Given the baseline rates of discrimination reported by others, it can't be just luck for some of these people to go their entire lives without experiencing any discrimination at all
  • These two forces, self-selected bubbles and the ambiguity of social cues combine to create different worlds for different people
    • People unconsciously self-select into bubbles
    • People vary in how they perceive social interactions
      • Discrimination is rarely as blatant as people being called out for their race or gender directly
      • There is usually some room for interpretation ("Was I being discriminated against there?") which means that different people will perceive discriminatory experiences differently
    • Are people basically good or basically evil?
      • Some people say that the world is full of hypocritical backstabbers
      • Others say the world is full of basically decent people who are hampered by communications difficulties and differences in values
      • Both groups are basically correct, because they just see different slices of the world
    • This applies on all sorts of axes, not just good/evil
      • Are people basically rational or basically emotional?
      • Are people welcoming of outsiders or shunning of outsiders?
      • Etc
    • The concept of "privilege" gets part of the way to capturing these differences of experience, but privilege has the limitation of insisting that these differences have to line up along predefined categories like race or class
    • Knowing that someone is living in a different world from you can go a long way towards making their behaviors more comprehensible

The Narrative Fallacy and What You Can Do About It

  • The Narrative Fallacy
    • A typical biography starts by describing the subject's younger life and tries to show how the young person was an early version of the person that they would become
    • Steve Jobs
      • Biographers play up the fact that he was adopted and imply that this led to a need for him to prove himself
    • Nassim Taleb
      • Describes how a professor who had read his earlier book ascribed his ability to separate cause and effect to his growing up in an Eastern Orthodox society
    • The problem is that both of these narratives are contradicted by their subjects
      • Steve Jobs actively denied that his being adopted had anything to do with his later success
      • Nassim Taleb looked at others in the financial industry who came from the same background as him, and found that none of them became skeptical empiricists
    • The narrative fallacy exists because of a biological problem
      • Too much sensory information to process events independently
      • We have to put things in order so that we can process the world around us
      • The world does not make sense without cause and effect
    • While our tendency to order the world into narratives works well in general, sometimes it causes us to make errors
  • The problem with narrative is that it lures us into believing we can explain the past through cause-and-effect when we hear a story that supports our prior beliefs
    • Example: sports
      • Every profile of an athlete has roughly the same form
        • Natural gift for the sport
        • Parents or coaches that pushed them to strive for excellence
        • Hard work ethic
        • Some kind of adversity or impactful life event
      • But we don't stop to ask ourselves why this person succeeded when the thousands of other people who have the same backgrounds failed
    • Narratives cause us to miss the influences of luck and timing
    • Narratives also cause us to ignore the mathematical rules of probability
      • Nassim Taleb talks about the example of a good detective novel making it seem like every suspect must be the criminal, right up until the final reveal
      • Kahneman talks about how people are willing to ignore the fact that two things occurring together can be no more likely than either one alone, if the two things together fit a pre-packaged narrative (see the sketch after this list)
    • Narratives also ignore regression to the mean
      • All stories of success have a fair amount of luck in them
      • Eventually this luck runs out and the person or business reverts to the mean
      • This doesn't mean that they're any worse, only that they're not as lucky as they used to be
    • The problem with narratives is that we make them predictive and by doing so we make them more real than they actually are
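    • A minimal Python sketch (mine, with made-up numbers, not from the article) of the two probability points above: the conjunction rule, and regression to the mean under a simple outcome-equals-skill-plus-luck model

      import random

      random.seed(0)

      # Conjunction rule: P(A and B) can never exceed P(A), however compelling
      # the story linking A and B sounds. The numbers below are purely illustrative.
      p_adversity = 0.5                    # hypothetical: athlete faced early adversity
      p_pro_given_adversity = 0.001        # hypothetical: of those, fraction who go pro
      p_both = p_adversity * p_pro_given_adversity
      print(p_both <= p_adversity)         # always True

      # Regression to the mean: rank people by one period's outcome (skill + luck),
      # then look at the same people in a second period with fresh luck.
      skill = [random.gauss(0, 1) for _ in range(10_000)]
      period1 = [s + random.gauss(0, 1) for s in skill]
      period2 = [s + random.gauss(0, 1) for s in skill]
      top = sorted(range(10_000), key=lambda i: period1[i], reverse=True)[:100]
      avg1 = sum(period1[i] for i in top) / 100
      avg2 = sum(period2[i] for i in top) / 100
      print(f"top 1% in period 1: {avg1:.2f}; same people in period 2: {avg2:.2f}")
      # The second-period average is noticeably lower even though skill is unchanged:
      # the luck that put these people on top did not repeat.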
  • A close cousin of the narrative fallacy is the reason-respecting tendency
    • People are more willing to comply with requests that come with stated reasons, even when those reasons are meaningless or irrational
    • Example: people are able to jump to the head of the line at a copy machine by simply stating that they have to make copies, even though everyone else in the line has exactly the same reason
    • This is because reasons allow us to build narratives
    • This is why teaching that gives reasons for facts is so much more effective than teaching that asks us to memorize the bare facts themselves
    • This means that the best teaching, learning and storytelling methods (those that involve reasons and narrative) can also cause us to make our worst mistakes
  • So how do we help ourselves out of this quagmire?
    • Become aware of the problem
      • The key question to ask is, "Out of the population of X subject to the same initial conditions, how many turned out similarly to Y?"
      • "What hard-to-measure causes may have played a role?"
    • Modern scientific thought is built on top of efforts to solve this problem
      • The entire notion of a hypothesis comes from the fact that people recognized that simple narrative explanations were not sufficient to explain the world
      • Narratives have to be experimentally tested before they can be accepted as true cause-and-effect relationships
    • Another question we can ask ourselves is, "Of the population not subject to initial conditions X, how many ended up with the results of Y?" (a minimal worked comparison appears after this list)
      • Which basketball players had intact families, easy childhoods and ended up in the NBA anyway?
      • Which corporations lacked the traits talked about in business books but ended up successful anyway?
    • We can also reduce our vulnerability to the narrative fallacy just by reducing the number of narratives we consume
      • Stop watching TV news
      • Be skeptical of biographies, memoirs and personal histories
      • Be careful of writers who claim to be writing facts, but are talented at painting a narrative (Malcolm Gladwell, Thomas Friedman)
    • We can reduce the power of narrative in our own lives by keeping journals
      • Whenever you're about to make a risky or uncertain decision, write down exactly why you're making that decision
      • When the decision eventually succeeds or fails, you can then look back at that document and evaluate your decision-making rather than coming up with a convenient narrative that explains why success or failure was inevitable
    • When searching for truth, favor experimentation over storytelling
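    • A minimal sketch of the base-rate comparison behind the two questions above, with entirely made-up counts (the essay gives no numbers); the point is to compare the outcome rate with and without the supposed cause, rather than only collecting stories where both are present

      # Hypothetical counts, for illustration only
      adversity_and_pro = 30           # faced adversity and made the NBA
      adversity_total = 100_000        # faced adversity
      no_adversity_and_pro = 25        # easy childhood and made the NBA
      no_adversity_total = 80_000      # easy childhood

      p_pro_given_adversity = adversity_and_pro / adversity_total
      p_pro_given_none = no_adversity_and_pro / no_adversity_total
      print(f"P(pro | adversity)    = {p_pro_given_adversity:.5f}")
      print(f"P(pro | no adversity) = {p_pro_given_none:.5f}")
      # If the two rates are close, "adversity" carries little predictive weight,
      # no matter how compelling the individual success stories sound.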

What Universal Human Experiences Are You Missing Without Realizing It?

  • Some people just don't have visual imaginations
    • Assumed that when other people were talking about visualizing objects, they were speaking metaphorically
    • Got so good at talking about mental experiences as if they were visual that people with visual imaginations thought that they were having visual experiences
    • Only when Galton actually surveyed people did we find out that there is in fact a broad variation in people's ability to form mental imagery
  • Some people don't have the ability to smell (anosmia)
    • Can go for years without realizing that they don't have the ability
    • Often realize that they're not able to smell only when they're asked specifically about smells in great detail
  • So what other "fundamental" experiences are people missing out on?
    • Asexuality - for most people, sex isn't gross or weird
    • Emotional blunting - Scott may not have had emotions for about 5 years when he was on SSRIs
      • Thought that everyone else was just being dramatic and overexuberant
      • Even when he noticed himself not having emotions, he dismissed the fact
      • Only learned later that emotional blunting is a common side effect of SSRIs
    • Passion for music
      • Scott doesn't really enjoy jazz - at best he gets some kind of half-hearted feeling that he could snap his fingers to the beat if he really tried
      • Meanwhile his brother fell in love with jazz and is now a professional jazz musician

Why You're Stuck In A Narrative

  • The narrative fallacy is our tendency to turn everything into a story
  • Unfortunately the real world has very few examples of linear chains of cause and effect
  • Most outcomes are probabilistic, direct causation is rare, and events are complex and interrelated
  • Our brains are engines designed to analyze the environment, pick out important parts and use those to extrapolate
  • In the ancestral environment, simple linear extrapolation was "good enough"
  • Unfortunately, the world is much more complex today
  • The ability to simplify, cluster and chain ideas is what allows us to get away with a relatively small working memory and a slow neuron firing speed
  • This narrative fallacy shows up in a number of lower level biases
    • Availability heuristic - we make predictions based upon what we find easiest to remember - often this is what has the most compelling narrative attached to it
    • Hindsight bias - past events "obviously" and "inevitably" cause future ones
    • Consistency bias - we reinterpret past events and actions to be consistent with new information
    • Confirmation bias - we only look for data to support the narrative conclusions we've already arrived at
  • That said, we need narrative in order to have a single coherent self
    • When people have damage to the frontal lobe and lose the ability to process higher-order input, they lose the ability to organize their lives and actions
    • In the extreme case, they do not speak unless spoken to and do not move unless very hungry
    • People with damage to other regions of the brain lose specific abilities, but remain the same person otherwise
    • People who lose the ability to construct narratives lose their selves
  • At the other extreme, narcissists over-narrativize their lives, making everything about themselves
  • So what should we do about this?
    • Make conjectures and run experiments
    • Force beliefs to be falsifiable
    • Make beliefs pay rent in anticipated experiences

2018-01-22 RRG Notes

The Craft and the Community: A Post-Mortem and Resurrection

Preface:

  • The most comprehensive list of criticisms of the rationality community
  • Not a rejection of the group
  • Need to create a shared understanding of problems in the group so that group members can work on fixing them
  • Most people are aware that things aren't quite right, but don't really know what's wrong or how to fix it
  • Most of the disagreements will come down to disagreements over the size, scope and urgency of fixing the problems
  • This essay prioritizes clarity over civility - may rub some people the wrong way

Introduction

  • It's been almost 10 years since the publication of The Craft and the Community
  • So why hasn't the rationality community been more successful?
  • What have we actually accomplished in 10 years?
  • It's slightly horrifying that people are claiming that the best outcome of the rationality community is "interesting conversations at dinner parties"
    • Well, the problem here is that "rationality" is so ill defined, I'm not sure it can be something that one is successful at
    • What does a successful rationalist look like? Eliezer claims that a successful rationalist "wins", but there are plenty of people who win without using any sort of rationality skills
  • Eventually what has happened is that the people actually interested in instrumental rationality have gotten tired of the community, left, and accomplished other things
    • Arguably, this is exactly what should happen. Maybe the rationality community should not consider itself a community, but more like a training ground

Post-Mortem: What Went Wrong

  • It seems somewhat surprising that a community so full of potential has achieved so little towards its goals when looked at as a group
    • Is our community actually full of potential? I don't think we're actually higher potential than any other random group of West Coast smart people
  • While there have been exceptional people, the median person appears to be as successful as they would have been if they had not discovered the rationality community
    • This isn't exactly true of the Seattle rationality community - there are people here who have been motivated to change towards more successful life paths after their involvement in the rationality community here
    • Also, I disagree that the median person here is as successful as they would have been if they hadn't joined the rationality community
      • In my opinion, the rationality community is full of people who are at least a little bit "broken" from the perspective of broader society
      • If the rationality community can make those people as successful as the median person in broader society, then that counts as winning, even if it doesn't produce the sort of outcomes Bendini is looking for
  • Why has the rationalist community been unable to beat the "control group" of wider society?
  • Bendini thinks that there are three inter-related groups of causes:
    • Demographics
    • Environment
    • Culture

Problems and Causal Factors

  • Demographics: Background Selection Effects

    • The Sequences disproportionately attracted people who liked debating and theorizing
    • Attracted people who prefer extensive contemplation before action
    • If you have enough of these people together, founder effects will conspire to bias people against taking any action at all
    • We have a (much) higher than baseline prevalence of mental illness
      • Depression
      • ADHD
      • Anxiety
      • Autism
    • We have a high proportion of people who are intelligent enough to enter the upper echelons of society, but who fell through the cracks for one reason or another
    • Draw disproportionately from people with liberal, upper-middle-class values
    • These traits can have effects even when they're possessed by a small number of community members, due to tipping point effects
    • The rationality community is not immune from entropic forces simply by virtue of calling itself the rationality community, and we need to invest effort to mitigate the effects of those entropic forces
    • Bendini categorizes the rationality community into three main focus areas
      • Impact focus
        • Effective altruism
        • Ambitious startups
        • Major societal change
      • Human Focus
        • Relationships
        • Meaningful work
        • Happiness
        • Fun
      • Truthseeking focus
        • Curiosity for its own sake
        • Deep theoretical models
        • Empiricism
    • Each of these main focus areas has its own problems
      • Truthseeking focus
        • Deep theoretical models, particularly for psychology and sociology, don't model reality very well
        • Socially maladjusted people come up with these models - leads to "blind leading the blind"
        • People from blue-tribe high-trust environments come up with models that work in that environment and don't work outside of it
        • Idealism preventing people from making adjustments to their models in the face of contradictory data
        • The principle of charity leaves us open to people who would intentionally exploit the community, like Gleb Tsipursky
      • Impact focus
        • Inability to recruit underrepresented demographics
          • The rationality community has its own jargon, which is often incomprehensible to outsiders
          • Far too much reliance on concepts that are familiar only to people with physics, computer science or science fiction backgrounds
          • The community has been optimized for a narrow demographic, even though rationality has benefits for a much broader set of people
        • Inability to run non-business oriented projects
          • Project leaders end up doing far too much work because of cultural individualism and treating dissent as a terminal value
          • Analysis paralysis, combined with short attention spans leads to lack of decision-making
          • We have no ability to motivate people to do grunt work without paying them - this is a problem for volunteer projects
          • Volunteers aren't pro-active - means that leaders have to ask for everything to be done
            • This is partially due to the lack of transparency in the rationality community - it's difficult to volunteer for anything if you don't know what things are there to volunteer for
            • Even something as simple as a task list would be helpful here
          • The number of volunteers for a project is tied to how "shiny" the project feels
            • Leads to projects expending a significant amount of energy on publicity
            • Yeah, but how is this different than any other volunteer effort?
        • Distrust of outsiders reducing intake and spread of information
          • Not seeking (or even rejecting) assistance from people and groups who resemble "the system"
            • c.f. MIRI & Eliezer's relationship with traditional academia
          • Distrust of outsiders leads to overreliance on skills possessed by insiders
          • Evaluation of skills has more to do with shibboleths than substance
          • We end up re-inventing the wheel because we don't read and follow advice from those outside the community, which reduces our progress
        • Lack of focus on instrumental rationality
          • The rationality community is much more talented at reading, writing and debating than it is at anything that could be described as practical
            • I have long bemoaned the rationality community's inability to accomplish even the simplest tasks that don't involve software
            • How many in the community can change the oil in their car? How many can use an electric drill? How many can change the batteries in their thermostat?
          • The grand purpose of LessWrong was AI alignment, and, as a result, focus on AI alignment has sucked all the oxygen away from more pedestrian causes
            • This might not be a bad thing, if we, as a community, take the possibility of unfriendly AI as the most important thing to be working on
          • More impact focused people have moved over to EA, making already bad demographics even worse
          • People who are interested in individual rationality have left, because the community is a time-suck
          • Underachieving demographics have contributed to a cultural undervaluation of hard work and attention to detail
      • Human/Community focus
        • Romantic dissatisfaction of straight men
          • Huge gender gap means that men in the community can't have romantic relationships within the community
          • Inability for many straight men in the community to communicate effectively with women who don't have a rationalist or hard-science background
          • The community is passively hostile to women insofar as it is not a place where intellectually capable women wish to spend time
        • Difficulty forming deeper friendships
          • People have trouble making friends not because they especially desire solitude but because they just don't know how to make friends
          • Biases towards social awkwardness and passivity combine to make it difficult for people to have the recurring interactions that result in friendship
        • Difficulty executing short-term plans
          • People flake on plans that they've previously agreed to because of a combination of
            • Social anxiety
            • Low empathy
            • Poor time management
            • Inability to anticipate future selves' behavior
          • People don't let others know when they're going to be late - this is rude
          • It's difficult to make plans when you don't know when or how many people are going to show up
        • Almost complete inability to execute on longer time horizons
          • People struggle with stepping outside of their day-to-day lives and looking at things from a higher level
          • If you ask people what their plans are 5 years from now, you'll get a shrug in response
            • While this is bad for the community, it totally makes sense from an individual perspective - literally none of my 5-year plans have come to fruition, because of high-impact events that have occurred
            • If you'd asked me 5 years ago where I'd be, I'd have said Amazon
            • If you'd asked me 5 years before that, I'd have said that I'd be stuck in Minnesota pretty much forever
            • 5 years is a long enough span of time that it doesn't really make sense to plan over that time horizon - either things will go wrong or new opportunities will arise and you don't want to feel locked in to a particular course of action
          • Values like loyalty are seen as Red Tribe (outgroup) traits and are responded to with derision
        • Lacking a sense that more is possible
          • People don't know what great communities actually look like
          • Modern atomized society has a lot of flaws, and we don't always see them because we've grown up inside that society
            • More to the point, some of us have seen only the downsides of highly coherent societies, and have not benefited from the upsides
        • People feel the need to sell themselves
          • High turnover in the community means that first impressions become the only impressions
          • This leads to people selling themselves in whatever light will work out best for them
          • Then, in turn, people who are less willing to engage in signalling for its own sake feel like losers in a game they never agreed to play
    • There are three things we can do to attempt to resolve these demographic and cultural issues
      • Throw everything we have at drastically altering our demographic make-up
        • Likely to fail
        • Likely to ruin the community in the process
      • Attempt to start afresh, severing ties with the existing community
        • But how do you know you're going to do better?
      • Shift culture in an attempt to compensate for weaknesses
        • The most promising option
        • Still far from a guaranteed success
        • Requires us to pay attention to unpleasant things rather than reading insight porn
  • Environment: Picking The Wrong Location

    • There is increasingly no distinction between "the rationality community" and "rationalists living in Berkeley"
    • Putting the center of the community in Berkeley was quite possibly the worst strategic decision that we made
    • Berkeley's values are different from and contaminating rationalists' values
    • Most rationalists and rationalist organizations do not benefit from being located in Berkeley
    • From the outside, enthusiastic people go to Berkeley, go quiet on social media, and when you next see them, they don't seem quite like the person who left
  • The Background Cultural Environment

    • Berkeley is the most politically correct city in America
    • How can we have a community dedicated to free speech and open mindedness in a city famous for social-justice witch-hunts?
    • Being in Berkeley exposes the rationality community to Silicon Valley demographics, amplifying founder effects
    • These background effects actively get in the way of addressing the problems caused by demographics (above)
  • Social Turnover Has Increased To The Point Where It Has Major Effects On Incentives

    • Communities, especially individualistic ones, have a neutral attitude towards turnover
    • People coming and going is seen as the natural state of things
    • However, businesses tend to assign a much higher cost to turnover due to:
      • Loss of insider knowledge
      • Ramp-up time for new recruits
      • Reduced coordination between unfamiliar individuals
    • Social turnover reduces the consequences for defection (making it harder to commit to short and long term plans)
    • We are fortunate to have stumbled across an interest that is distinctive and deep enough to form a subculture around
    • The policy of pulling people into the Bay Area is harmful to other rationalist communities, and arguably not helpful to the people moving to the Bay Area
      • I wonder what proportion of the Bay Area rationalist community's growth comes from people moving there from other rationalist communities, vs. people already in the Bay Area signing on
    • The combination of the non-confrontational background social environment of Berkeley plus the social turnover in the rationalist community means that it's really difficult to give people uncomfortable but productive feedback
  • Economics: Time, Money, Spoons and Future Plans

    • Living in Berkeley is really expensive
    • This cost of living has second-order effects that magnify the damage to the rationalist community
    • The cost of living in Berkeley means that people have to get relatively high paying jobs and accept long commutes in order to live there
    • In addition to the cost of time, the increased stress of living in Berkeley means that people just have less energy to devote to rationalist projects when they get off work
    • The high cost of living in Berkeley means that it's really hard for non-profit efforts to recruit and sustain people
    • Projects have to produce something that can be monetized relatively quickly
    • The insane housing prices in Berkeley make it impossible for people to achieve "normal" life goals like buying a house or raising a family
    • The housing crisis is politically unsolvable - there are too many forces against lifting restrictions on building for house prices to reach a more reasonable equilibrium any time soon
  • Culture: Not Taking The Sequences Seriously

    • Almost everything that Bendini is talking about is mentioned as a pitfall in the original sequences
    • The default path of all groups is to ignore the literal interpretations of the founding texts in favor of the current cultural zeitgeist
  • Recap

    • So why did the rationality community go wrong?
      • Demographic factors formed multiple feedback loops that reduced our ability to operate effectively
      • We chose to centralize in Berkeley, which caused further feedback loops
      • The high cost of living in Berkeley drained members living there of the ability to combat these feedback loops
      • We ignored the lessons of the sequences that would guard against these tendencies in favor of whatever insight porn was making the rounds at the moment
  • What can we do about this?

    • Berkeley may be lost
      • Too many demographic problems
      • If the software industry can't fix these problems, despite having a million times the resources of the rationalist community, what hope is there for rationalists?
    • Berkeley's economic and social problems are present in most other rationalist hubs, including Seattle, Boston and London
      • Oh, really? I mean, maybe it's because I'm part of the Seattle rationalist community, but few of these criticisms actually ring that true for this community

The Craft and the Community: Resurrection

  • Intro

    • To do better than Berkeley, we need better
      • Demographics
      • A location that allows us slack to pursue rationality related projects
      • A culture that upholds rationalist principles
  • Demographics - Getting A Broad Range of Talents

    • The goal is to recruit undervalued demographics rather than underrepresented ones
    • The rationality community, because of its demographics, undervalues problems that cannot be solved with logical analysis or writing code
    • To this end, there needs to be a strong focus on narrowing the gender gap
      • Having nearly equal gender ratios means that everyone gets their romantic needs met
      • Women bring valuable skills in coordination and conscientiousness that the rationality community lacks
    • Focus on in-person recruitment
      • Introducing people to rationality related concepts is much easier in person than it is on the Internet
      • People from the area will have much less friction when joining your community
      • People who attend in person often have much better demographics than people who engage with the community over the Internet
      • A location that has a variety of employment sectors will have better demographics to recruit from than the Bay Area
        • Okay, I don't get this equivocation - on one hand Bendini is saying that the Bay Area only has tech workers, but on the other hand, he's saying that tech workers will never be able to drive a more sane equilibrium on house prices because they're such a tiny minority
    • Create better introductory materials
      • Needing to read the Sequences is a bottleneck that prevents new members from engaging with the community
      • The full set of sequences is over 2000 pages long
      • Much of the material has little or no immediate benefit
      • We need a guide to rationality that delivers immediate personal benefits in relatively short order
    • Discover the reasons that cause people to leave and work to mitigate those reasons
    • Champion members displaying behaviors that you want to see more of
    • Figure out our value proposition for new members
      • As someone joining the rationalist community, what can I hope to gain by becoming a member?
      • If the answer is "interesting dinner conversation", the rationalist community does not have a good value proposition
    • 2 Examples of demographics that we want to attract
      • Feminine/people-oriented women
        • Place greater emphasis on community promoting norms
          • Value things like loyalty, friendliness and caring for others
        • Tolerate elements of benevolent paternalism
          • Have figures who can make difficult decisions
          • Have figures who can remain steadfast in the face of social pressure
          • Sigh. "Benevolent paternalism" is just leadership. We don't need a new phrase for it
        • Don't undervalue female coded interests and values
          • Greater focus on empathy and people-awareness
          • Evaluate whether a thing is undervalued because it's not useful or because it's coded female
          • Greater focus on teamwork
        • Better onboarding/recruitment
          • Reduce inessential weirdness - don't expose new members to the full set of strange beliefs that rationalists have
          • Develop a marketing message that appeals to people-oriented individuals
      • Already successful people
        • Demonstrate clear value - show how someone who is already successful can become even more successful by using formal rationality
        • Don't resent the disparity in success between you and the person you're trying to recruit
        • Treat them the same as others in the community - don't let their success turn you into a yes-man
        • Create a community that is more than the social network of last resort for intelligent underachievers, depressives and the socially clueless
  • Environment: Manchester Works For Us, We Don't Work For Manchester

    • Why Manchester?
      • Low cost of living
      • Decent hourly wages, good job opportunities in our chosen fields
      • Low housing costs, and growth in the housing supply
      • Stable institutions and good rule of law
    • For the goal of human flourishing
      • Background demographics
        • Diverse range of industries
        • Good gender balance
        • English-speaking majority
      • Background culture
        • Not too atomized
        • Don't want to try to build a community in a city where people move to achieve career goals, as that results in high turnover
      • Good public transport system
      • Aesthetic beauty
  • Culture: More Productivity, Less Philosophy

    • Create a culture that values effective work
    • Stop values from being dictated by happenstance
    • In order to build a culture that has the values we want, we need a decent grasp of how cultures are actually formed
    • Need a benevolent dictator to solve coordination problems and prevent value drift
    • Make good decisions now instead of perfect decisions in a month's time
  • Conclusion

    • This is an effort by Bendini to find kindred spirits

My thoughts

  • I read that entire essay twice, and I still don't know what Bendini is trying to do
    • Is he trying to make a group house in Manchester? Why not come out and say that?
    • Is he trying to make a better CFAR?
    • What are the goals of the Kernel project, and how will we know when those goals are achieved?
  • On a more meta note, I think the entire concept of "rationality" may be a seductive trap for smart people
    • It's tempting to think that there exists a set of domain independent cognitive strategies that will make you more effective at whatever domain you're trying to get better at
    • We call this hypothetical set of domain independent strategies "rationality"
    • But, increasingly, I'm convinced that this domain independent set of strategies does not exist
    • If you want to get good at task X, the skills you need (cognitive and non-cognitive) are going to be different than the skills you need for task Y
    • There are some skills (knowledge of cognitive biases, ability to "step outside the system", etc) which have a limited amount of generality
    • But these skills are:
      • Few and far between
      • Not completely generalizable - there are lots of domains in which knowledge of cognitive biases or ability to step outside the system actually isn't all that helpful
      • Not actually that much of a force multiplier
    • Eliezer said one of his goals with rationality was to find people who were noticeably "better", but he didn't say what they were "better" at
    • The notion of a "renaissance man", who is better than his peers in a wide variety of fields, is a myth. Having just returned from Italy, I've found that even actual renaissance men (like Da Vinci, for example) didn't have nearly the impact that we would assume they would have
      • Da Vinci, for example, was extremely flighty, and would often abandon commissioned and paid-for projects because he lost interest
      • I now have way more respect for artists like Raphael, Tiziano (Titian), Tintoretto, Lippi, etc. They're not as well known as Da Vinci or Michelangelo, but they were way more consistent, way more productive, and honestly, they were just as good (at art) as Da Vinci and Michelangelo
    • So, my advice? Pick something you want to be good at… and then work on getting good at that thing
      • Want to be a better programmer? Read algorithms, learn lots of programming languages, ship code.
      • Want to be a better artist? Draw, paint, sculpt, whatever every day
      • Want to be a better X - find the skills that help you with X and practice those
      • Rationality will at best bring you a ~5% improvement. It's not zero, but it's not life-changing either. Learn enough about it so that you can effectively come up with strategies to get better at whatever you're trying to get better at
        • This quantity may very well be 0. In certain domains (especially competitive domains) it may very well be the case that the domain has been well studied enough that you can follow experts to come up with good strategies, and you may not need rationality at all
    • With that in mind, I'm going to try to reduce my exposure to rationality in the abstract and increase my exposure to people with actual skills
      • Fewer rationality meetups; more programming meetups
      • More reading about object level things, rather than abstract rationality

2018-01-08 RRG Notes

Am I Truly Mardukth

  • Why is the state of human welfare so bad, even for those at the top of Maslow's Hierarchy?
    • Lots of people have their basic needs met
      • Food, shelter, clothing
      • Relationships
    • So why are they still unhappy?
      • Claim that needs aren't being met
        • Not enough money to be secure
        • Social situation is precarious and anxiety-producing
      • This may be true for some people, but for many people, these worries are completely disconnected from any external truth
        • Some of the richest people on earth worry about financial insecurity
        • Some of the most popular, well-connected people worry about social status
  • Before we ask why we're unhappy, we should ask why we should ever expect to be happy in the first place?
    • The role of happiness is not to serve as a baseline for human experience
    • The role of happiness is to serve as a reward for successfully performing adaptive behaviors
      • Citation needed, here - people have different baseline levels of happiness
      • Just going by my personal evidence, most non-rationalists actually are pretty happy, by and large - claiming that people aren't happy makes no sense when most people actually are happy
    • If people are happy all the time, then happiness fails as a behavior management tool
      • Not really? You can be happy, and take away that happiness for failing at something
    • But modern society is so good at providing for our basic needs, the periodic bursts of happiness we get have lost their meaning
    • This produces aimlessness and anomie
    • But this doesn't mean that modern society is bad - aimlessness and anomie are far preferable to a situation in which you're constantly stressed out over meeting basic needs
    • If we're ever going to build a golden age of human welfare, we need to abandon the notion that there has been any point in the past which has been a golden age
    • Anomie has replaced
      • The terror of deprivation
      • The soul crushing tyranny of tightly-knit all-powerful social structures
        • Except, for most people in those social structures, it's not soul crushing. It's only soul crushing for deviants like you
  • We can use the lab-rat/pellet metaphor as a model for the problems that people face as they approach the top of Maslow's hierarchy
    • Under traditional circumstances, people experience life as a constant quest for small psychic rewards
    • These rewards are earned for performing useful tasks that fulfill physical or social needs
    • However, modernity allows humans to fulfill these needs with such ease that the rewards for their fulfillment lose all psychological weight
    • This produces an aimlessness and loss of life-structure that people respond to by flailing about in various dysfunctional ways
    • This phenomenon has afflicted elites for as long as we've had civilization, but now it afflicts ordinary people as well
  • 3 basic solutions:
    • Enlightenment (in the Buddhist sense)
      • Restructure your mind and modify your utility function so that you no longer desire the sensation of successful reward-hunting
      • Doesn't scale
      • Buddhism has been the most popular religion in the most populous region of the world and it has only produced a handful of enlightened sages
        • The premises are factually wrong, and this demonstrates a gross misunderstanding of Buddhism.
        • Buddhism has never been the most popular religion in East or Southeast Asia - that distinction belongs to Hinduism, Shintoism, Taoism or Confucianism, depending on the locale and time period
        • Moreover, you're eliding over the difference between Theravada and Zen Buddhism - not all Buddhism is about attaining enlightenment!
        • Just annoys me because this is the rationalist equivalent of watching a bunch of anime and thinking that this gives you special insight into Japanese culture (or equivalently, watching MTV and thinking this gives you special insight into American culture)
    • Create artificial needs whose fulfillment is associated with artificial rewards
      • More commonly known as "setting life goals"
      • This is not a bad thing
        • If it's not a bad thing, why do you purposefully describe it with words which have negative connotations?
        • "I'm not knocking it, except for the part where I knocked it
      • Necessary if we want people to keep striving and achieving even though all their minimum goals have been met
      • The problem is that this isn't enough
      • Self-imposed needs are much weaker than basic needs in driving motivation
      • You can't question the importance of basic needs like you can with self-imposed needs
        • No one will ever call looking for food, shelter, clothing, etc, "pointless"
      • Bad things happen when people tie too much of their identity to their goals... and then fail to achieve those goals
      • People have a limited amount of drive and emotional resilience
      • If people tie up large parts of their identity in their goals, it becomes impossible to question those goals without implicitly attacking people's identities
      • People need some basis for psychological well being beyond succeeding at goals
    • Narcissism
      • Has an unfortunate negative connotation
      • Could also be called "narrative-orientation" or "symbolic-orientation"
      • Tell yourself a story about who you are and why your life is worthwhile
      • Everyone does this to some extent already
      • Create a narrative that explains who you are and why you're valuable, as a person
      • Whenever you feel hurt or worthless, you check the circumstances of your life against the narrative
      • If the circumstances still match, then you can reassure yourself that everything is basically OK and move on
      • This is a semi-stable source of human welfare
        • What are you referring to as "welfare", here?
      • The benefits of narcissism appear even when you're operating near the top of Maslow's hierarchy
      • Benefits appear even when you're not achieving your self-imposed goals
    • The problem is that narcissism has some spectacular failure modes
      • People can build their identities around a bad narrative
        • Narrative that requires people to engage in constant destructive action to maintain
      • People can allow their narrative to drift arbitrarily far from reality, so long as they have a shred of evidence or past accomplishment to allow them to maintain their narrative
      • Narcissistic injury
        • Narrative is always going to be somewhat in conflict with reality
        • These conflicts have the potential to be damaging
        • At best, they nullify the benefits of having a narrative
        • At worst, if too much of your identity is wrapped up in the narrative, a successful attack on that narrative will leave you without a sense of self
        • There are those who believe that many of the problems with the developed world are due to people dealing with narcissistic injury poorly
        • The conventional answer to dealing with narcissistic injury is to be less narcissistic
        • However, it may not be possible to be a functional human being operating in modern civilization without a certain baseline level of narcissism
      • The task is to be able to construct one's identity in a way that delivers the benefits of narcissism without its weaknesses
  • Three-pronged approach to creating identities
    • Build identity around worthy, virtuous stories, rather than crappy stories incompatible with utopian existence
      • Whose utopia? Every utopia is a dystopia when looked at from a particular perspective, and every dystopia can be reinterpreted as a utopia for those who are privileged.
      • If you're suggesting that some narratives are incompatible with utopian existence, then you first have to describe your utopia
    • People have to be taught to keep one eye on reality so that their constructed identities don't diverge arbitrarily from the facts of the world
    • Human relationships and social institutions have to be constructed to reinforce identities rather than tearing them down
      • No, no, no, a thousand times no! I do not want more "safe spaces" for special snowflakes. I want the world to challenge identities. You should have to justify yourself to the world. The world should challenge you. It should not be a given that everything is okay and that you are a good person.
  • My thoughts:
    • I question the very premises of this article
    • I do not think that the problem with the world is one of "affluenza". I think the problem with modern industrial civilization is
      • We tell people that they have to perform economically meaningful, "productive" work in order to be a worthy human being
      • We continually remove opportunities for people to perform this work by outsourcing it to robots or overseas countries
      • Predictably, this results in alienation, as people either think worse of themselves, or recognize the hypocrisy in a society that constantly pushes for the importance of work, while making it harder and harder to get a job
      • This isn't a problem with people's individual identities, it's a problem with society, and telling people to construct better identities for themselves won't save them from the social pressure that tells them of the importance of work
    • In addition, I think this article falls into the rationalist trap of overthinking things
      • Why should I spend time forming an identity rather than learning skills?
      • Heck, why should I even bother spending time reading this article? I could be writing Python or learning Go (the board game or the programming language, I feel like either would be a more productive use of my time)

The Story of the Self

  • Is our identity a real and definite thing, or is it just a useful model to describe a complex set of sometimes contradictory things about us?
  • It's neither
    • The purpose of identity is not to serve as a predictive model about what we or someone else is going to do
    • Identity is about aesthetics, not making useful predictions
  • Maintaining an identity narrative about yourself allows you to appreciate your own life in the same way that you appreciate stories
    • This is good because people appreciate stories in a more powerful way than they appreciate facts about the real world
      • Buh... wha?
      • I don't identify (pun fully intended) with this at all
    • But what if you don't appreciate stories? What if you appreciate the real over the fictional?
  • Before we can talk about constructing narratives for ourselves, we need to talk about narratives more generally
    • In the West, the dominant form of fiction has been the Literary Novel
    • About normal people doing normal things, and having normal feelings
    • Explores those feelings in detail
    • This didn't sit well with geeks/nerds etc. who wanted more exciting stories, so they migrated off into "genre fiction"
    • Only, when re-imagining those stories, they ended up reinventing literary novels in the form of "coffeeshop AUs" or "high-school AUs"
      • Okay, really? You're going to draw generalizations about nerd culture from fanfic?
      • Eventually all the cool parts about worldbuilding and setting and narrative get rounded off and all that remains is exploring the interactions between the characters themselves
        • Uh... okay, maybe you need to stop reading bad fanfic and read fanfic that's competently written
      • This is weird
        • It's weird because it's wrong
    • Why is it that we dislike people talking about boring emotional drama, but are willing to put up with boring emotional drama in novels and stories?
      • Who's this we you're talking about? I don't tolerate boring emotional drama in either my novels or my stories - this is why I don't like reading Dickens or Austen (I read Jack London and Hemingway instead)
      • A story, simply by virtue of being told as a story, is acknowledged as a story worth telling
      • Stories and narratives are rituals designed to allow us to process events in a more detached, contemplative, and emotionally-responsive way
      • Someone being a protagonist in a story is a signal that their story is worth caring about, more so than those of the other characters
  • People are capable of constructing narratives about themselves, and this is awesome
    • It makes our experience less mundane
    • Allows us to rise above the eternal scramble for small successes
    • Allows us to define ourselves by more mythic concepts
      • Yes, at the cost of severely harming our ability to perceive the world as it is, rather than the world as we wish it to be
      • This person is advocating chuunibyou - a regression to adolescence where we all pretend to have special powers and are the protagonists of our own oh-so-special stories. Fuck that noise
  • Sidebar:
    • Since narratives are about aesthetics rather than truth, we can have elements in our narratives that aren't empirically valid propositions
    • Example: elemental affiliations
      • This is meaningless on a propositional level
      • But people incorporate it into their identities all the time
      • If you claim to be affiliated with fire, you claim to take on the traits of fire in some not-very-well-defined way
      • This claim allows you to build a framework around the way you relate to yourself and others, and allows you to find resonances between events in the world and events in your own life
        • See, that's exactly the danger. Those resonances are false. They are coincidences. You are reinventing the most dangerous parts of religion, and claiming that it's a good thing
  • My thoughts:
    • Why do we consider this person to be a rationalist?
    • They are advocating anti-rationality - they are literally saying that you should blind yourself to reality in selective ways in order to build narratives that make you feel better
    • This strategy is often spectacularly effective, right up until it fails spectacularly - the annals of history are littered with people who believed grand narratives about themselves, got other people to believe those grand narratives, and ended up ruining not just their own lives but the lives of thousands, millions, or even billions of other people
    • I don't dispute that narratives are powerful. My contention is that they're too powerful - because of our inherent biases, they're too effective at blinding us to reality and keeping us on a path that leads to disaster

The LARP of the Covenant

  • Balioc is, among other things, a LARPer
  • Participates in "theater LARP"
    • Each player is assigned a character and provided with a "character sheet"
    • Character sheet mostly has narrative explaining the character's background and goals
    • "Like a play, but without script or audience"
      • Okay, so is "theater LARP" just a geek term for "improv"?
  • LARPs can hit really hard and people get really into them
    • Individual LARP experiences can have profound effects on participants
    • People obsess over roles and roleplay experiences
    • See facets of the real world through social or metaphysical lenses derived from games
      • See, this is exactly the thing I was warning about in my commentary from above - this exactly the sort of thing you do not want to do if you're interested in seeing the world as it is
    • Sometimes people shift their own personalities to better match the personality of a character they particularly identify with
      • That's normatively awful - you're letting stories bleed over into the real world
    • While this sort of thing happens with every narrative experience (TV, books, etc.), it happens much more often with LARPs, and that's surprising
      • Why is it surprising? Books and movies allow you to remain somewhat detached from the narrative, whereas LARPs immerse you in it as a participant
      • LARPs are a young medium
      • Many of the narratives are unpolished
      • Unlike other forms of media, LARPs lack descriptive realism
      • Yet, despite the unpolished narratives and lack of descriptive realism, something about LARPs draws people in
  • The most common explanation for why people get into LARPs is that it's escapist fantasy
    • You can be the wizard or hero or whatever
    • But why is this special to LARPs over other forms of interactive media?
    • When people leave LARPs they don't talk about the escapist fantasy aspects - they talk about the emotions and relationships
      • I would be interested to see a comparison between LARPs and tabletop roleplaying groups
    • The stickiest parts of LARPs are the parts that are awkward simulacra of everyday life - why?
  • Real life may be full of drama and wonder, but it doesn't feel like a story, from the inside
    • But a LARP is a story, and your character is important, just by virtue of being in the story
    • The other players in the LARP acknowledge your character as important, by virtue of also participating in the same LARP
    • The LARP also provides a shared context that allows you to communicate your narrative self with other participants in the LARP
    • In other words the LARP is pre-packaged psychological validation
  • The problem with LARPs is that they're totally fake
    • Well, that's one of the problems, and I'd argue that it's not even the most important one
    • The identity that's being honored isn't your identity
    • But maybe, if we can turn reality into a LARP, we can get the benefits of LARPing without the disconnection from reality
  • My thoughts:
    • I'm not sure it's possible - you're trying to bridge two things that are diametrically opposed to one another, and hoping that you can pull it off with a technical fix
    • It's called the narrative fallacy for a reason. Narratives, by their very nature, impose a filter on reality, and cause you to ignore and rationalize things that don't fit with the narrative
    • You don't get to choose ahead of time what your narrative filters out

What's Your Type: Identity and its Discontents

  • Identity can be viewed as a generalization of market segmentation
  • People crave to be told what sort of a person they are, even though the categories they're divided into may be functionally meaningless
  • Personality quizzes help people identify a place in the world for themselves
  • Heidegger's concept of dasein, which translates to "being", is better understood as "identifying as"
  • Dasein is all about exaggerating a particular aspect of yourself in order to fit in with a particular group
  • The problem with dasein is that it makes it impossible to assess merit when questions of identity are in play
  • It becomes impossible to state objective truths because the things being stated are inevitably caught up in the identities of the people saying them
  • Do we want to try to channel identifying-as into safe forms of pretend-play or do we want to try to get rid of it entirely
    • I suspect that balioc and I have exactly the opposite answer to this question

Responsa

  • You don't want to either neuter or eliminate people's drives to build identities
  • Identity is necessary for hedonic well-being
    • Identity is important insofar as it gives you a community to belong to
    • The problem is that it then makes it impossible for you to meaningfully criticize that community without immediately triggering a whole host of built-in tribal reflexes
  • If you don't rely on identity construction as a major part of your introspective well-being, you're suffering from major introspective failure
    • Or, on the other hand, like the majority of people out there, we're not overanalyzing every little thing about ourselves and, as a consequence are actually having fun instead of wondering whether we're having fun
  • Identity is a major cognitive hazard, and one of the challenges is to find out how to get the benefits of having a strong identity without its downsides
    • balioc talks about the dangers of building strategies around unresolvable contradictions, and here they are attempting to do just that
  • To balioc, identity is so tightly coupled to human flourishing it may as well be a terminal value
    • Here we go with the undefined terms again. "Human flourishing" - what is that? What does a flourishing human look like?
    • I suspect that balioc has a very particular view of what a flourishing human looks like and can only vaguely imagine a "flourishing human" that doesn't have a strong personal identity
  • Identity is what defines the self and gives it form
  • We consider self-actualization as the highest of Maslow's hierarchy of needs because the ultimate goal is to be able to look upon yourself from the outside and revel in the concepts that give it shape
  • balioc's project is to ensure that the future that we construct is one in which people who need a strong identity for themselves can thrive
    • See, they should have opened with this - saying, "None of what I'm saying may apply to you, but there is a tribe of us out there, and we would like to have our needs acknowledged," rather than pretending that everyone ought to have a strong sense of narrative about their own lives
  • We need ways for people to create externally validated identities without losing touch with reality or opening themselves up to undue narcissistic injury
  • This requires cultural engineering to set up incentive structures, compelling symbols and self-reinforcing behavior norms
  • Even though this is extremely difficult, it's worlds easier than psychologically engineering people to not require an identity in the first place
    • Is it? I'm not actually sure about that. It might actually be easier to biohack people (ems, whatever) to not require identity than it would be to modify culture to make identity formation more wholesome

2017-12-18 RRG Notes

Slack

  • Slack is the absence of binding constraints on behavior
    • Slack gives you margin for error
    • Slack allows you to relax and trade
    • Slack gives you optionality - you can avoid bad trades and wait for better deals
    • Slack gives you room to invest for the long term
    • Slack gives you the room to stand by your moral code
  • Related Slackness
    • Slack in project management is the amount of time a task can be put off without causing further delays
    • A slacker is one who has a lazy work ethic, or otherwise does not exert maximum effort
  • Out To Get You and the Attack on Slack
    • The world is filled with demands on your time, money and attention
    • If you accede to too many small demands, you'll run out of reserves
    • Acceding to a single large demand often makes the demand expand to completely take up your reserves
    • Sometimes this is okay - sometimes you have to exert maximum effort
    • But okay doesn't mean it's sustainable
    • Most times are ordinary, so it's fine to make an ordinary effort
  • "You can afford it"
    • People like to tell you that "you can afford things"
    • But everything you can afford takes a bite out of your reserves
    • If you accept too many demands, you quickly run out of slack
    • Instead of asking whether you can afford it, ask whether it's worth it to you
    • Affordability tells you what you can't have, not what you should have
  • The Slackless Life of Maya Millenial
    • Things are really bad when the presence of slack in your life is viewed as a defection
    • The world sees slack as insufficient dedication and loyalty
    • If you're not putting your all into your image, your career, or your cause, you're a "failure"
  • Make sure that you have slack under normal circumstances
  • Respect the slack of others

Confessions of a Slacker

  • Slack is the distance from binding constraints
  • Slack can be expressed as the spare capacity of your scarcest resource (sketched in code after these notes)
  • Slack disappears when the spare capacity of any single resource goes to zero, regardless of how much of everything else you have
  • Maintaining slack requires balancing all important resources, making sure to shore up the scarcest resources first
  • Concentrate on getting the resources that you need, not the resources that are easiest to obtain
  • Unfortunately, for most people the scarce resources are the ones that they're the worst at acquiring, and probably have aversive thoughts about
  • However, ignored constraints don't go away - they're still binding you
  • When no single resource is very scarce, you can figure out exchange rates between resources to help shore up resources that you're drawing down faster
    • Example: how many dollars is an hour of free time worth?
  • Remember to account for intangible resources, like goodwill, when tracking slack
  • The main advantage of slack is that it gives you optionality - the ability to change your plans and explore new paths
  • Be very careful of highly competitive environments - they incinerate slack
    • "You waste years by not being able to waste hours." -- Amos Tversky

The Rules About Responding To Call-Outs Aren't Working

  • Privileged people ignore marginalized people
  • Social justice spaces attempt to fix this with formal rules about how privileged people should respond to marginalized people
  • Formal rules
    • You should listen to marginalized people
    • When a marginalized person calls you out, don't argue
    • Believe them, apologize, and don't do it again
    • When you see others doing what you were called out for doing, call them out
    • I don't even agree with the formal rules here.
    • Replace "marginalized people" with "House Unamerican Activities Committee" or "Central Soviet"
    • I especially vehemently disagree with the obligation to call out others for engaging in the behavior that one was called out for
  • It is not possible to follow these rules literally because:
    • Marginalized people are not a monolith
    • Marginalized people have the same range of opinions as privileged people
    • When two marginalized people tell you different things, it's logically impossible to follow both directives
    • Objectifying marginalized people doesn't create justice
  • Since the rules are impossible to follow, what actually ends up happening is
    • One opinion is lifted up as "the opinion of marginalized people"
    • "Listening to marginalized people" consists of agreeing with that opinion
    • Disagreeing with that opinion is equated with "talking over marginalized people"
    • This results in fights over who is the "true voice" of marginalized people
  • These rules also leave marginalized people open to sabotage
    • People use language of call-outs to sabotage effective leaders
    • Rules about shutting up and listening to marginalized people make it very difficult to counteract lies and distortions
  • Rules are also exploited by abusive people
    • Abusive person convinces the victim that they're a marginalized person
    • Rules about listening to marginalized people prevent victim from asserting their rights
    • Abusers can send victims into depressive spirals by claiming that everything that brings the victim joy is a symbol of oppression
    • Abuser may separate victim from friends or allies by spreading rumors about oppressive behavior
    • Rules that say that some people should unconditionally defer to others are dangerous
  • Rules lack intersectionality
    • No one experiences every form of oppression
    • Call-outs are often between people who are oppressed in different ways
  • Rules prevent groups dedicated to one form of marginalization from allying with groups dedicated to other forms of marginalization
  • Need to "start listening for real" in order to do better by each other and work through conflicts in a substantive way

A Lament on Simler, Et. Al

  • Critique of The Leaning Tower of Morality
    • Leaning Tower of Morality claims that morality is a result of group selection, rather than individual selection
  • Conflates evolutionary self-interest with the ordinary usage of the term
  • Obscures the fact that people aren't self-interested, it's genes that are self interested
  • It's entirely possible for organisms to display altruistic behavior, while still having self-interested genetics
  • The evolutionary origin of altruistic behavior confuses us, because we think that they're "fake" in some way if they're the result of evolution
  • But that's confusing outcome and process - it's possible for a selfish process to have altruistic outcomes
  • When we talk about genetic selfishness, we're talking about the process that results in our behaviors; when we talk about ordinary selfishness, we're talking about the behaviors themselves
  • Even though we use the same word for it, it's not the same concept
  • The opposite also applies
    • Something that is rational for your genes need not be implemented in the rational part of your mind
    • It's possible for "crimes of passion" to be "genetically rational" while involving no rational thought whatsoever
  • Disambiguate what the right thing is from why you're set up to believe that's the right thing
  • There is a complete break between ourselves and the process that created our minds - our morality stops at our thoughts and actions
  • Human morality cannot and should not encompass evolution
  • A neurotic desire for approval
    • The world outside of our minds is fundamentally amoral
    • Why do we care that there is no "true" altruism that's universal?
    • Why are we so desperate to justify everything to evolution?
    • Evolution is not God, nor should it be
  • The notion of selfishness and unselfishness are human constructs
  • In evolutionary terms, what spreads is what spreads - that's it
  • We assign moral values to mechanisms by which genes spread themselves and then get confused when evolution seems to work in ways different from our moral intuitions

The Cactus and the Weasel

  • "Strong views, weakly held" describes hedgehogs better than foxes
  • Foxes are better described by "weak views, strongly held"
  • If this is surprising to you, it's because you're more familiar with the degenerate forms of foxes and hedgehogs
    • Cactus - degenerate hedgehog - strong views, strongly held
    • Hedgehog - strong views, weakly held
    • Fox - weak views, strongly held
    • Weasel - degenerate fox - weak views, weakly held
  • True foxes and hedgehogs are rare and complex individuals
  • While both foxes and hedgehogs are capable of changing their minds in meaningful ways, weasels and cacti are not
  • Views and holds
    • The basic distinction between foxes and hedgehogs:
      • Fox: knows many things
      • Hedgehog: knows one big thing
    • Many things refers to weak views
    • One big thing: refers to strong views
    • To get a hedgehog to change their mind, you have to offer a single big idea that is more powerful than the big idea they already hold
      • However, this isn't as hard as it sounds because a hedgehog's "big idea" is anchored by a relatively small number of core foundational assumptions
      • Break one or two of the assumptions and the hedgehog will update their entire worldview
    • To get a fox to change their mind, you need to undermine beliefs in multiple ways in multiple places
      • Fox ideas are anchored by multiple instances in multiple domains
      • Connected by narratives and metaphors
      • To get a fox to change their mind, you have to undermine many fragmentary beliefs in many places
      • While each individual belief may be easier to undermine, there are a lot more beliefs that you have to attack
      • Not much coherence to exploit
    • It's a lot easier to make a hedgehog change their mind wholesale - if you can undermine the right one or two beliefs, you'll effect a conversion almost overnight
  • The strength of views
    • Beliefs aren't held as disconnected sets of atomic propositions
    • Most beliefs are organized into clusters that we refer to as "views" - each view corresponds to one or more problem domains
    • If all views are connected and relatively consistent, then we can refer to the connected set of views as a "world view"
    • Both foxes and hedgehogs have views, but only hedgehogs have world views
    • A view is a set of interdependent beliefs
      • Some beliefs are axiomatic
      • When they're undermined, the view collapses
      • Other beliefs are peripheral, and can be discarded without discarding the view
    • A strong view is one that encompasses a large set of beliefs, and is defended with the most literal interpretation imaginable
    • A weak view is one that encompasses a few critical beliefs and is defended with the most robust interpretation available
    • A strong view has some advantages:
      • Powerful - to the extent that the view is true or unfalsifiable, it is very useful
        • Offers practical prescriptions about how to live your life
        • Allows you to make detailed predictions about the future (which may or may not be correct)
      • Tedious to undermine, even though it is lightly held
        • Opponent must pass an ideological Turing Test in order to attack the view
        • Falsification must operate within the epistemology of the view
        • In some ways, your opponent has to know more about the view than you do
    • Unfortunately, for most views today, we have no idea what the critical central beliefs are and what the peripheral beliefs are, even for our own views
  • Changing your mind
    • We change our mind easily when dealing with isolated, atomic, peripheral beliefs
    • When people talk about changing minds being difficult, they're really talking about changing views
    • Changing a view is like replacing a program with another program
    • Changing a world-view is like replacing one operating system with another
    • Strong views represent a high sunk cost
    • In order to change a strong view, you need to:
      • Learn new habits
      • Learn new patterns of thinking
    • Order matters - people learn new habits first, and then change their patterns of thinking to match their new habits
    • Example: debating creationists
      • Trying to get a creationist to change their mind through logical debate is pointless
      • Start by getting them to use innocuous tools and habits whose effectiveness depends on evolutionary thinking
      • Then, once they understand the power of evolutionary processes, they'll be ready to change their minds
    • Paradoxically, this means that it's easier to change a hedgehog's mind than a fox's
      • Foxes don't have as much deep expertise, so it's more difficult to get them to change their mind by altering behavior
  • Strong Views, Weakly Held
    • Cultivate the ability to change world-views very fast
    • It is much easier to detect when one of your core beliefs has been undermined than it is to determine what the core beliefs are in the first place
    • This is because hedgehogs operate on habit; don't think about the fundamental logical structure of their views day-to-day
    • Therefore, an opponent can look at the logical structure of their views from the outside and see weaknesses that the person holding the view may be unaware of
    • A good hedgehog will realize that their opponent is a teaching aid and will cultivate the following skills:
      • Recognize when their world-views have been undermined
      • When a core belief in a view has been undermined, assume that all other beliefs in that view have been undermined as well
      • Build a new view, on top of new habits; treat old views and beliefs as false until proven otherwise
    • Beginner hedgehogs don't even realize when their world-view has been undermined
    • Intermediate hedgehogs realize that their world-views have been undermined but don't automatically doubt all of the beliefs in the worldview
    • Advanced hedgehogs realize that world-views have been undermined and doubt old beliefs, but can still sometimes follow old habits even as they're building a new world-view
  • Weak Views, Strongly Held
    • Foxes don't have strong-views, as detailed above
    • They have superficial views about a variety of fields, rather than a strong view of a single field
    • However, foxes hold their views strongly, in that they accept or reject locally fundamental beliefs based upon justifications from other domains (often driven by analogy or metaphor)
    • Views are anchored by many independent justifications from many different domains, rather than many details from a single domain
    • If the basic challenge for the hedgehog is switching world-views and updating all of the beliefs associated with that world-view, the challenge for the fox is to switch quickly from one pattern of organization of beliefs to another
    • When a belief has value in many domains, a refutation in a single domain doesn't necessarily mean the view is rejected
    • The difference between foxes and hedgehogs is what they do with domain-independent truths
      • Hedgehogs build totalizing world-views
      • Foxes build a "refactoring mindset"
        • What is a refactoring mindset?
  • Heuristic and Doctrinaire Religion
    • The real difference between foxes and hedgehogs is how they operate when they're outside their "home" domain
    • There are no pure generalists, nor are there pure specialists
    • Everyone is T shaped, but we emphasize different parts of the T
    • Hedgehogs are fat-stemmed Ts; explore world in a depth-first fashion
      • Start with home domain, pick the next topic, understand that topic completely, pick the next topic, etc.
      • Hedgehogs have thoroughly and repeatedly read a few books, and have completely ignored others as irrelevant
      • Hedgehogs rely a great deal on home-domain knowledge and prefer to apply that knowledge by applying abstraction and reasoning based on abstractions
      • Instead of using ad-hoc metacognition, they emphasize building systems
      • Hedgehog religions are highly doctrinaire, with strong normative rules
    • Foxes are fat-bar Ts; explore world in a breadth-first fashion
      • Foxes don't rely too much on home-domain expertise, instead relying on ad-hoc metacognition
      • Foxes have skimmed a lot of books, and read a few completely
      • Apply tools and techniques from a variety of domains to the problem until they find something that works or seems to work
      • Fox-type religions have lots of locally interpretable doctrine, and rely on participants using their own meta-cognition to pick out a belief-set that works for them
    • Both foxes and hedgehogs have advantages and disadvantages
      • Foxes
        • Foxes are great at thinking in a new domain
        • Can come up with analogies relating problems in the new domain to problems or solutions in the old domain
        • However, fox analogies often operate at the surface level, and aren't very useful as guides to action
      • Hedgehogs
        • Hedgehogs are a lot slower to get started in a new domain, because they have to come up with a systematic world-view
        • However, once that systematic world-view is built, actions become automatic and habit-driven, allowing them to operate much more quickly
    • So, given that hedgehogs have deep domain knowledge at home, and are able to outcompete foxes once they've built a systematic understanding of new domains, is there any advantage to being a fox?
  • The Tetlock Edge
    • The one edge that foxes have is prediction
    • Tetlock's data shows that foxes tend to be marginally less wrong at predicting the future than hedgehogs
    • Where does this advantage come from?
    • Foxes eschew abstraction and prefer analogy, metaphor and narrative
    • Foxes are constantly doing meta-analysis with unstructured ensembles
    • Abstraction can grow into doctrine - by eschewing abstraction, foxes render themselves immune to being locked into a particular way of thinking
    • The lack of abstraction prevents foxes from forming definite views of fields they haven't personally experienced
    • Foxes are better able to deal with disparate and contradictory pieces of data, whereas hedgehogs are more likely to discard contradictory data as noise
  • The Fox/Hedgehog Duality
    • Foxes don't throw away new data once an inductive generalization has been achieved
    • Hedgehogs have more rigid models and throw away data that doesn't fit their worldview
    • This means that foxes slowly gain an advantage as they accumulate data and can find patterns that hedgehogs miss
    • However, it takes a long time for foxes to gain this advantage and in the meantime, hedgehogs can execute faster
    • Nobody can be both a fox and a hedgehog - you only have a limited amount of time and computational capacity
    • The big challenge, however, is to avoid falling into the degenerate forms of foxes and hedgehogs
  • Dissolute Foxes and Hidebound Hedgehogs
    • Strong views, strongly held: dogma
      • Cognition without meta-cognition
      • Elevation of fundamental beliefs to unfalsifiable sacredness
      • Hedgehogs that never change their world-view
    • Weak views, weakly held: wishy-washy bullshit
      • Meta-cognition without cognition
      • Ephemeral engagement of ideas in purely relative terms
      • Foxes that become unmoored from any kind of ground truth, and become incapable of telling truth and falsehood apart
  • Bullshit detection
    • Foxes are bullshit-resistant
      • Agnostic to detailed truths of any domain
      • If they build upon a false fact, that just causes a local revision
    • Hedgehogs detect bullshit
      • When encountering someone with an ostensibly opposed world-view, hedgehogs have to try to see how coherent the opposed world-view is
      • If the other world-view doesn't meet their standard for coherence, then hedgehogs suspect that they're dealing with an insincere opponent
  • My thoughts
    • I think foxes have advantages and disadvantages at different stages of the OODA loop
    • Foxes have an advantage in the observe and orient stages
    • Hedgehogs have an advantage in the decide and act stages

2017-12-11 RRG Notes

The Bummer Economy

  • Doing any kind of work is kind of a bummer
  • So why do people do any kind of work at all?
  • The basic law upon which the science of economics is built is that people respond to incentives
  • Money doesn't need to be "real" to be real currency
    • Money isn't a tangible thing
    • Money is a story
    • Money is a mutual promise by a group of people to do things
    • Currency is the way we keep track of these promises
    • A $10 bill is a promise for $10 of stuff
    • That $10 is backed by the US government's promise to accept dollars in payment of your taxes
    • David Graeber goes into much more detail on this in Debt: The First 5000 Years
  • Money works because people believe that promises to exchange dollars for goods and services will be kept in the future
  • Any group of people who trust each other can establish their own currency and it will be just as effective
    • And they have! Again, from Graeber:
      • In Macau, through the 1800s, the casinos established an alternative currency consisting of their gambling chips
      • In Medieval and Renaissance England, most people didn't transact in gold or silver - they used leather tokens issued by local shops
    • In fact, standardized, universal, country-wide currency is a very recent invention and for the vast majority of human history, people used local ad-hoc currencies
  • Bummer points:
    • Chores were assigned differing amounts of "bummer points" (BPs)
    • Each person's bummer points were tracked in a publicly visible table
    • Anyone could volunteer for a chore to earn its bummer points
    • If more than one person volunteered, the volunteer with the fewest BPs got the job
    • If no one volunteered, the person with the fewest BPs was assigned the chore
    • No one could opt out, because this was a military barracks
    • The relative popularity of the various chores allowed effective prices to be established (see the sketch at the end of this section)
    • A chore's payout was increased until one or two people volunteered for it
    • The establishment of a leaderboard exerted social pressure on people to volunteer for chores
    • Allowed people to load balance their work - if you had exams, you could do extra chores ahead of time to build up a buffer and skip chores
    • Allowed people to leverage comparative advantage - people volunteered for the chores they enjoyed most relative to the payout they got
    • A free-trade economy sprang up - people traded personal favors (like doing laundry) in exchange for BPs
  • The simple trick of introducing a currency produced miraculous results
    • Every chore was done by the person who hated it the least
    • People came up with new ways of helping each other
    • Publishing the list of chores turned into a fun event
  • Money is magic and you don't need to be a central banker to be a wizard
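
A minimal sketch of the allocation rule described above; the names, starting scores, and point values are made up for illustration, and the original system was of course run by hand on a visible table rather than in code:

```python
# Minimal sketch of the bummer-point allocation rule described above.
# Names, starting scores, and point values are made up for illustration.

def assign_chore(chore_points: int, volunteers: list[str], scores: dict[str, int]) -> str:
    """Return who does the chore and credit them with its bummer points.

    Rule from the notes: if anyone volunteers, the volunteer with the fewest
    accumulated points gets the job; if nobody volunteers, the person with
    the fewest points is assigned it anyway (no opting out).
    """
    pool = volunteers if volunteers else list(scores)
    worker = min(pool, key=lambda person: scores[person])
    scores[worker] += chore_points
    return worker

scores = {"alice": 12, "bob": 5, "carol": 6}       # the publicly visible table
print(assign_chore(3, ["alice", "bob"], scores))   # bob - fewest points among the volunteers
print(assign_chore(6, [], scores))                 # carol - nobody volunteered, and she now has the fewest points
```

The price-discovery step described above sits outside this function: a chore's point value is nudged upward until one or two people reliably volunteer for it.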

Familiar Finance

  • Most online financial advice is unanimous that loaning money to family or close friends is a bad idea
  • All the articles talk anecdotally about the downsides of borrowing money from or lending money to someone you know closely
  • However, they don't mention the downside of not borrowing from family or close friends
  • Example:
    • Friend had $160,000 in student loans at 7% interest
    • His parents had a similar amount in treasury bonds at 2% interest
    • By not lending the money to their son, the friend's parents (and the family as a whole) were forgoing roughly $8,000 a year - the five-percentage-point spread on $160,000 (see the sketch at the end of this section)
  • Jacob's dad, meanwhile, loaned him money at prevailing US treasury rates
  • Jacob borrowed the money, graduated from business school, got a job and paid his parents back
  • When loaning money to people, make it clear that it's not a gift
  • A loan should be a mutually beneficial investment
    • Always charge interest
    • Set a clear repayment schedule
    • Set provisions for early repayment and emergency deferral
  • The main fear is that if your friends don't pay you back, you lose both the money and the friendship
    • Don't treat lending money as a favor
    • Treat it as an investment and analyze the risk as if it were any other investment
    • Never loan money that you can't afford to lose
    • Always be mentally prepared to lose the money the moment the loan is given
    • Specify a clear contract to take the emotion out of it
  • Charging friends interest isn't exploitative
    • The loan can only exist if it's mutually beneficial
    • Charge them a non-zero interest rate that's lower than the interest rate that they would be offered from a financial institution
    • You can do this because you have more information about how trustworthy they are than a bank
      • Do you? Most people aren't very open about their financial situations, even with their friends
    • You know your friend's habits and plans, and you have a better sense of how motivated they would be to repay the loan
  • So does this mean that internet financial advice is wrong?
    • Nope, it's almost completely correct
    • Lending money requires dealing with money in a way that's honest, emotionally mature and financially sensible
    • Most people don't treat money this way
    • If you or your friends can't treat money sanely, then keeping money away from personal relationships is good advice
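
A quick sketch of the arithmetic from the example above; the rates and principal are the hypothetical figures quoted in the notes:

```python
# The intra-family loan arithmetic from the example above.
# Rates and principal are the hypothetical figures quoted in the notes.
principal = 160_000      # student loan balance
bank_rate = 0.07         # what the borrower would pay a commercial lender
treasury_rate = 0.02     # what the parents earn on treasury bonds

# Interest the family pays out minus interest it earns, per year,
# if the loan stays with the bank:
spread = principal * (bank_rate - treasury_rate)
print(f"Annual spread lost to outside lenders: ${spread:,.0f}")   # ~$8,000

# Any rate strictly between the two splits that spread between the parties.
family_rate = 0.04
parents_gain = principal * (family_rate - treasury_rate)    # vs. holding treasuries
borrower_gain = principal * (bank_rate - family_rate)       # vs. borrowing from the bank
print(f"Parents earn ${parents_gain:,.0f}/yr more, borrower saves ${borrower_gain:,.0f}/yr")
```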

Time Well Spent

  • A popular topic among rationalist writers is akrasia
    • Akrasia is whatever it is in your head that prevents you from doing the stuff you wish you would do
    • Procrastination
    • Lack of willpower
    • Poor self control
  • With so many people thinking about and writing about akrasia, why does it still exist?
  • Productivity advice comes in two flavors:
    • Scientific research on human motivation
      • Rigorous, but difficult to turn into actions or plans
    • Tips that read like horoscopes
      • Fun, but often impracticable or contradictory
  • Beeminder
    • Model of akrasia is that it's a coordination problem between current you and future-you
    • Beeminder brings your future and current selves into alignment by tracking cumulative goals
    • Tracks long-term progress with a medium-term buffer
    • Example:
      • Run 500 miles in a year
      • Beeminder would break that up into roughly 10 miles per week (500 ÷ 52 ≈ 9.6)
      • Gives you flexibility in how you break up your weekly goals
    • Penalizes you financially if you forget your goals
    • Penalties ramp up until they're painful enough to get you to do the thing
    • Prevents you from weaseling out of goals - to get out of an assessed penalty, you have to write in with an excuse
    • Also supports automatic data import from other apps, so that accountability can be maintained automatically
  • Jacob started using Beeminder to track a multitude of habits
  • But after a while logging things in Beeminder became a chore in itself
  • So he decided to create a single index that would capture his goals, fitness, education, social life and future plans and hook that up to Beeminder
  • Put down all of his activities on a spreadsheet, with the following categories
    • -2 points: every 1/2 hour spent at a screen after midnight, or awake for any reason after 1am
    • 0 points: filler activities (commute, sleep, reading blogs, etc)
    • +1 point: reading books, walking (positive things that he'd be doing anyway)
    • +2 points: important things that would get done anyway, either because of outside pressure (job stuff, cooking) or because he enjoys them that much (time with fiancee and friends, sports)
    • +3 points: important activities whose benefits are too long-term to be immediately motivating (gym, self-study, writing his blog)
    • +4 points: 1/2 hour doing an activity that primarily helps others (volunteering, helping friends)
  • A baseline of 30 points per day forces him to spend at least some time productively every week (a minimal scoring sketch follows at the end of this section)
  • Forces a weekly review (widely seen as a productivity hack)
  • Should you use this system?
    • Probably not
    • You have to be the sort of person who enjoys keeping detailed time logs
    • But you should keep mucking around until you find a system that works for you
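
A minimal sketch of the daily scoring scheme; the point values per half-hour block come from the list above, while the activity names, the log format, and the sample day are made up:

```python
# Minimal sketch of the daily scoring scheme described above.
# Point values per half-hour block are from the notes; the activity names,
# log format, and sample day are made up.

POINTS_PER_HALF_HOUR = {
    "screen_after_midnight": -2,
    "commute": 0,
    "reading_books": 1,
    "job_stuff": 2,
    "gym": 3,
    "helping_friends": 4,
}

DAILY_BASELINE = 30  # the cumulative rate a Beeminder goal would hold him to

def daily_score(log: list[tuple[str, int]]) -> int:
    """Sum the points for a day's log of (activity, number of half-hour blocks)."""
    return sum(POINTS_PER_HALF_HOUR[activity] * blocks for activity, blocks in log)

day = [("job_stuff", 16), ("gym", 2), ("reading_books", 4), ("commute", 2), ("screen_after_midnight", 1)]
score = daily_score(day)
print(score, "on track" if score >= DAILY_BASELINE else "behind the baseline")
# 16*2 + 2*3 + 4*1 + 2*0 + 1*(-2) = 40, so this sample day is "on track"
```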

Strong men are socialist, reports study that previously reported the opposite

  • Study claims that strong men are more likely to be right-leaning, whereas weak men are more likely to be left-leaning
  • Study claims that it sampled a range of people between 18-40
  • In reality, 98% of the sample were students, with a median age of 21 and a single 40-year-old male
  • The problem with evo-psych stories is that it's easy to come up with a story that explains any conclusion
  • Study shows signs of extreme p-hacking
    • Two models
    • Two outcome variables
    • Six predictors
    • Several controls
    • Several interactions
    • With that many hypotheses, it's virtually guaranteed that at least one of them will come out significant at p < 0.05 (see the sketch at the end of this section)
  • Authors don't correct p-value for the multiplicity of hypotheses tested
  • In fact, it's possible to rewrite the article to prove the opposite of the researchers' hypothesis (that strong men are more likely to lean socialist) without editing any of the data
  • The power of multiplicity is that it can "prove" any hypothesis you want it to prove
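
A rough illustration of the multiplicity problem; it assumes the tests are independent (real regression coefficients aren't), so the exact numbers are only indicative:

```python
# How quickly the chance of at least one spurious "significant" result grows
# with the number of hypotheses tested. Assumes independent tests, which real
# regression coefficients are not, so treat the numbers as illustrative only.
alpha = 0.05

for n_tests in (1, 6, 12, 24):
    p_any_false_positive = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:2d} hypotheses -> P(at least one p < 0.05 by chance) ~ {p_any_false_positive:.2f}")

# 2 models x 2 outcomes x 6 predictors = 24 coefficient tests before counting
# controls and interactions; at 24 tests the false-positive chance is already
# ~0.71. A Bonferroni-style correction would instead demand p < alpha / 24:
print("Bonferroni threshold for 24 tests:", alpha / 24)   # ~0.002
```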

2017-12-04 RRG Notes

Living In An Inadequate World

  • When doing inadequacy analysis, we need to draw a distinction between wrong guesses and false cynicism
  • If people truly don't care about the outcome of a bad equilibrium, they won't make extraordinary efforts to break out of that equilibrium
  • Even inadequate systems only have a finite amount of failure
  • Sloppy cynicism will usually be wrong, because it will explain too much - i.e. the system isn't as broken as a completely cynical view would have it be
  • Seeing inadequacy everywhere is the same as seeing inadequacy nowhere
  • The point of learning inadequacy analysis is to give yourself permission to try novel strategies, while having an understanding of whether those strategies are likely to work
    • Break blind trust in institutions
    • But also learn when institutions are likely to be right
  • Three Step process:
    • Realize that there are exploitable strategies that haven't already been used
    • Calibrate until you're not always seeing exploitability or inexploitability
    • Fine tune against reality
  • Eliezer finds out that medical competence is high-variance
    • You can't blindly trust your doctor
    • You can't blindly trust yourself either
  • When we think about inadequacy, we're deciding whether we trust society to be more or less competent than we are
  • The modest viewpoint turns competence into a social status game
    • But our beliefs shouldn't depend on what sort of person we are - our beliefs should depend on what the world is actually like
    • Modest people end up believing that they live in an inexploitable world because they're trying to avoid acting like an arrogant person
    • The true alternative to modest epistemology isn't an immodest epistemology where you decide you know better than society on all questions; it's to decide for yourself on a question-by-question basis
  • Trust, but verify
  • Realize that a system as a whole will often perform worse than any individual within the system due to misaligned incentives and communication overhead
  • It takes far less effort to identify a correct expert than it does to become a correct expert
    • Not easy, but possible for amateurs to do
    • There is no shortage of contrarians whose ideas are better than the mainstream
    • It's possible to know things that the average authority doesn't know by learning from the best, rather than the mediocre
  • When looking for exploitability, pick your battles
    • Even when you've found a problem that you think is exploitable, you might be wrong
    • Don't go after the first inadequacy you see
  • Coming up with a brand new model is something that you'll be lucky to do once or twice in a lifetime
  • Coming up with a new synthesis of pre-existing ideas is something that you'll see once or twice a year
  • Picking sides between experts when you can follow their arguments is something that you ought to be able to do quite frequently
  • Fortunately, for most day-to-day decisions, the latter is quite sufficient
  • To improve everyday thinking about inadequacy
    • Update hard every time you come across new data
    • Don't worry so much about overcorrecting, because you'll often quickly receive additional data that will prompt you to correct back
    • Bet real money on everything - the sting of losing money helps you learn

Blind Empiricism

  • Being a "fox" shouldn't preclude you from having overarching theoretical frameworks
  • You have to be ready to say that those frameworks were in error and update
  • Having a theory doesn't lock you into being insensitive to evidence
  • The ideology of empiricism is harmful when it blocks you from making hypotheses
  • Being cognizant of the outside view is useful, but you have to make sure that you're truly comparing comparable things
    • If you have a new product, it may or may not be applicable to consider related products
    • Your market and customers may or may not be exactly the same as the market you're using as a baseline
  • In truly novel situations, the outside view usually fails
  • Moreover, in many ambiguous situations, it's not actually clear which "outside view" is correct - a new product or situation is often comparable to multiple reference classes
  • In many cases, the outside view can't compete with a good model
    • Most evident in the sciences
    • Physicists build mathematical models of particles - don't say that particle x is like particle y, therefore it should behave the same
  • You need both the ability to make theories and the ability to abandon those theories when they're proven wrong by evidence

2017-11-27 RRG Notes

Authentic Signals

  • To say that people do things for signalling value is not contradictory with people doing things because of authentic desires
  • People draw a distinction between signalling with conscious intent and signalling that arises as a side effect of acting on authentic desires
  • Society disapproves of conscious signalling - looking good is okay, but if you do something just because it makes you look good, that's not okay
  • However, the consequences of a behavior are the same, whether it was done with conscious intent or not

The Best and the Brightest

  • Allegorical story describing the educational process and the process of getting a job and finding success as a series of levels in a game
  • Start with "The Maze of Tests" - grade school
    • Advance by giving the right answers
    • Figure out that the maze is designed by other people and that you don't "really" need to understand the content of the tests, if you can predict what the person designing the tests is looking for
  • Move on to the "Swamp of College Admissions"
    • Need to hire a "test prep guide" in order to do well on one of the three-letter tests
      • Do test prep guides actually work? Studies show that they're of dubious benefit
      • Going to a test prep tutor might be more about reassurance and social signalling than anything else
    • Come up with a good story about how you're benefiting society
  • Once in college, you need the "magic smoke of the right names" in order to get a good job
    • Interning at a well-known company means you'll get good offers even if you didn't really do any work
    • Need to put the right names on your resume
      • This is the part of the essay I disagreed with the most
      • Completely contradicts my lived experience - I didn't go to a "good" school; I didn't have any of the right names, and yet Amazon still reached out to me
      • Yes, who you know matters more than what you know, but what you know determines who you know
      • Also disagree about "you will do very little real work" - the interns at both Amazon and Microsoft were doing real dev work
  • Then, in order to move up in the world, you need the "magic rope of the right relationships"
    • Find your higher-ups' deepest fears and insecurities
    • Exploit them, so that you become indispensable
    • Wait as your superiors do the work of moving you up
  • The Inside Game
    • The "insiders" have a fast-track to the top
    • They're the ones that control the game
    • Once you start playing the game you can't stop
      • You can't? Are you sure about that?
    • Once you play the inside game and get to the top, you realize that there are no right answers or true stories - it's all made up by the people who've advanced farther than you in the game
  • In which a bitter rationalist doesn't really understand the real world and how to succeed in it
  • This entire essay is a combination of over-cynicism and impostor syndrome
  • Especially the part about getting job offers - sorry, but in Silicon Valley it doesn't matter whether you went to Harvard or you went to Podunk University - you go through the same interview process, answering the same questions

Play in Hard Mode

  • "Hard Mode" defends against Goodhart's Law
  • Hard mode is about working for your values, even when it would be easier to go along with what society expects
  • Hard mode is about making yourself better, even if that results in slower improvements in externally visible metrics
  • Hard mode is about being true to yourself

Play in Easy Mode

  • Easy mode is analyzing systems and finding the most efficient path to your goal
  • Easy mode strategies can fall victim to Goodhart's Law, where you lose sight of your end goal and instead become sidetracked by the proxy metric you're trying to maximize
  • Easy mode lets you get to your goals faster, but playing in easy mode doesn't make you a better person
  • Easy mode is "selling out"
  • Both easy mode and hard mode have their uses
    • You gotta know when to hold 'em, know when to fold 'em
    • Know when to walk away and know when to run
    • You never count your money when you're sittin' at the table
    • There'll be time enough for countin' when the dealin's done
    • --Kenny Rogers, "The Gambler"

Half-Assing It With Everything You've Got

  • Guilt and shame are unhealthy long-term motivators
  • If you want to be highly effective, know what your true goals are
  • Work towards those goals, and put in the minimum amount of effort to succeed elsewhere
  • Pick a target and hit it, knowing that achieving higher than the target quality is as bad as achieving lower than target quality
  • If you want to spend maximum effort, set the goal really high, and then work hard towards hitting it
  • If you're a perfectionist, be a perfectionist about the process rather than the result
    • Optimize towards achieving your goal with a minimum of effort, rather than getting a perfect outcome
  • Thinking about a particular quality target and hitting it with a minimum of wasted motion breaks you out of the false dichotomy between being a slacker and a tryhard

2017-11-20 RRG Notes

Inadequacy and Modesty

  • When should you believe that you can do something unusually well?
  • How do you distinguish between a bad idea and an insufficiently explored good idea?
  • Modest Epistemology
    • Example: Eliezer noticed that the Bank of Japan's monetary policy wasn't creating enough money
    • Why should we believe Eliezer?
    • Even if Eliezer cites experts who are on his side, why should we believe those experts rather than the experts on the other side?
    • Why shouldn't we go with the wisdom of the crowds?
  • The wisdom of the crowds (or the efficient market hypothesis) only applies in a very limited set of circumstances
    • Payouts conditional on actual performance
    • Lots of data
    • Fast feedback loop
    • The process of exploiting an inefficiency corrects that inefficiency
    • If these preconditions aren't met, then you can have more confidence in your ability to beat the market or conventional wisdom
    • If there's no way to make money from something being broken, that thing will remain broken
      • Going back to the Bank of Japan example - even though many people knew that the Bank of Japan's monetary policy was suboptimal, there was no way for them to make money from the fact
      • There was also no way for them to take over the monetary policy and reap rewards from the increased economic growth
      • Thus, Eliezer is more justified in criticizing the Bank of Japan's monetary policy, since the process by which that policy is decided does not have the preconditions of an efficient market
  • We encounter situations in which the wisdom of the crowds doesn't apply in our everyday experience
    • Eliezer found that a good treatment for Seasonal Affective Disorder was simply more light
    • But why should he have believed that more light ought to work?
    • There were no studies showing that more light would have worked
      • He noticed that vacationing in Chile was an effective treatment
      • He noticed that traditional lightboxes did not work
      • He reasoned that the lightbox wasn't producing enough light - and that putting up a bunch of 60 W LED bulbs with the appropriate color temperature would work
    • So given that this was a fairly simple remedy, why weren't there studies that examined using more light as a treatment for Seasonal Affective Disorder?
    • Humanity is not being maximally opportunistic about studying Seasonal Affective Disorder in the same way that it's opportunistic about taking advantage of arbitrage opportunities in the stock market
  • If you want to outperform society, you should look for places where society isn't being maximally opportunistic and focus your efforts on those areas

An Equilibrium of No Free Energy

  • We need to distinguish between three concepts:
    • Efficiency
    • Inexploitability
    • Adequacy
  • Efficiency
    • Efficiency merely means that you can't reliably predict the movement of a price in a market without additional information
    • Doesn't mean the price is just
    • Price fluctuations don't mean a market is inefficient - fluctuations are the mechanism by which the market integrates and communicates new data
      • The problem with taking this view of the efficient market hypothesis is that you're assuming that there aren't systematic biases among market participants
      • Moreover, efficient markets assume that everybody has unlimited capital, margin calls aren't a thing, etc.
      • In practice, irrational noise traders can earn higher returns than "rational" investors, in a situation where all parties have limited capital and the noise traders outnumber the rational investors
      • In short, "the market can stay irrational longer than you can stay solvent"
    • In a liquid market, if you can predict that prices will move in a particular direction, you can make money off that fact and your making money helps correct the distortion
      • See above - even in the most liquid market on earth, the American stock market, systematic mispricing can persist, due to the noise trader effect from above
    • The efficiency of a market is relative to your own intelligence - a market may not be efficient from the perspective of a superintelligent algorithm, but it is efficient from your perspective
      • The efficiency of a market is relative to your own intelligence and capital - if you don't have any money, you can notice market inefficiencies 'til the cows come home, but you can't do anything about them
  • Inexploitability
    • Not all things that involve money are efficient
    • The housing market has a lot more money in it than the stock market, but it's not as efficient
    • It's not easy to make money from housing prices going down - no analog to shorting stock
    • Therefore, even as a non-specialist, it's possible that you might be able to guess at which way the price of a house is going to move, which can be helpful if you're buying a house
    • Therefore the housing market is inefficient - you can predict which way prices are going to move - but inexploitable - you can't make money from that movement
    • It's the inexploitability that allows price distortions to persist
  • Adequacy
    • Adequacy is whether the low hanging fruit have been plucked in a given domain
    • Example: seasonal affective disorder
      • Eliezer found that hanging a bunch of LED lightbulbs up was an effective treatment for seasonal affective disorder
      • So why aren't there companies selling wall-size LED arrays as treatments for seasonal affective disorder?
        • You can't just throw some 2x4s and a bunch of LED lightbulbs in a box and sell that as a product
        • There's a huge jump from building something as a one-off project for yourself to scaling that up and selling it as a repeatable, cost-effective solution that's usable and installable by ordinary people in a wide variety of situations
    • So how does inadequacy arise?
      • Inadequacy arises when the incentives in a system aren't what you think they are
      • Example: academic research
        • Incentives for researchers are to publish in high-impact publications that get lots of citations
        • Incentives for grantmakers are to fund research that gets a lot of prestige
        • Neither side is directly interested in improving knowledge - that's something that happens as a side effect of the process
      • It's not sufficient for one part of a system to be broken - there have to be two parts, so that a single individual can't fix the system without a massive amount of resources
    • In the same way that inefficient markets are inexploitable, inadequate systems are unfixable
  • Inadequate systems and markets both share the property of being in competitive equilibrium
    • People are acting according to the incentives of the system, not according to what you think their incentives are
    • It's the lack of "free energy" in the system that prevents the system from changing
    • If people in the system had the resources to change their incentives, the system would no longer be inadequate
    • But because the system is in competitive equilibrium, any attempt to change the incentives of the system puts the person attempting to make the change at a severe competitive disadvantage
  • In fact the concept of inadequacy highlights an inadequacy
    • If economics were adequate at applying the concept of free markets to other fields, it would already have come up with the notion of adequacy
    • But it hasn't, so therefore economics is inadequate for discussing inadequacy
      • Except that it isn't. Inadequacy is a really fancy word for "market failure". Or maybe a Nash Equilibrium.
  • When Eliezer found that stringing up lots of LED bulbs was an effective treatment for seasonal affective disorder, he didn't go around trying to get researchers interested in it
    • It's an obvious enough idea that if it were easy to research some researcher would have already studied it
    • Therefore, the fact that it has not already been studied means that the system for researching treatments for seasonal affective disorder is inadequate
    • And because it's inadequate, it's impossible to fix without enough resources to single-handedly change the incentive structure
    • The interesting form of inadequacy is when everyone seems to know that the system is inadequate, but no one seems to have enough power to change the system
  • It's very easy to come up with an a priori inadequacy argument about anything
    • Before you conclude that the system is inadequate for solving the problem you're thinking about, do some research and see if the problem has been studied
    • An inadequate system is one that doesn't even look at the problem, not one that hasn't solved the problem yet, but is actively working on it

Moloch's Toolbox

  • 3 categories of concepts for analyzing "adequacy"
    • Decisionmakers who are not beneficiaries (i.e. principal-agent problems)
    • Asymmetric information
    • Systems being stuck in suboptimal Nash Equilibria
  • The real problem is when you have multiple "inadequate" systems reinforcing each others' inadequacies
    • Example: Parenteral nutrition for premature babies
      • Premature babies born with short-bowel syndrome need intravenous feeding while their digestive systems finish developing
      • It's best if the formula that these babies are given contains Omega-3 fatty acids
      • However, the FDA approved formula contains mainly Omega-6 fatty acids, which is possibly harmful
  • So why haven't we updated the formula to contain Omega-3?
  • We need a large scale study to prove that Omega-3 is more beneficial than Omega-6
  • The problem is that this large scale study would be a replication, and the current scientific incentives aren't aligned to make replication a priority
  • Prestigious journals demand conventional scientific and statistical methods, so scientists design their studies so that they're publishable
  • All of this is a result of inferior Nash equilibria
    • Not all Nash equilibria are identical in terms of utility
    • It's possible for some to be strictly worse than others (a minimal payoff-matrix sketch follows at the end of this section)
    • Many of our civilizational problems can be explained by the fact that we're stuck in inferior Nash equilibria
    • There's no global administrator choosing the Nash equilibrium that a system ends up in
  • Total Market Failures
    • Example: a doctor today does what ought to be three or four separate jobs handled by three or four separate people
      • Eliezer has never heard of transaction costs, has he?
    • The reason the medical profession hasn't split apart the role of "doctor" is because of regulatory capture
    • Regulators have been captured by interest groups like the AMA, who have an interest in keeping the medical profession high status
    • Publishing outcome statistics or prices creates a first-mover disadvantage, so no hospital publishes either - the first one to do so would be hurt
  • So why don't the people who know better go off and set up their own political entity and see if they can do better?
    • Existing governments have a monopoly on land
    • This monopoly is defended with force
  • The fact that a system is in equilibrium doesn't mean that the equilibrium is good
  • These equilibria may persist even though everyone wants change, because the process in question is a multi-stage one and success at the current stage depends on acceptance by the next stage
  • Something that's effective but unconventional at the first stage will never come to pass if it's rejected by the second stage in a multi-stage process
  • So why don't people change their political systems to allow for more experimentation?
  • Political systems are defined by the same inadequate Nash equilibria as everything else
  • First-past-the-post-voting is a result of the same kind of inadequate equilibrium as everything else
    • Eliezer is missing two big things: complexity and legitimacy
    • Condorcet voting may have better outcomes, but those outcomes will be harder to explain, and, as a result, will be less legitimate
    • Heck, the Electoral College is bad enough in this regard, even though it arguably serves as an important check on the power of large, urbanized states like New York and California
  • Another example of inadequate equilibria is Overton Windows
    • Everyone is ready to talk about a particular change
    • But because no one has started talking about it, everyone feels like it's taboo
    • Then someone unusually brave starts talking about it and it opens up the field to it being a legitimate topic
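
A minimal sketch of what an inferior Nash equilibrium looks like, using the hospital-statistics example above as a made-up 2x2 game; the payoff numbers are invented for illustration, not taken from the essay:

```python
# Two hospitals deciding whether to publish outcome statistics. Payoffs are
# invented for illustration: everyone is better off if both publish, but a
# lone publisher gets punished (the first-mover disadvantage from the notes).
from itertools import product

ACTIONS = ("publish", "hide")

# PAYOFF[(a1, a2)] = (payoff to hospital 1, payoff to hospital 2)
PAYOFF = {
    ("publish", "publish"): (3, 3),
    ("publish", "hide"):    (0, 2),
    ("hide",    "publish"): (2, 0),
    ("hide",    "hide"):    (1, 1),
}

def is_nash(a1: str, a2: str) -> bool:
    """Neither hospital can gain by unilaterally switching its action."""
    u1, u2 = PAYOFF[(a1, a2)]
    return (all(PAYOFF[(alt, a2)][0] <= u1 for alt in ACTIONS)
            and all(PAYOFF[(a1, alt)][1] <= u2 for alt in ACTIONS))

for profile in product(ACTIONS, repeat=2):
    if is_nash(*profile):
        print(profile, PAYOFF[profile])
# Prints both ('publish', 'publish') (3, 3) and ('hide', 'hide') (1, 1):
# two Nash equilibria, one strictly worse for both players - and with no
# global administrator to pick the better one, "hide"/"hide" can persist.
```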

2017-11-13 RRG Notes

All in All, Another Brick in the Motte

  • The original use of motte-and-bailey is intended as a critique of postmodernism
    • Postmodernists start out by saying that our perception of reality is influenced by the categories and prejudices of our society
    • They then follow this up by saying that beliefs that are at odds with scientific evidence are just as valid as beliefs upheld by scientific evidence
  • The originators of the motte-and-bailey doctrine compare this approach to a medieval motte-and-bailey castle
    • Field of desirable and productive land - bailey
    • Tower in the middle - motte
    • Economic activity goes on in the bailey
    • When threatened, retreat to the motte until the threat goes away
  • Examples
    • Religion
      • Religious group acts like God is a singular supernatural being, capable of performing miracles - bailey
      • When confronted, they say that "God" is a word for the beauty and order in the universe
    • Feminism
      • Some feminists say that you can't be a "real" feminist without supporting a controversial policy
      • When confronted, they retreat to saying that feminism just implies women deserve the same rights as men
      • When the threat goes away, they go back to advocating for the controversial policy
    • Pseudoscience
      • Proponents of pseudoscience will claim that their quack remedies are the cure for all sorts of ailments
      • When confronted, they say that they're just trading on the placebo effect
    • Rationality
      • Rationalists push all sorts of complicated concepts like Bayesian decision-making and utilitarian ethics
      • When confronted, they retreat to the claim that rationality is whatever helps you achieve your goals
    • Singularitarianism
      • Claim that there is an imminent AI explosion
      • When confronted, retreat to claiming that at some point in the future technology will be too weird to predict
  • Motte-and-bailey is a perfect mirror-image of the weak-man fallacy
    • Weak-man is like a straw-man, only you're taking a non-representative extremist position to be representative of your adversary
    • Motte-and-bailey replaces a weak but representative position with a strong but non-representative position
  • Both motte-and-bailey and weak-man result from people's tendencies to debate vague clouds of beliefs rather than specific beliefs
  • To get around them, taboo vague words and replace symbols with substance
  • Have an actual thing that you're trying to debate

On Exposing Hypocrisy

  • Hypocrisy is when an action is ostensibly about advancing belief X, but is truly about advancing belief Y
  • What are some possible responses to hypocrisy?
    • Do nothing
    • Ask the person privately about your suspicions while supporting their efforts
    • Confront the person privately and act mildly offended
    • Expose the hypocrisy publicly, acting deeply offended
    • Ask a mutual friend to ask the other person about the hypocrisy
  • If you like the person that's behaving hypocritically and want them to be less of a hypocrite, what's the best approach?
  • The safest approach is to do nothing
  • Exposing hypocrisy publicly is probably the worst approach - likely to make the hypocrite do the opposite of the thing you want them to do
  • The problem with Effective Altruism is that by exposing hypocrisy, it's just as likely to make people give less to charity as it is to make them give more
  • We should be careful about exposing hypocrisy until we better understand the effects of exposing hypocrisy

Be Comfortable With Hypocrisy

  • Why do we place so much stock in self-consistency?
    • Self-consistency is a measure of reliability - if I don't think that you'll believe the same thing tomorrow that you did today, then why should I trust you with anything?
  • Placing too much emphasis on consistency led The_Duck to completely ignore animal rights, on the grounds that anyone who ate meat could not discuss animal rights
  • We can either back down from high moral ideals or be more comfortable with hypocrisy

Ditch The Word Hypocrite

  • Calling someone out for hypocrisy is too meta
    • Completely ignores the truth or moral value of their actual argument
    • A charge of hypocrisy can be valid regardless of the actual facts of the matter
    • If someone is saying P while acting as if !P is true, one of their beliefs is false, so just attack the false belief
  • Charges of hypocrisy discourage updating and nuance
    • The best way to avoid hypocrisy is to say nothing substantive at all
    • Knowing that you can be called out for hypocrisy prevents people from changing or updating their views
  • Calling a group out for hypocrisy reduces the intellectual diversity of the group
    • Every group will have a range of opinions
    • If there are two members whose opinions diverge sufficiently, the group as a whole can be called out for hypocrisy
    • Prevents people from associating with people who hold less defensible versions of their views
  • Ambitious goal-setting and self-improvement can look like behavioral hypocrisy
    • Accusing people of hypocrisy just encourages low standards
    • It's hard to talk about what virtues you want your community to have without pretending to have those virtues, even if you don't have them yet

2017-11-06 RRG Notes

Gears in understanding

  • When looking at a model, it can be useful to ask how deterministically interconnected the variables of the model are
  • How do you know whether your model has this property?
    • Is the model falsifiable?
    • If the model were falsified, how much and how precisely can you infer other things from the falsification?
    • How incoherent is it to imagine that the model is accurate but a variable could be different?
    • If you knew the model were accurate, but you forgot one of the variables, could you rederive it?
  • Example: gears in a box
    • If you have gears in an opaque box, then you can hardly make any predictions about the system, and your predictions aren't well constrained
    • If you can see the gears in the box, then you can make many more predictions and learn more about the system when your predictions are falsified
  • Example: Arithmetic
    • There's a tension, in practice, between memorizing and drilling algorithms that let you compute answers quickly and "really understanding" how those algorithms work
      • Is there really, though? Maybe you just don't spend enough time memorizing and drilling.
    • This tends to devolve into debates about what it means to truly "understand" a mathematical algorithm or concept
    • Valentine is proposing that we should define understanding in terms of how well the student knows the component parts of the model and how well the student is able to re-derive missing parts if they forget
  • Example: Valentine's Mother
    • If Valentine tells you that their mother likes history, that's just a random, unconnected fact in your mind
    • But for them, that fact can be derived from their mother's other interests
    • A world in which everything else was the same, but their mother did not like history would be confusing for them in a way that it wouldn't be for us
    • Their model of their mother has more deterministically interconnected pieces than ours does
    • Getting to know someone can be interpreted as "adding gears" to our model of them
    • Models can be viewed on a continuum between collections of unrelated facts and axiomatic systems where everything is a logical deduction
  • Example: Gyroscopes
    • Most people think that gyroscopes behave in a non-intuitive fashion
    • For most people, it seems coherent to imagine a world in which physics works exactly the same way, except that when you suspend one end of a gyroscope, it falls like a non-spinning object
      • Well, yes, but that's because most people's day-to-day experience of physics doesn't include relatively high angular momenta
      • When you look at a still object and a spinning object, it's not immediately obvious why spin should matter in the object's behavior
      • I really wish that rationalists would stop taking things that aren't intuitive and saying that they ought to be intuitive
    • The nice thing about physics is that everything is interconnected
      • Falsifying any part of physics leads to updates about all of the other parts
      • It is incoherent to imagine all of physics as it currently is, with one or two parts working completely differently
    • The interconnectedness and determinism of the model tracks a true thing about the world, which is why we want our models to be more interconnected and deterministic
  • Gears-ness is not the same as goodness
    • Interconnectedness should not substitute for accuracy - if you have a model that's more vague but also more accurate, go with that
    • Models should also inspire ways of understanding that are useful
    • Most of our models don't connect all the way down to physics, so it's all right if not every part of your model can be strictly derived from axioms
  • Insisting that models be interconnected and deterministic is a powerful tool for cutting through confusion
  • My thoughts
    • Why do rationalists come up with new terms for things? Why can't we call "gears-ness" "determinism" or "interconnectedness"?

In Praise of Fake Frameworks

  • Valentine uses a lot of "fake frameworks" - models that are probably or obviously wrong in some way
  • Examples of fake frameworks:
    • Extroverts and introverts
      • Extroversion and introversion aren't words that refer to clear traits
      • Realize the intuition is wrong, and then use it anyway
        • Is this actually possible to do?
      • Put the intuition in a mental sandbox and see if it comes with any useful predictions, even if it's wrong
    • MBTI personality types
    • Without the intuition that led to folk frameworks for personality, no one would have done the cluster analysis that led to the OCEAN personality traits
  • Ontology is a set of "basic" things that you use to build a map or a model
    • "Point", "line" and "plane" in Euclidean geometry
    • "Mass", "position" and "time" in Newtonian mechanics
    • People get confused when they switch ontologies without noticing
    • A road is a basic unit in the ontology of a street map
    • Talking about how roads are made of atoms is true, but also unhelpful when attempting to navigate
    • In the same way, MBTI can be helpful as a high-level approximate abstraction, even though it breaks down at some level
  • It's a type error to say that an ontology is "correct" or "wrong"
    • Ontologies are basic elements from which you build models
    • It's the model that's correct or wrong, not the ontology
    • The concept of ki in aikido can be useful even if it has no single correspondent in physics
  • It should be possible to try on ontologies without hurting your core belief system
  • My Thoughts
    • Again with the terminology. Why shouldn't we call it "ontological flexibility" instead of "fake framework"?
    • Rationality is nowhere near popular enough to get away with starting its own terminology without starting to seem like a cult

The MTG Color Wheel

  • Magic: The Gathering (MTG) colors form an ontology of personality types and motivations
  • MTG has 5 colors, each with its own characteristics
    • White - peace through order
      • Tries to achieve peace by imposing order
      • Coordination, cooperation and restraint are the solution to all the unhappiness in the world
      • Angels, clerics, knights
      • Asks what is the right course of action to take, where "right" is determined by their moral or cultural framework
      • Archetypal organization: church
      • Dystopia:
    • Blue - perfection through knowledge
      • Seeks perfection, and attempts to achieve that perfection through knowledge
      • Figure out the truth and apply that truth to the fullest extent
      • Change the rules, rather than just applying them
      • Asks "What course of action makes the most sense?" Sense is determined by careful thought or expertise.
      • Archetypal organization: university or research lab
      • Dystopia: efficiency pursued without morals or limits
    • Black - satisfaction through ruthlessness
      • Desire for power and agency
      • Ability to reshape the world around it
      • Capable of cooperation and alliance, but only consequentially
      • Amoral, not immoral
      • Asks, "What will leave me best off?"
      • Archetypal organization: hedge fund
      • Dystopia: Totalitarian dictatorship
    • Red - Freedom through action
      • Seeks ability to live in the moment
      • No real organization
      • Dystopia: anarchy
      • Asks, "What do I feel like doing?"
    • Green - Harmony through acceptance
      • Most of the suffering comes from trying to fix things that aren't broken
      • The color of Chesterton's Fence
      • Asks, "How are things usually done? What is the established wisdom?"
      • Archetypal organization: hippie commune
      • Dystopia: tribe with rigid and unchanging traditions
  • In addition to being defined by goals and methods, the colors disagree with each other in meaningful ways
  • White prioritizes the group over the individual
  • Black does the reverse
  • Green sees the environment as something to be cherished and preserved
  • Black sees it as something to be exploited
  • Green sees genetics and environment as determinative, blue believes in overcoming and transcending one's origins
  • Blue sees red as impulsive and rash, while red sees blue as repressed and unfeeling
  • Red and white disagree on structure and commitment
  • Colors can also ally in interesting ways
  • White and blue both agree that structure is important
    • A white/blue agent asks how do we know what's right and good?
  • Blue and black both agree on personal growth - transcend social roles and social norms
    • A blue/black agent asks how best they can achieve their goals
  • Black and red both agree that independence is something to be fostered and defended
    • A black/red agent asks how they can get what they want
    • Embraces hedonism and "live-and-let-live"
  • Red and green agree on the importance of authenticity
    • A red/green agent asks where they are now, and where should they go?
    • Being present in the moment
  • Green and white agree that a whole is greater than the sum of its parts
    • A green/white agent asks what is fair and good
  • Colors can combine with their opposites in interesting ways as well
  • Black and white combine to form tribalism
    • Asks who's in their circle of concern
    • Scarcity mindset
    • Progress is a zero-sum game
  • Blue and red form creativity
    • Freedom combined with investigation
    • Wild artistry and mad science
    • Asks what can be achieved and what might be possible
  • Black and green both embrace the cycle of death and rebirth
    • Asks what costs must be paid to achieve the ideal
    • Belief in the virtue of evolutionary struggle
  • Red and white are the colors of heroism
    • Asks what needs to be done and what would a good person do
    • Morality and adherence to laws that may be higher than the laws of the society that one is in
  • Blue and green are the colors of truth-seeking
    • Asks what don't they understand?
    • Pursue knowledge, but disagree about what should be done with that knowledge
  • So what do we do with the color wheel?
    • Classify things and then make predictions based upon those classifications
    • Colors give you a set of associations that allow you to bias your predictions in useful ways
    • Remember - colors only speak to goals and means; every color can be good in some ways, and evil in others
    • Colors can help you understand how and why people do things, and thus predict what they'll do next
    • For example, orderly, structural interventions against someone who's primarily red are going to feel stifling to them and cause them to rebel
    • Recognize that every color is a different frame by which to interpret the world, and use the colors to better see how someone else will see what you're doing
  • My thoughts
    • This is going to be hard to keep in your head if you're not a Magic The Gathering player
    • Useful shorthand, otherwise

On Types of Typologies

  • Myers-Briggs Type Indicator
    • It's inconsistently applied and unscientific
    • Yet, Scott consistently recognizes himself in the description of INTJ
  • We can reconcile "Myers-Briggs is unscientific" with "Myers-Briggs is a useful tool"
  • Myers and Briggs had no scientific basis on which to declare that there were four factors of personality - just armchair observations and anecdotes
  • Factor analysis shows that there are actually 5 personality traits, only one of which corresponds to a Myers-Briggs dimension
  • Myers-Briggs doesn't need to give new information in order to be useful
  • Myers-Briggs can be useful for drawing conclusions in the same way that nationality can be
    • Isn't that just plain stereotyping?
    • Even if it isn't stereotyping, you can't analogize between nationality and MBTI because nations impose cultural norms in a way that MBTI does not
  • Five-Factor and MBTI are trying to do different things
    • Five-Factor is trying to give a mathematical, objectively correct breakdown of personality useful for research purposes
    • MBTI is trying to separate people into categories that are easy to think about
    • Even poorly drawn categories can be useful
  • My Thoughts
    • It seems to me that Scott is defending stereotyping on the basis of pseudoscience
    • I mean, if we're going to split people up and make pre-judgements about their personality on the basis of things that don't correspond to existing scientific truths... why not judge people by their skin color? Or their astrological sign?

2017-10-30 RRG Notes

For Signalling (Part 1)

  • Signalling is about showing off
  • The whole point of signalling is to have costs
  • Wearing an embarrassing T-shirt is a refusal to signal
    • False: Wearing an embarrassing T-shirt is a signal that you don't care about wearing embarrassing T-shirts
  • Signalling attempts to ensure honest communication:
    • Signalling is meant to be costly for liars
    • Driver's licenses - signal that you're qualified to drive
    • Job market - potential employees have to signal that they're qualified for the position that they're interviewing for
      • Job market signalling is costly for everyone, even people who are qualified
    • Clothing - signals personality
    • Doing nice things for friends
  • The world is full of people pouring wealth into things whose only purpose is to signal wealth
  • Is it really beneficial for society for people to see who is rich, who is poor, who is socially competent, who is smart, etc?
    • Yes
  • You don't serve society by failing to signal because signalling well is part of winning
  • Distinguish signalling "respectability" from working towards your cause
  • My thoughts
    • Katja misses an important part of signalling
    • Signalling has two components - your action and others' interpretation
    • Misses the fact that signalling is context dependent
      • A Prada handbag in one context can be a positive signal, but can be a negative signal in another context
    • Refusing to signal is itself a signal - shows that you have the resources to afford social opprobrium
      • It's called "fuck-you money" for a reason
    • Even effective charitable gifts have a signalling purpose - the Bill and Melinda Gates Foundation is named that for a reason
    • We should try to harness signalling, not get rid of it

There's No Fire Alarm For Artificial General Intelligence

  • The purpose of a fire alarm is to make it socially acceptable to acknowledge a fire
  • People will remain in smoky conditions if there is no alarm
  • People are bad at knowing what they believe, so they allow social pressure to override their better judgement
  • If AGI seems far away, is it even worth doing research into AGI alignment?
  • If we get 30 years' warning about aliens coming, we would start discussing what to do today - no one would advocate waiting until the aliens were six months away to start thinking about the problem
  • History shows that key technological developments seem far away until they happen
    • People on the cusp of powered flight were very uncertain about its feasibility
    • Nobel-prize winning physicists were uncertain about whether it would be possible to get energy from nuclear fission
    • Hindsight bias makes everything seem predictable
  • Progress is driven by peak knowledge, not average knowledge
    • If you're not in the field, you're going to be unaware or only dimly aware of how much the field has progressed
    • Don't assume that your impression of a field is where the field is - it takes years for knowledge to percolate out
    • Technological timelines are not easily foreseeable
  • The future has different tools - it can easily do things that are considered difficult today, and it can do with difficulty things that are considered impossible today
    • We think that AGI is decades away for the following reasons
      • Don't know how to get AGI with present technology
      • Even getting the impressive results that we do have is really hard
      • Current AI systems are still really dumb in a lot of ways
    • In machine learning, once something is possible, it's only a short amount of time before it's easy
    • The experience of researchers on the cutting edge of AI blinds them to how solutions, once discovered, become widely available
  • Most of the discourse on AGI being far away isn't being driven by actual models
    • It's easy to be skeptical of AGI, but it's hard to name things that are impossible over even short timelines
    • The confidence that people have in AGI being far away isn't thought out
  • The signs of imminent AGI will be subtle and debatable
  • AlphaGo was a signal of AGI in a way that Deep Blue wasn't - AlphaGo used techniques that are more generalizable than Deep Blue's
  • Experts will only believe that AGI is imminent if
    • They personally can see how to build AGI with current tools
    • Their personal jobs give them a sense that AGI is easy to build
    • When they are impressed with their AI being smart in a way that feels "magical"
    • However, once you have these things, you already have AGI and it's too late
  • We will never have the clear signal that tells us how many years out AGI is
  • The choice to delay action until a future alarm is reckless enough that it invokes the law of continued failure
    • Any civilization competent enough to deal with AGI once an alarm goes off is competent enough to not wait for the alarm in the first place
  • If we were serious about addressing AGI later, we'd be reviewing the state of the art in AI every six months to see if we're on some kind of threshold
  • My thoughts
    • Eliezer is assuming that the main problem is not talking about AGI before AGI arrives
    • But what if AGI doesn't show up?
    • He talks about all the times that people failed to predict technological advances, but ignores the predicted advances that didn't happen
      • Fusion power
      • Advances resulting from genetics and proteomics
      • Nanotechnology
    • It seems that Eliezer is assigning zero cost to "embarrassment" - doing a bunch of AGI alignment research that turns out to be premature is fine too
    • I'm not so sure that he's correct in this
    • Other things for which there won't be a fire alarm
      • Cascadia earthquake
      • Yellowstone supervolcano
      • Solar flares
    • What makes AGI special? It doesn't matter if I'm dead from something that kills humanity, kills North America, kills the United States, kills the Pacific Northwest, or kills me in particular. I'm dead either way.

The Order of the Soul

  • Higher cognitive functions have two modes
    • Bias world towards certain outcomes
    • Appreciation of the structural symmetry in the Universe
  • Bias and Symmetry
    • Bias: active force bending things to our will
      • Make territory fit map
      • Too much bias causes people to either freak out or delude themselves into thinking that everything's okay
    • Symmetry: fundamentally reflective tendency
      • Update map to fit territory
      • Too much symmetry leads to inaction - everything true is good; everything good is true
    • Bias in social settings helps to persuade people to see things your way ("reality distortion field")
    • Too much bias in social settings results in narcissism or borderline personality disorder
    • However we need some level of bias in order to have motivation to do things
    • Mindfulness meditation works by reducing our tendencies to bias and increasing our tendencies to appreciate symmetry
  • The order of the soul
    • The Bhagavad Gita talks about 3 tendencies that everyone has
      • Sattva - wisdom, harmony, purity
      • Rajas - activity, ambition
      • Tamas - ignorance, chaos
    • This tripartite model appears in a number of other philosophical traditions
    • Everyone agrees that the bottom is sensual appetites and the middle is self-assertion
    • However, Freud and his followers disagree with Plato and Bhagavad Gita on the top level
    • Freud sees the superego as the internalized voice of authority figures
    • Corresponds to Julian Jaynes' model of the bicameral mind
    • In contrast sattva or Plato's logos can be analogized to thinking at the meta-level
  • Good at school
    • Benjamin had trouble teaching someone goal factoring because it resembled the pointless goal-oriented exercises that one does at school
    • Schools have poisoned people's experiences with all sorts of subjects
    • This is because formal education has a curriculum and does not tolerate people's desires to go and learn topics that interest them
    • Marshmallow test is a test of the desire to pass tests, not of innate willpower
    • The primary thing that schools teach is obedience
  • Precepts and concepts
    • Most students in school aren't thinking conceptually
    • They're trying to figure out what's necessary to pass the test
    • But life isn't a test
  • My Thoughts
    • What a lot of words for such little content
    • Insight porn at its finest
    • Why is this a single blog post, instead of two or three - sections aren't well tied to one another
    • The Arendt example is just wrong - As Eichmann Before Jerusalem shows, Eichmann deliberately portrayed himself as a colorless, faceless administrator because it was, in his estimation, his best chance at acquittal
      • Papers from Argentina show that Eichmann was the leader of the exiled Nazis
      • He, along with his peers, was actively trying to figure out a way to restore National Socialism
      • As late as the 1950s, Eichmann was writing justifications for why the Final Solution was both legally and morally justifiable, using contemporary examples like the conflict between the Israelis and the Palestinians
    • More generally, I don't think that there's enough rigor in schools, not that there's too much rigor
    • Personally, I was very unprepared for college level math and college level writing because my high school was too eager to let kids 'do things their own way'

The People In My Head Who Make Me Do Things

  • It can be helpful to cluster your motivations and assign a persona to each cluster
  • Recognize that each of your motivations has a role and a purpose
  • Might be helpful for you to be more explicit about giving different parts of yourself a chance to be at the forefront
  • My thoughts
    • This is mostly an example
    • Questionable generality

Guided By The Beauty of our Weapons

  • Tim Harford - The Problem With Facts
  • Argues that people are mostly impervious to facts and logic
  • Talks about the "backfire effect" - probably not true
  • Agnotology - the deliberate production of ignorance
    • Solution to agnotology is unconvincing
    • Both sides are equally capable of telling convincing stories
  • The subtext to Harford's article is that the ingroup acknowledges facts, but the outgroup doesn't
  • But there is no weird tribe of "fact immune troglodytes" out there
  • The focus on transmission is part of the problem - implies that people will immediately be convinced once they hear the fact
  • The problem isn't that the outgroup is impervious to facts and logic, it's that there's no honest debate happening at all
  • What constitutes "honest debate"
    • Bilateral communication - two people are actually communicating with each other
    • Both people have chosen to enter
    • Spirit of mutual respect and truth-seeking
    • Outside of a high-pressure "point-scoring" environment
    • Single topic and try to stick to the topic at hand
  • Even then, it takes more than a single conversation to get people to change their minds
  • The closest thing Scott has seen to honest debate is cognitive psychotherapy
  • There's no single moment of blinding revelation
  • The SSC comments section has multiple examples of Trump supporters who said that the SSC post caused them to at least question their support of Trump
  • The problem with debate in the style described above is that it doesn't scale
  • Is there anything the media can do?
    • Treat disagreement as a need to collaborate to investigate the question further
    • Assume good faith of the other side
    • Engage in adversarial collaboration - join with someone from the other side to explore questions with a mutually agreed-upon methodology
  • Why should we engage in logical debate?
    • Logical debate is an asymmetric weapon - stronger in the hands of those who are on the side of truth
    • Rhetoric and violence are both symmetrical - whether they work is entirely dependent on who can tell the most compelling story or who can gather the most soldiers
    • Unless you use asymmetric weapons, the best you can hope for is chance - sometimes you have more persuasive stories or more soldiers and sometimes they do
  • Improving the quality of debate is a painful process
  • But, we don't have to go very far to be effective - only need to convince ~2% of people in order to flip elections
  • If you genuinely believed that facts didn't convince people, why would you even bother to study facts instead of entering a state of pure Cartesian doubt?
  • Ultimately, the other side isn't that different from you; the same facts and logic that worked on you will work on them, given time
  • The only long term path to progress is raising the sanity waterline
  • My thoughts
    • People didn't reject fascism after reasoned and considered debate, they rejected it because of a little thing called World War 2
    • Assuming good faith assumes that the other side will consent to deal with you
      • There are powerful forces on both sides pushing for group solidarity instead of cross-group collaboration
    • Scott doesn't consider the problem that your own side might sabotage your efforts - remember, Yitzhak Rabin was shot by an Israeli nationalist and Gandhi was shot by a Hindu nationalist

2017-10-23 RRG Notes

Intellectual Progress Inside and Outside Academia

  • Initial Thread:
    • Wei Dai
      • Discussion is about what is preventing academia from recognizing certain steps in intellectual progress
        • Bitcoin
        • TDT/UDT
      • Non-academics came up with both of these things; why didn't academia get there first?
    • Eliezer Yudkowsky
      • Academic system doesn't promote "real work" getting done
        • And MIRI does?
      • Trying to get productive work done in academia means ignoring all the incentives in academia pointing against productive work
      • Academia isn't about knowledge
      • People who have trouble seeing the problem with academia are blinded by:
        • Inadequate fluency with Moloch
        • Status blindness
        • Assigning non-zero positive status to academia
      • Can we get academics to take us seriously?
        • OpenPhil hasn't been very successful at getting good research on AI alignment
      • The obvious strategy is to not subject yourself to academic incentives
        • This includes abandoning peer review
        • Does Eliezer understand how dangerous this is?
      • What is Eliezer's thing against math?
      • Mailing lists work better than journals
        • Do they? Other than the one thing that Scott Aaronson did, what important research has come out of mailing lists or blog posts?
  • Subthread 1:
    • Wei Dai
      • Academia has delivered deep and important results
        • Public Key Crypto
        • Zero Knowledge Proofs
        • Decision Theory
      • We need a theory that explains why academia has been able to do certain things but not others, or maybe why the situation has gotten worse
      • We should be worried that academia is not able to make progress on AI alignment
    • Qiaochu Yuan
      • Is it correct to speak of academia as a single entity?
    • Wei Dai
      • What distinguishes the parts of academia that are productive from the parts of academia that are not
      • Is the problem that academia is focusing on the wrong questions?
      • How can we get academia to focus on higher priority topics?
  • Subthread 2:
    • Eliezer Yudkowsky
      • Things have gotten worse in recent decades
        • Maybe if we had the researchers from the '40s, we'd do better
      • OpenPhil is better than most funding sources, but they don't "see past the pretend" (what does this mean?)
      • Most human institutions don't solve particularly hard mental problems
        • Except that the ostensible purpose of universities (especially research universities) is to work on hard mental problems. If they're failing at that, maybe it ought to be addressed
    • Rob Bensinger
      • It's not actually clear that researchers from the '40s would do better given current knowledge than the researchers of today
        • Progress in QM has proceeded similarly to progress in AI
        • Progress on nuclear science in the '30s progressed similarly to progress on AI today - it only accelerated after the government threw massive amounts of money at it
        • Speaking of AI itself, people were talking about AI alignment as a potential problem as far back as 1956 - if researchers from the 1940s and 1950s were better than researchers today, then one would expect at least some level of thought about AI alignment back then - this doesn't seem to have happened
    • Wei Dai
      • Maybe human brains and the standard scientific toolbox of the 20th century are just bad at philosophical issues
      • We see a slowdown in all fields because we're waiting on philosophical breakthroughs
      • AI happens to be more affected by this slowdown than other fields
      • Mailing lists and blogs have alleviated some of the communications issues, but making progress using mailing lists and blogs requires pulling together enough hobbyists to make a difference
    • Rob Bensinger
      • Prior to 1880 human inquiry was good at exploring nonstandard narratives, but bad at rigorously demanding testing and precision
      • Between 1880 and 1980 we solved the problem by requiring precision and testing, which allowed science to get a lot of low-hanging fruit really fast
      • But the problem with requiring precision and testing is that it prevents you from exploring "weird" problems at the edge of your conceptual boundaries
      • The process of synthesizing "explore weird nonstandard hypotheses" with "demand precision and rigor" is one that's progressing in fits and starts, with islands of good philosophy cropping up scattered across various fields
  • Subthread 3
    • Vladimir Slepnev
      • What do we think about Scott Aaronson's work on quantum computing?
      • Why isn't Nick Bostrom excited about TDT/UDT?
      • Academia has a tendency to go off in wrong directions, but its direction can be influenced with understanding and effort
    • Wei Dai
      • What are some examples of academia going off in the wrong direction and getting corrected by outsiders?
    • Vladimir Slepnev
      • Isn't it easier to influence the direction that academia goes in from the inside?
    • Maxim Kesin
      • The price of getting into academia at a level high enough to influence the direction of a field is very high
    • Wei Dai
      • Is there a subset of the steps in each field that needs to be done by outsiders or newcomers?
    • Vladimir Slepnev
      • Doesn't understand the hate against academia
    • Wei Dai
      • People on LessWrong understood UDT just fine - why can't academics understand it?
      • Maybe because it's wrong, or maybe it's incoherent?
      • Maybe the fact that academics can't understand it points to a flaw in how it's being formalized or communicated
    • Vladimir Slepnev
      • Academia hasn't accepted TDT/UDT because it hasn't been framed correctly
  • Subthread 4
    • Stuart Armstrong
      • The problem is both specialization and lack of urgency
      • People found Stuart Armstrong's paper about anthropics interesting, but not necessarily significant
      • Stuart Armstrong's "interruptible agents" paper was helpful to him to learn how to model things and to present ideas
      • MIRI doesn't tell people why its topics and results are significant, or why they should care about them
  • Subthread 5
    • Eliezer Yudkowsky
      • Most big organizations don't do science
      • Most big science organizations aren't doing science, they're performing rituals that look like science
  • Counterpoint: Can academia even do AI research (much less AI X-Risk research)?
    • Academia, in general, tends to be biased towards looking for new theoretical insights over practical gains
    • If there are no apparent theoretical benefits to be had, academia tends to move on to the next most promising approach
    • Remember, it wasn't academia that was responsible for DeepDream, or AlphaGo
    • Before Google demonstrated NNs with large data sets, NNs were being dismissed in favor of Support Vector Machines
    • Another counterpoint: Chaos Theory
      • In Gleick's Chaos we see that practitioners of the emerging field of chaos theory had to fight very hard to be taken seriously
      • Chaos Theory proponents had to fight hard to convince physics establishment that they weren't "mere engineering"

Research Debt

  • Achieving a research-level understanding of most topics is like climbing a mountain
  • This climb isn't progress, it's debt
  • The Debt
    • Poor exposition: no good explanations of an idea
    • Undigested ideas: most ideas start off hard to understand, and only become easier to understand with time and the development of analogies and language
    • Bad abstractions and notation: poor notation and abstractions can make it harder for newcomers to get up to speed
    • Noise: No way to tell which papers you should be looking at and which ones you should dismiss
    • The problem with research debt is that everyone looks at it as normal
  • Interpretive labor
    • There is a tradeoff between the energy used to explain an idea and the energy used to understand it
    • One-to-many communication, in the form of writing textbooks, giving lectures, etc, gives a multiplier to the cost of understanding, since each person has to understand the information individually, but the cost of explaining remains the same (a toy cost model is sketched at the end of this sub-list)
    • In research, the cost of explaining remains the same as the group grows, but the cost of understanding increases as the amount of research increases - this leads to people specializing
    • Research debt is the accumulation of missing interpretive labor
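    • A toy cost model of the above (my own illustration, not from the essay; the cost function and numbers are made up): write total cost as E + N * U(E), where E is the author's exposition effort, N the number of readers, and U(E) the per-reader understanding effort. Because U(E) is paid N times, effort spent on distillation pays off linearly in audience size:

          # Toy model of interpretive labor (assumes, purely for illustration,
          # that per-reader understanding effort falls as U(E) = base / (1 + E)).
          def total_cost(exposition_effort: float, readers: int,
                         base_understanding: float = 100.0) -> float:
              per_reader = base_understanding / (1.0 + exposition_effort)
              return exposition_effort + readers * per_reader

          print(total_cost(0, 1000))   # 100000.0 - no distillation effort
          print(total_cost(9, 1000))   # 10009.0 - 9 extra units of exposition
                                       # cut the community's total cost ~10x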
  • Clear Thinking
    • In addition to interpretive labor, we should work on developing better abstractions and notations
    • I love that they're so glib about it. They don't seem to realize that developing better abstractions is so difficult that doing it helped win Feynman a Nobel Prize
  • Research distillation
    • Distillation combines deep scientific understanding, empathy and design to make research ideas understandable
    • Distillation requires as much effort as (if not more than) coming up with the original discoveries
  • Where Are The Distillers
    • There are no incentives or support for anyone to do distillation
    • Distillation work isn't seen as "real research"
  • An Ecosystem for Distillation
    • 3 parts
      • Distill Journal - venue to give traditional validation to non-traditional contributions
      • Distill Prize - $10,000 prize to acknowledge outstanding explanations of machine learning
      • Distill Infrastructure - tools for making beautiful interactive essays
    • This is just a start - a lot more needs to be done

Rereading Kahneman's Thinking Fast and Slow

  • Thinking Fast and Slow is great, but it isn't perfect
  • Studies haven't held up in the replication crisis
  • "Hot hand" effect seems to be real
  • Organ donation rates
    • It's much more difficult to opt-out of organ donations in countries that have organ donation by default
    • Not a checkbox on the license form
    • Not really the same form of consent
  • Prospect theory seems to be as unrealistic as perfect rationality, and its math is far more complicated (a rough sketch of the formulas follows below)
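  • To make the "more complicated" point concrete, here is a rough sketch of the Tversky-Kahneman (1992) functional forms. The parameter values are their commonly cited estimates, and the simple weighted sum below ignores cumulative prospect theory's rank-dependent weighting, so treat it as illustrative only:

      # Prospect-theory-style valuation vs. plain expected value (illustrative).
      ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

      def value(x: float) -> float:
          # Outcomes are valued relative to a reference point (here 0), with
          # losses weighted ~2.25x more heavily than gains.
          return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

      def weight(p: float) -> float:
          # Small probabilities are overweighted, large ones underweighted.
          return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

      def prospect_value(outcomes):  # outcomes: list of (probability, payoff)
          return sum(weight(p) * value(x) for p, x in outcomes)

      coin_flip = [(0.5, 100.0), (0.5, -100.0)]
      print(sum(p * x for p, x in coin_flip))  # expected value: 0.0
      print(prospect_value(coin_flip))         # ~ -30: loss aversion at work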

Devoodooifying Psychology

  • Voodoo death
    • People died after being cursed by witch doctors
    • Even if magic isn't real, if people believe it's real they'll waste away out of fear
    • The problem with voodoo death is that it seems plausible, but there isn't any evidence for it
    • Even if there's a phenomenon that can be identified as voodoo death, it's probably more complicated than people dying from the effects of their own mind
  • A lot of psychological phenomena look pretty voodoo
    • Placebo effect
      • Voodoo effect in reverse
      • Initially people were claiming all sorts of benefits to placebos
      • New studies show that the placebo effect is weak and is mainly limited to controlling pain
    • Stereotype threat
      • If people think that others expect them to do badly on a test, they perform worse on that test
      • Doesn't replicate well in large studies
    • Self-esteem
      • Popular in the mid-90s
      • Failed to replicate in later studies
    • Name preference effect - people do things that sound like their names
    • Unconscious social priming
      • People who heard the word "retirement" would walk more slowly
    • Artificial surveillance cues don't increase generosity - putting a pair of eyes doesn't improve people's willingness to donate or follow the honor system
    • Implicit association tests mostly don't work - people show implicit biases in IAT tests, and then act in an unbiased manner on other tests
    • Brainwashing is like hypnosis - it only works on people who are willing to be brainwashed to some extent
      • Moreover, most alleged cults had extremely high attrition rates - only those who found some level of fulfillment stuck with the alternative social lifestyle
  • Common thread in all of those examples - shift away from the power of the unconscious
  • Maybe our conception of the unconscious is overly broad
  • Maybe it's better to think of the unconscious like machinery
    • A car has a steering system, engine, brakes, etc, but none of those systems do anything without a driver
    • Perhaps the unconscious mind is the same way - machinery that doesn't really do much of anything without the conscious mind at the controls
    • Of course, this model doesn't preclude the biases - it's possible for your steering to "pull" in a certain direction
    • This doesn't mean that the steering has volition of its own, but rather that it's misaligned so that the inputs your conscious mind is putting in aren't getting turned into the sort of outputs it desires

Learning To Love Scientific Consensus

  • Most scientific "mavericks" were either doubted for a short period of time or were part of moderate-sized dissenting movements
  • After a few years (between 10 and 30) their contributions were recognized
  • While scientific consensus may be flawed, it doesn't ignore contrary evidence for long periods of time
  • Replication crisis
    • As it turns out, scientists actually take the replication crisis pretty seriously
    • Took about 10 years to go from something that only a few people were noticing to something that everyone was taking seriously
    • Rationalists were slightly ahead of the curve, but not that far ahead
  • Nutrition
    • Most nutrition scientists don't believe in the old paradigm of all calories being equal, and fat being really bad for you
    • If the old paradigm continues to be popular, it's because of inertia in the media and popular culture, combined with the fact that nutrition scientists haven't come up with a new paradigm to replace it
  • Social Justice
    • There have been meta-analyses since 2009 showing that Implicit Association Tests aren't a good test for bias
    • Problems with stereotype threat have gotten coverage in mainstream media
    • While there are authors who are still arguing against gender differences, they're not considered to be part of the scientific consensus anymore
    • Even genetic psychological differences between population groups are part of the scientific consensus
      • Reference gwern's everything is heritable series
    • This is evidence that it's really difficult to politicize science
  • Nurture assumption and blank-slatism
    • It took about 10 years for people to realize that genetics confounds studies of developmental outcomes
    • Big study in the American Journal of Psychiatry that shows that child abuse does not cause cognitive disability
  • Intelligence Explosion and AI Risk
    • Many AI researchers take the notion of AI risk seriously
    • While the scientific consensus hasn't fully shifted in favor of AI X-Risk being a real problem, it certainly is no longer certain that it isn't a problem
  • IQ
    • 97% of expert psychologists and 85% of applied psychologists agree that IQ tests measure cognitive ability "reasonably well"
    • 77% of expert psychologists and 63% of applied psychologists agree that IQ tests are culture-neutral
    • Even where people disagree with IQ, their disagreements seem to be limited and well-reasoned
  • The pattern we see, where ideas get tried and discarded on roughly ten-year cycles, is part of the progress of science
  • Every time Scott has convinced himself that scientific consensus has been wrong, it's either him being wrong or him being a few years ahead of the curve
  • Scientific consensus has not only been accurate, it's been accurate to an almost unreasonable degree
  • That said, we shouldn't overly respect scientific consensus
    • The only reason scientific consensus ever changes is because people go look for evidence that is against the consensus, and then present it, causing the consensus to change
    • It's also really easy to be misinformed about what the consensus is

2017-10-16 RRG Notes

The Crash of Flight 90: Doomed by Self Deception?

  • Air Florida Flight 90 crashed into a bridge on takeoff from Washington National Airport, killing 78
  • Pilots failed to turn on engine anti-icing systems, causing loss of thrust
  • Pilots positioned plane behind another jet in a mistaken attempt to de-ice that may have made the icing problem worse
  • Engine pressure ratio indicator showed normal thrust, contradicting readings from other systems
  • Co-pilot pointed out discrepancy, but pilot proceeded with takeoff anyway, leading to crash
  • The accident was the result of a pattern of self-deception on the part of the captain, and insufficient forcefulness by the first-officer in correcting that self-deception
  • This can be seen in pilot's lack of response to the co-pilot's prompts
  • Pilot doesn't respond to co-pilot until the plane is already falling
  • My reactions:
    • To be honest, this article isn't very valuable
    • Lots of speculation and going off on tangents
    • Reading the NTSB Report indicates that while the pilot's decision to proceed with the takeoff in suboptimal conditions was the primary cause of the accident, there were other contributing factors:
      • Improper de-icing procedures
      • Improperly maintained de-icing equipment
      • Excessive delay between pushback from gate and takeoff clearance
      • Unstable aircraft design that was known to be vulnerable to ice-buildup on wingtips (indeed Boeing introduced a new anti-icing system for the 737 after this disaster)
      • No way to determine that Pt2 probes were blocked by ice
    • If you want a better example of crew-resource management issues, look at Korea Air 8509
    • This is one of my objections to nonviolent communication. Sometimes a degree of violence is warranted to get the other person to acknowledge that you're disagreeing with them
    • Other things to bring up at discussion:
      • Three-Mile Island failure:
        • Pressure-operated relief valve got stuck open
        • Venting steam led to false water level readings
        • Same inconsistencies in instruments; same confusion among crew, who persisted in trying to solve a problem that didn't exist

Competent Elites

  • Elites are elites for a reason
  • It's possible that the people who are richer than you are just better, in every way
  • This is so horrifying to contemplate that nobody talks about it
  • It's not in the interest of those who are in the power elite to talk to you until you also make it into the power elite
  • Eliezer is drawing conclusions off a biased sample
  • He doesn't realize just how biased his sample is - the majority of any profession do not attend seminars and professional development conferences
  • Finally, can we equate intelligence with competence, like Eliezer does?
  • Moreover, intelligence just seems to make the inevitable screw-ups worse - look at LTCM, for example, or the financial crisis
  • Self-made elites vs. elites who've inherited their position?

Inefficiencies in the "Social Value" Market

  • How to produce a lot of social value for relatively little cost?
  • It's possible to produce a lot of value by expending lots of resources, but is there a way to get a lot of social value relatively efficiently?
  • Add liquidity where needed
    • Microloans or startup capital
    • Isn't the impact of microloans somewhat controversial, at this point?
  • Solve coordination problems
    • Kickstarter
    • Donor chains
  • Pool risks
    • Insurance
    • Hedges and contracts
  • Provide information to allow others to allocate their resource better
    • GlassDoor for companies
      • Though, GlassDoor is a poor example, because it falls into the Yelp problem - the only people who comment are those who either had very positive or very negative experiences. Moreover, GlassDoor has an inherent conflict of interest, because it gets money by recruiting for businesses.
    • GiveWell for charities
  • Restructure choice sets so that our biases work for us instead of against us
    • Richard Thaler won the Nobel Prize in economics for this
    • Use status quo bias to encourage organ donation
    • Use prize-linked savings accounts to encourage retirement savings
  • Remove rent-seeking
    • If it were that easy, it'd be done already
    • Get rid of unnecessary occupational licensing
  • Reduce transaction costs
  • My reaction
    • Overall, this is an okay article, but the "remove rent-seeking" bit was amusing
    • More to the point, I think this misses the fact that you will face active opposition from incumbents and regulators in doing all of these things, and therefore the benefit of doing them has to be large enough to overcome that active opposition

Humans are born irrational and that has made us better decision-makers

  • Being irrational is a good thing
  • Rationality only makes sense in the context of a goal
  • We don't have the time or capability to calculate probabilities and potential risks that come with every choice
  • Even data based decision making doesn't inoculate against irrationality or prejudice
    • Financial crisis
    • Experts were convinced that it was statistically impossible for the events of the 2008 financial crisis to occur
    • It's possible to convince yourself that you're being perfectly rational, when in fact you're overfitting to historical data
  • Moreover rationality ignores the fact that decisions rest as much on subjective preferences as they do on objective facts
  • People tend to rely on heuristics rather than statistics and this is a good thing
    • The recognition heuristic is as good at predicting the winners of Wimbledon as ATP rankings are
    • Hyperbolic discounting is a good proxy for modeling uncertainties that may prevent us from getting a payout in the future (see the sketch below)
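    • A quick sketch of hyperbolic vs. exponential discounting (my own illustration; k and the dollar amount are arbitrary): a hyperbolic discounter values a reward D periods away at A / (1 + kD). Its implied per-period discount rate, k / (1 + kD), starts high and falls with delay, unlike the constant rate of exponential discounting - roughly what you'd want if the risk of the payout never arriving is itself uncertain:

        import math

        def hyperbolic(amount: float, delay: float, k: float = 0.1) -> float:
            return amount / (1.0 + k * delay)

        def exponential(amount: float, delay: float, k: float = 0.1) -> float:
            return amount * math.exp(-k * delay)

        for delay in (0, 5, 30, 365):
            print(delay, round(hyperbolic(100, delay), 2),
                  round(exponential(100, delay), 2))
        # With the same k the two start out close (100 vs 100, 66.67 vs 60.65),
        # but a year out the hyperbolic discounter still values the reward at
        # ~2.67 while the exponential discounter values it at essentially 0.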
  • Emotions are key to decision-making
    • People who suffer damage to parts of the brain that are responsible for generating emotions find themselves unable to make decisions
    • Emotions let people know what their preferences are, which is important for decision-making
    • The marriage example is wrong. You would absolutely be a fool to go along with your emotions in that. People tend to be extremely good at fooling themselves when it comes to relationships, and often this self-deception only wears away years (or even decades) in, leading to painful divorces
    • Courage can be seen as excessive optimism, but it leads to great achievements
      • Group selection vs. individual selection - just because it's good for the group to have some courageous members doesn't mean it's beneficial for each courageous individual to be courageous
  • Decisions are made in a social context
    • The social implications of a decision influence us as much as the actual consequences of the decision
    • Our ability to agree easily with those around us is key to our ability to collaborate

Why I am not a Quaker (even though it often seems as though I should be)

  • The Society of Friends (a.k.a. Quakers) has come to the right conclusion on a surprising number of things across a surprising number of domains
  • Their virtues are those of liberalism, as are their vices
  • Why should we respect the Society of Friends
    • Proto-liberals
      • Freedom of religion
      • Freedom of thought
      • Belief in individual liberty
      • Belief that people had an "inner-light" that served as a connection to God and a moral compass
    • Personal integrity as a radical practice
      • Quakers had integrity to a fault
      • Refused to sign letters with "normal" closing lines because they were considered to be insincere
      • Refused to quote high prices at the start of bidding, because this was seen as a dishonest bargaining tactic
      • Virtue over profit maximization
    • Nonviolent social technology
      • Quaker society relies on persuasion over force
      • Quaker society treats dissent as signal to be processed rather than noise to be suppressed
    • Humble Marketing
      • Quakers don't market themselves - they focus on providing high quality goods and services and trust that people will find their way to them
  • So, if Quakers are so cool, then why isn't Ben a Quaker?
    • Saying, "I wrote about this argument elsewhere, so I won't repeat it here," is a dick move comparable with, "The proof of this lemma is trivial and left as an exercise for the reader."
    • Quakerism is vulnerable to arbitrage
      • If you allocate more resources towards good works, then someone else can make money by allocating resources towards works that oppose your good works
      • If you volunteer to help people then you incentivize systems that rely on your volunteer efforts rather than systems that prevent the problems in the first place
      • Vulnerable to specious guarantees - Starbucks can't deceive you about the quality of their coffee, but it's much more difficult to verify their "fair trade" claims
    • Quakerism works well in a closed system (i.e. intentional communities or local production), but it tends to work less well in open systems where bad actors can exploit its vulnerabilities
    • Quakers don't pay enough attention to the problems of ensuring the continued existence of their community - don't have children and are eventually displaced by those who don't share their values
    • The problem that intentional communities have is that they're inherently going to conflict with the mainstream world, and that conflict will be an ongoing drain, unless they work towards economic autarky
      • Has he actually read any histories of intentional communities? The only people who've actually pulled off the economic autarky thing are the Amish. Pretty much every other form of intentional community dies.
  • Alternatives to Quaker values
    • Puritans
      • Don't exist any more
      • Weren't very fun
      • Very traditionalist - probably wouldn't be open to some of the more radical rationalist conclusions
    • Jews
      • Jews are drifting away from the practice of Judaism as they're forced to make compromises to remain compatible with a modern economy
      • Haredi Jews are much more insular, but they're also much more dogmatic, and don't seem to produce as much material progress
    • Academic communities - focus on the integrity of their intellectual production, but they're dependent on the outside world for economic inputs and new members
    • Hippie communities seem to have the hang of living well, but don't seem to produce much in the way of material progress
    • Burning Man
      • Only lasts two weeks
      • No artifacts of lasting value - leave no trace

2017-10-09 RRG Notes

Postmodernism for rationalists

  • Disclaimers
    • Just scratching the surface
    • A lot of generalizations
    • Inform us about the spirit of the phenomenon
  • Postmodernism
    • Misinterpreted term
    • Lots of baggage
    • Conglomeration of ideas
    • Collection of post-WW2 movements in art, architecture, philosophy, literature, and other aesthetic fields that was a reaction against the totality of pre-existing aesthetic phenomena
  • Postmodernist architecture
    • Contrast Bauhaus and postmodern
    • Bauhaus: very functional; no ornament, simple geometries, clean angles
    • Postmodern: return to ornament; stylistic fusion, formal fluidity
  • Postmodern art
    • Compare to renaissance art
    • Renaissance
      • Representational
      • Transcendental - art that goes beyond yourself - art should be "timeless"
      • Painting and sculpture, mostly
      • Objective, fixed, determined meanings
    • Postmodern art
      • Temporal
      • Arbitrary
      • Transient
      • Form can be whatever you want it to be
      • Marcel Duchamp - urinal - forced a reconsideration of what constituted art
      • Rauschenberg - changing mold formations
      • Warhol - contingent
      • Postmodern art's m
  • Postmodern Literature
    • Classical literature
      • Man vs nature
      • Man vs Man
      • Man vs God
    • Modern literature
      • Man vs Society
      • Man vs Self
      • Man vs No-god
      • Orwell
    • Postmodern literature
      • Man vs technology
      • Man vs reality
      • Man vs Author
      • David Foster Wallace
      • Pynchon
      • Break the tropes of modernist and classical literature
    • Postmodern literary criticism - "the author is dead"
      • Authorial intent is a fiction, because it's situated in history
      • You can't experience what Cervantes intended with Don Quixote because you live in a different part of history
      • Meaning is constructed by the reader, not the author
      • Meaning is dictated by the interaction between the words on the page and the reader
  • Postmodern Era
    • Classified as 1950 onward
    • After "modern" era
    • Modernity is defined as the "era of grand narratives" - widespread belief in the progress of man
      • Decline in religiosity
      • Progress in science and engineering
      • The problem with modernity is that advances in understanding the physical sciences didn't translate to advances in the social realm
      • "If we just have the facts, we can fix the world"
      • Donald Trump is arguably the first postmodern president
    • Rationalism is a quintessentially modernist phenomenon
    • We live in a world of competing narratives
    • No "big beliefs" that capture the mainstream
    • Postmodernism is decentralized
    • The big questions
      • What are facts
      • What is truth
      • Who gets to write history?
  • How did we get here
    • Nietzsche
    • What did Nietzsche mean when he said god is dead
    • What is god?
      • By the late enlightenment theologians had long since abandoned literalism
      • Nietzsche noticed that religiosity was declining
      • God is a functional meme whose effects have included
        • Community in-group lingua franca
        • Ethical norms
        • Genocide and oppression
      • Possible to adopt Christian ethics without beliefs in god
    • The loss of religion also meant the loss of these grand narratives
    • Every person has their own conception of what God is
    • God is dead?
      • When western religiosity declines
        • Nihilism
        • Loss of community solidarity - atomized society
        • Lack of sense of self - schizophrenic
        • Narcissism - seek identity via other means (other memes)
          • Corporate identities
          • Ersatz values
    • If we don't have God, how do we determine a purpose for our own lives; how should we behave?
    • If religion/god is dead, then something has to fill that vacuum
      • Moloch?
      • What is the deity of postmodernity?
    • Moloch and God are the same in epistemological terms
  • Capital is the deity of postmodernity
    • Big shift in marketing paradigms from modernity to postmodernity
    • Postmodernity - sell a "lifestyle"
    • Focus on creating desire, rather than fulfilling desires
    • Capitalism determines our desires
    • Fear, frustration and anger are the most viral emotions - drive the most shares, clicks, etc
  • Why is Capitalism the egregore of postmodernism
    • Postmodernity is defined by the lack of grand narratives
    • No solidarity
    • John F. Kennedy was the symbol of late-modernity - assassination was a
    • What common values do we have left?
      • Money
      • Capital
      • The proliferation of capital for its own sake - "Greed is good"
      • We believe that more money == more happiness
  • For this new world, we need new philosophies
    • Foucault
      • Genealogy - traces the genealogy of ideas
      • What is power and what is its history
      • Concepts of episteme and discourse
    • Derrida
      • We're haunted by egregores
      • Deconstruction
      • Relates to Jung's collective unconscious
      • It is important to notice egregores in order to deal with them
    • Deleuze - "Heraclitus on LSD"
      • Go with the flow
      • Knowledge is structured more like a rhizome
      • Process, not theory
    • Baudrillard
      • Our world is the matrix, but taking the red pill doesn't get you out of it, just lets you know that you're in the matrix and prevents you from forgetting
      • History is reified images without referents
    • All French - "children of Nietzsche"
  • Is "new theory" a post-hoc rationalization
    • Maybe
    • Probably
    • But we do need new models of interpretation - applying old ones is anachronistic
    • How does Hobbes explain Nazism?
  • Postmodernism is not a philosophy
    • Postmodernist philosophers don't necessarily agree with each other
    • Postmodernist philosophers hate being called postmodernists in the same way that rationalists hate being called a cult
    • Postmodernism is an extremely reductive term
    • We call these philosophers "postmodern" because they came of age during postmodernity
    • Postmodern doesn't necessarily imply anything about the content of the philosophy
  • Postmodern epistemology
    • How can something be known?
    • What is our standard for "truth"?
    • Empiricism - truth is observable and testable
    • Rationalism - truth should be mentally deducible
    • "Woo" - astrology and horoscopes
    • Christianity - truth is God
    • Postmodernism - truth is whatever you want it to be (with a lot of qualifiers)
  • Postmodern meta-epistemology
    • Truth is dependent on context and frame of reference
    • Actions have consequences, but the moral value of actions and consequences is determined by us
    • How is truth determined in post-modernity
      • Fake news
      • Alternative facts
      • All of these are the result of people having different epistemologies
  • Deconstruction in action
    • Rationalists deconstruct all the time - find exceptions to norms and generalized statements
    • "If I kill someone, I'll go to jail"
      • Not always true
    • Find all the preconditions for truth to be determined
  • What is Deconstruction
    • Uncover assumptions, presuppositions, and conditionals
    • Everything is a text - everything can be seen as a thing that has preconditions and interpretations
    • Look at the environment of a fact to determine how that fact came to be
  • Hermeneutics
    • The philosophy of interpretation
    • How we make meaning out of texts and events
    • Language of ethics
      • Virtue ethics, deontology, and Utilitarianism can be defined in terms of one another
    • Language is free play in mind-space, not externally referent
      • We can pretend that language refers to external entities, even though there is nothing forcing that reference
  • So what?
    • Why are we even bothering with this?
    • Marxist critique
    • Hermeneutics affect how we behave
    • Thus changing hermeneutics changes the world
  • Deconstruction now
    • Revolutionary philosophy has been weaponized
    • SJWs, alt-right weaponize pomo theory just like politicians weaponize physics
    • The average SJW understands their philosophy about as well as the average politician understands physics
    • It's important to study philosophy, because it gives you context
  • Why are rationalists interested in philosophy
    • Truth is a core rationalist value
    • Truth is core value in philosophy
    • Analytic philosophy - truth is logic
    • Continental tradition - truth is language
  • Logic is reducible to language and language is reducible to logic
    • Existence is a human context
    • Logic requires an observer
    • Would aliens discover the same laws that we do?
      • They would have to experience the world in much the same way that we do
  • Everything exists in a context, and recognition of that is the core idea of postmodern philosophy
  • Is any of this falsifiable
    • Asking the question means that you're "stuck" in empiricism in the same way that religious people are stuck with God
    • Is utilitarianism falsifiable
    • Is HPMoR falsifiable?
    • Is-ought fallacy
    • Humanities ask about the ethical
    • Sciences ask how the world is
  • Science, itself is an epistemological frame
    • However, science has a number of practical benefits
    • But even science is subject to political influences
  • Politics is the practice of power - looks more like /r/place than Congress
  • Metaphors We Live By - Lakoff
  • Miller's law - assume that what the other person is saying is true, and then try to imagine what it could be true of
  • Postmodernism at its best
    • Extrapolation of Nietzschean perspectivism
    • Not dogmatic and ideological
    • Focuses on human values
    • Radical understanding of other subjects
    • Understand the limits of your worldview
    • Territory requires multiple maps
  • Postmodernism at its worst
    • Used to push shoddy political agendas
    • Cargo cult ideology
    • Used to rationalize and excuse asocial behavior
    • Neoliberal late-stage capitalism
    • Radical existential loneliness
  • Is postmodernism rational?
    • Postmodernism is only rational insofar as it helps you achieve your goals
  • Dark arts
    • Aren't necessarily worse
    • It is not for the rationalist to determine what is and is not "rational"
    • Do what works - rhetorical techniques aren't ethical or unethical in and of themselves
  • Just read Wittgenstein and Zhuangzi

2017-10-02 RRG Notes

Tolerate Tolerance

  • One of the likely characteristics of someone who is a rationalist is a lower than usual tolerance for flaws in reasoning
  • Even if we can't be nice to idiots, we should tolerate those who are being nice to idiots
  • Don't punish non-punishers
  • Only judge people for the mistakes they make, not the mistakes they tolerate others making

Your Price For Joining

  • People in the atheist/libertarian/technophile cluster set their joining prices too high
  • The group doesn't have to be perfect for you to join and make a positive difference by taking part
    • And this is why rationalists should get involved in politics
  • If the objection you have doesn't outweigh the net positive impact you would make on the world, swallow your objection and join the group anyway
  • People tend to underestimate group inertia
  • "In the age of the Internet and in the company of nonconformists, it does get a little tiring reading the 451st public email from someone saying that the Common Project isn't worth their resources until the website has a sans-serif font."
  • Make sure that your objection is your true rejection
  • "If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile."

Can Humanism Match Religion's Output

  • Is it possible to have a group of rationalists as motivated and as coordinated as the Catholic Church?
  • If mental energy is limited, then it may be the case that some false beliefs are more strongly motivating than any true belief
  • Can we make rationalists match the real de-facto output of believing Catholics
  • Use cognitive behavioral therapy and Zen meditation to enhance motivation and reduce akrasia
  • If rationalists were co-located, they'd accomplish more, just through having the motivation of being in a group
    • Empirically, that has not been true, at least, so far as I can tell from the output of the Bay Area Rationalists
  • Have regular meetings of people contributing to the same task, for the purpose of motivation, rather than coordination
  • Have a group norm of being applauded for caring strongly about something
  • If rationalists can combine even half the motivation that religious people have with better targeting for more efficient causes, then it's possible that rationalists can have an even greater impact on the world than the Catholic Church

Church vs. Taskforce

  • How can we fill the emotional gap, once religion is no longer an option?
  • Most of the things that fulfill our desire for community are not organized explicitly to give us community, and thus, don't optimally fulfill our need for community
  • Church, for example
    • Getting up early on a Sunday
    • Wearing formal clothes
    • Listening to the same person give sermons
    • Cost of supporting a church and a pastor
    • Medieval morality
    • I'm not sure that all of those are actually suboptimal
  • Is it possible to have a community that's just a community, with no other purpose?
  • Maybe the rationalist community model should be more focused on task forces rather than communities
  • Communities should be organized around common purposes that bind them together
  • Let's have a real higher purpose, instead of the illusory ones that religions offer
  • The problem is, what happens after that purpose has been fulfilled? NASA was a damn good organization... right up until it fulfilled its purpose of landing a man on the moon and returning him safely to Earth. The nice thing about religion's higher purposes is that they can't ever be achieved

Rationality: Common Interest of Many Causes

  • The purpose of Less Wrong is to create more rationalists
  • However, more rationalists is just a means
  • The end is to have more support for causes that would benefit from more rationalists existing
  • All of the causes that benefit from increased rationality should work to increase the number of rationalists
  • Your cause won't benefit 100% from the work you do to increase rationality, but in exchange you'll pick up a bit of a benefit when someone else also works to increase the number of rationalists
  • Instead of positioning your cause as the best thing, you should position it as a good thing
    • Going from "good" to "best" doesn't increase motivation substantially, but it does increase the burden of proof
  • Instead of trying to figure out the best project, we should have a portfolio of "good" projects that can collaborate to increase the number of rationalists, all benefiting each other

2017-09-25 RRG Notes

A Sense That More Is Possible

  • Why should anyone be motivated to learn rationality?
  • Rationalists don't seem to be any more happy or successful than non-rationalists
  • There ought to be a discipline of cognition that makes its students visibly more competent and formidable
  • But we don't see that in the real world
  • We haven't gotten together and systematized our skills
  • How do you systematically test rationality programs and verify that they're making people more rational?
  • Why aren't rationalists surrounded by a "visible aura of formidability"?
    • Less systematic training
    • More difficult to verify that you're being rational
    • It's a lot easier to convince people of the benefits of greater physical strength than it is to convince people of the benefits of greater mental strength
  • People lack the sense that rationality is something that should be systematized and trained like a martial art
  • Counterpoint

Epistemic Viciousness

  • The epistemic rigor of martial arts declined severely once fights became highly constrained by rules
  • How does epistemic viciousness arise?
    • The art is seen as sacred
    • People become emotionally invested in certain techniques
    • Incoming students have no choice but to trust the teacher
    • Excessive deference to historical masters (old techniques cannot be beaten, only rediscovered)
    • Inability to test methods discourages training

Schools Proliferating Without Evidence

  • Robyn Dawes
    • Judgement Under Uncertainty
    • Rational Choice in an Uncertain World
    • House of Cards: Psychology and Psychotherapy Based On Myth
  • Rorschach ink blot tests don't reveal anything about the patient
  • No statistical difference between different types of psychotherapy
  • The entire benefit of psychotherapy appears to come from just talking to someone
  • Yet, there are many different traditions (schools) of psychotherapy
    • The proliferation occurs even though there is a dearth of experimental evidence
    • The way to gain prestige was to devise a new technique and tell good stories about why that technique should work
  • If you're going to create an organized practice of anything, you need a way to tell how good you're doing that corresponds to something measurable and replicable

Three Levels of Rationality Verification

  • There is a possible art of rationality
    • Attaining a map that reflects the territory
    • Directing the future into states that you prefer
  • It is possible to be more rational than any current practitioner of rationality
  • But how do we verify our ideas on how to improve?
  • 3 levels of usefulness
    • Reputational
      • Ground reputations in realistic trials based on something other than social reputation and good stories
    • Experimental
      • Create statistical measures that are replicable and rigorous
    • Organizational
      • Create measures that are difficult to game

Why Our Kind Cannot Cooperate

  • Why is it that flying saucer cults are better at coordinating than rationalists?
  • Flying saucer cults can use emotional manipulation and a variety of other tactics to ensure cooperation
  • Unfortunately, this means that it's very easy for them to become unmoored from reality
  • Is there a way to get rationalists to cooperate effectively while maintaining an accurate view of the world?
  • This lack of cooperation highlights one of the problems in the current rationality knowledgebase - emphasis on individual rationality rather than group rationality
  • Tolerating only disagreement is as irrational as tolerating only agreement
    • Do any rationalist communities only tolerate disagreement? Yes, there's the normal Internet discussion thing where you examine ideas and try to poke holes in them, but rationalist communities I've been part of have been actually among the most quick to agree that an idea is good, if it's actually seen as good by members of the community
    • I think that the problems of coordination have much more to do with the unreliability of the rationality community than they do with our inherent social norm against cooperation
  • If you're doing worse with more knowledge, it's a sign that you haven't fully internalized the lessons of rationality
  • We need to be more okay with strong emotions
  • We need to be more okay with the notion that some things are worth sacrificing for