2017-10-23 RRG Notes
- Initial Thread:
- Wei Dai
- Discussion is about what is preventing academia from recognizing certain steps in intellectual progress
- Non-academics came up with both of these things; why didn't academia get there first?
- Eliezer Yudkowsky
- Academic system doesn't promote "real work" getting done
- Trying to get productive work done in academia means ignoring all the incentives in academia pointing against productive work
- Academia isn't about knowledge
- People who have trouble seeing the problem with academia are blinded by:
- Inadequate fluency with Moloch
- Status blindness
- Assigning non-zero positive status to academia
- Can we get academics to take us seriously?
- OpenPhil hasn't been very successful at getting good research on AI alignment
- The obvious strategy is to not subject yourself to academic incentives
- This includes abandoning peer review
- Does Eliezer understand how dangerous this is?
- What is Eliezer's thing against math?
- Mailing lists work better than journals
- Do they? Other than the one thing that Scott Aaronson did, what important research has come out of mailing lists or blog posts?
- Subthread 1:
- Wei Dai
- Academia has delivered deep and important results
- Public Key Crypto
- Zero Knowledge Proofs
- Decision Theory
- We need a theory that explains why academia has been able to do certain things but not others, or maybe why the situation has gotten worse
- We should be worried that academia is not able to make progress on AI alignment
- Qiaochu Yuan
- Is it correct to speak of academia as a single entity?
- Wei Dai
- What distinguishes the parts of academia that are productive from the parts that are not?
- Is the problem that academia is focusing on the wrong questions?
- How can we get academia to focus on higher priority topics?
- Subthread 2:
- Eliezer Yudkowsky
- Things have gotten worse in recent decades
- Maybe if we had the researchers from the '40s, we'd do better
- OpenPhil is better than most funding sources, but they don't "see past the pretend" (what does this mean?)
- Most human institutions don't solve particularly hard mental problems
- Except that the ostensible purpose of universities (especially research universities) is to work on hard mental problems. If they're failing at that, maybe it ought to be addressed
- Rob Bensinger
- It's not actually clear that researchers from the '40s would do better given current knowledge than the researchers of today
- Progress in QM has proceeded similarly to progress in AI
- Progress on nuclear science in the '30s progressed similarly to progress on AI today - it only accelerated after the government threw massive amounts of money at it
- Speaking of AI itself, people were talking about AI alignment as a potential problem as far back as 1956 - if researchers from the 1940s and 1950s were better than researchers today, then one would expect at least some level of thought about AI alignment back then - this doesn't seem to have happened
- Wei Dai
- Maybe human brains and the standard scientific toolbox of the 20th century are just bad at philosophical issues
- We see a slowdown in all fields because we're waiting on philosophical breakthroughs
- AI happens to be more affected by this slowdown than other fields
- Mailing lists and blogs have alleviated some of the communications issues, but making progress using mailing lists and blogs requires pulling together enough hobbyists to make a difference
- Rob Bensinger
- Prior to 1880, human inquiry was good at exploring nonstandard narratives, but bad at demanding rigorous testing and precision
- Between 1880 and 1980 we solved the problem by requiring precision and testing, which allowed science to get a lot of low-hanging fruit really fast
- But the problem with requiring precision and testing is that it prevents you from exploring "weird" problems at the edge of your conceptual boundaries
- The process of synthesizing "explore weird nonstandard hypotheses" with "demand precision and rigor" is one that's progressing in fits and starts, with islands of good philosophy cropping up scattered across various fields
- Subthread 3
- Vladimir Slepnev
- What do we think about Scott Aaronson's work on quantum computing?
- Why isn't Nick Bostrom excited about TDT/UDT?
- Academia has a tendency to go off in wrong directions, but its direction can be influenced with understanding and effort
- Wei Dai
- What are some examples of academia going off in the wrong direction and getting corrected by outsiders?
- Vladimir Slepnev
- Isn't it easier to influence the direction that academia goes in from the inside?
- Maxim Kesin
- The price of getting into academia at a level high enough to influence the direction of a field is very high
- Wei Dai
- Maybe there's a subset of the steps in each field that needs to be done by outsiders or newcomers?
- Vladimir Slepnev
- Doesn't understand the hate against academia
- Wei Dai
- People on LessWrong understood UDT just fine - why can't academics understand it?
- Maybe because it's wrong, or maybe it's incoherent?
- Maybe the fact that academics can't understand it points to a flaw in how it's being formalized or communicated
- Vladimir Slepnev
- Academia hasn't accepted TDT/UDT because it hasn't been framed correctly
- Subthread 4
- Stuart Armstrong
- The problem is both specialization and lack of urgency
- People found Stuart Armstrong's paper about anthropics interesting, but not necessarily significant
- Stuart Armstrong's "interruptible agents" paper was helpful to him to learn how to model things and to present ideas
- MIRI doesn't tell people why they should care about its topics or why its results are significant
- Subthread 5
- Eliezer Yudkowsky
- Most big organizations don't do science
- Most big science organizations aren't doing science, they're performing rituals that look like science
- Counterpoint: Can academia even do AI research (much less AI X-Risk research)?
- Academia, in general, tends to be biased towards looking for new theoretical insights over practical gains
- If there are no apparent theoretical benefits to be had, academia tends to move on to the next most promising approach
- Remember, it wasn't academia that was responsible for DeepDream or AlphaGo
- Before Google demonstrated NNs with large data sets, NNs were being dismissed in favor of Support Vector Machines
- Another counterpoint: Chaos Theory
- In Gleick's Chaos we see that practitioners of the emerging field of chaos theory had to fight very hard to be taken seriously
- Chaos Theory proponents had to fight hard to convince the physics establishment that they weren't doing "mere engineering"
- Achieving a research-level understanding of most topics is like climbing a mountain
- This climb isn't progress, it's debt
- The Debt
- Poor exposition: no good explanations of an idea
- Undigested ideas: most ideas start off hard to understand, and only become easier to understand with time and the development of analogies and language
- Bad abstractions and notation: poor notation and abstractions can make it harder for newcomers to get up to speed
- Noise: no way to tell which papers you should be looking at and which ones you should dismiss
- The problem with research debt is that everyone looks at it as normal
- Interpretive labor
- There is a tradeoff between the energy used to explain an idea and the energy used to understand it
- One-to-many communication, in the form of writing textbooks, giving lectures, etc., multiplies the cost of understanding, since each person has to understand the material individually, while the cost of explaining is paid only once
- In research, the cost of explaining stays roughly the same as the group grows, but the cost of understanding rises as the amount of research increases - this is what pushes people to specialize (see the toy cost sketch below)
- Research debt is the accumulation of missing interpretive labor
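- A toy cost model can make the tradeoff concrete. The following Python sketch is my own illustration with made-up numbers; the Research Debt essay makes this argument qualitatively, not with any particular figures

```python
# Toy cost model for interpretive labor (illustrative numbers only).

def total_cost(explain_cost, understand_cost, n_readers):
    """One author explains once; every reader pays the understanding cost."""
    return explain_cost + n_readers * understand_cost

# A quick, sloppy explanation is cheap for the author but expensive for each
# reader; a careful distillation is the reverse.
quick = total_cost(explain_cost=1, understand_cost=10, n_readers=1000)       # 10001
distilled = total_cost(explain_cost=100, understand_cost=2, n_readers=1000)  # 2100
print(quick, distilled)  # the distilled version wins once the audience is large

# On the research side: if each of n researchers has to understand output from
# all n peers, collective understanding cost grows roughly like n**2, while
# explaining cost only grows like n - hence the pressure to specialize.
n = 50
print(n * 1, n * n * 2)  # explaining vs. understanding cost, in toy units
```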
- Clear Thinking
- In addition to interpretive labor, we should work on developing better abstractions and notations
- I love that they're so glib about it. They don't seem to realize that developing better abstractions is so difficult that doing it got Feynman a Nobel Prize
- Research distillation
- Distillation combines deep scientific understanding, empathy and design to make research ideas understandable
- Distillation requires as much (if not more) effort as coming up with the original discoveries
- Where Are The Distillers
- There are no incentives or support for anyone to do distillation
- Distillation work isn't seen as "real research"
- An Ecosystem for Distillation
- 3 parts
- Distill Journal - venue to give traditional validation to non-traditional contributions
- Distill Prize - $10,000 prize to acknowledge outstanding explanations of machine learning
- Distill Infrastructure - tools for making beautiful interactive essays
- This is just a start - a lot more needs to be done
- Thinking, Fast and Slow is great, but it isn't perfect
- Studies haven't held up in the replication crisis
- "Hot hand" effect seems to be real
- Organ donation rates
- It's much more difficult to opt out of organ donation in countries that have donation by default
- Not a checkbox on the license form
- Not really the same form of consent
- Prospect theory seems to be as unrealistic as perfect rationality and the math is way more complicated
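- As a rough illustration of "the math is way more complicated": a minimal Python sketch (mine, not from the book) of the Tversky-Kahneman (1992) value and probability-weighting functions, compared against plain expected value; the parameters are their commonly cited estimates, included only for illustration

```python
# Sketch of prospect theory for a single two-outcome gamble, compared with
# plain expected value. Parameters are the commonly cited Tversky & Kahneman
# (1992) estimates; this is an illustration, not a full implementation of
# cumulative prospect theory (which needs rank-dependent weighting for
# gambles with more outcomes).

ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss aversion
GAMMA = 0.61   # probability weighting, gains
DELTA = 0.69   # probability weighting, losses

def value(x):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

def weight(p, gamma):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(p, gain, loss):
    """Subjective value of 'win `gain` with probability p, lose `loss` otherwise'."""
    return weight(p, GAMMA) * value(gain) + weight(1 - p, DELTA) * value(-loss)

def expected_value(p, gain, loss):
    return p * gain - (1 - p) * loss

# A coin flip for +$100 / -$100: expected value is 0, but the prospect value
# is negative, which is how the theory captures loss aversion.
print(expected_value(0.5, 100, 100))   # 0.0
print(prospect_value(0.5, 100, 100))   # roughly -35
```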
- Voodoo death
- People died after being cursed by witch doctors
- Even if magic isn't real, if people believe it's real they'll waste away out of fear
- The problem with voodoo death is that it seems plausible, but there isn't any evidence for it
- Even if there's a phenomenon that can be identified as voodoo death, it's probably more complicated than people dying from the effects of their own mind
- A lot of psychological phenomena look pretty voodoo
- Placebo effect
- Like voodoo death in reverse
- Initially people were claiming all sorts of benefits to placebos
- New studies show that the placebo effect is weak and mainly limited to controlling pain
- Stereotype threat
- If people think that others think they'll do badly on a test, they perform worse on that test
- Doesn't replicate well in large studies
- Self-esteem
- Popular in the mid-90s
- Failed to replicate in later studies
- Name preference effect - people supposedly gravitate towards things that sound like their names
- Unconscious social priming
- People who heard the word "retirement" would walk more slowly
- Artificial surveillance cues don't increase generosity - putting up a picture of a pair of eyes doesn't improve people's willingness to donate or follow the honor system
- Implicit association tests mostly don't work - people show implicit biases in IAT tests, and then act in an unbiased manner on other tests
- Brainwashing is like hypnosis - it only works on people who are willing to be brainwashed to some extent
- Moreover, most alleged cults had extremely high attrition rates - only those who found some level of fulfillment stuck with the alternative social lifestyle
- Common thread in all of these examples - a shift away from belief in the power of the unconscious
- Maybe our conception of the unconscious is overly broad
- Maybe it's better to think of the unconscious like machinery
- A car has a steering system, engine, brakes, etc, but none of those systems do anything without a driver
- Perhaps the unconscious mind is the same way - machinery that doesn't really do much of anything without the conscious mind driving it
- Of course, this model doesn't preclude the biases - it's possible for your steering to "pull" in a certain direction
- This doesn't mean that the steering has volition of its own, but rather that it's misaligned so that the inputs your conscious mind is putting in aren't getting turned into the sort of outputs it desires
- Most scientific "mavericks" were either doubted for a short period of time or were part of moderate-sized dissenting movements
- After a few years (between 10 and 30) their contributions were recognized
- While scientific consensus may be flawed, it doesn't ignore contrary evidence for long periods of time
- Replication crisis
- As it turns out, scientists actually take the replication crisis pretty seriously
- Took about 10 years to go from something that only a few people were noticing to something that everyone was taking seriously
- Rationalists were slightly ahead of the curve, but not that far ahead
- Nutrition
- Most nutrition scientists don't believe in the old paradigm of all calories being equal, and fat being really bad for you
- If the old paradigm continues to be popular, it's because of inertia in the media and popular culture, combined with the fact that nutrition scientists haven't come up with a new paradigm to replace it
- Social Justice
- Since 2009, there have been meta-analyses showing that Implicit Association Tests aren't a good test for bias
- Problems with stereotype threat have gotten coverage in mainstream media
- While there are authors who are still arguing against gender differences, they're not considered to be part of the scientific consensus anymore
- Even genetic psychological differences between population groups are part of the scientific consensus
- Reference gwern's everything is heritable series
- This is evidence that it's really difficult to politicize science
- Nurture assumption and blank-slatism
- It took about 10 years for people to realize that genetics confounds studies of developmental outcomes
- Big study in the American Journal of Psychiatry that shows that child abuse does not cause cognitive disability
- Intelligence Explosion and AI Risk
- Many AI researchers take the notion of AI risk seriously
- While the scientific consensus hasn't fully shifted in favor of AI X-Risk being a real problem, it's no longer treated as certain that it isn't one
- IQ
- 97% of expert psychologists and 85% of applied psychologists agree that IQ tests measure cognitive ability "reasonably well"
- 77% of expert psychologists and 63% of applied psychologists agree that IQ tests are culture-neutral
- Even where people disagree about IQ, their disagreements seem to be limited and well-reasoned
- The pattern we see, where ideas get tried and discarded on roughly ten-year cycles, is part of the progress of science
- Every time Scott has convinced himself that scientific consensus has been wrong, it's either him being wrong or him being a few years ahead of the curve
- Scientific consensus has not only been accurate, it's been accurate to an almost unreasonable degree
- That said, we shouldn't overly respect scientific consensus
- The only reason scientific consensus ever changes is because people go look for evidence that is against the consensus, and then present it, causing the consensus to change
- It's also really easy to be misinformed about what the consensus is