Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.
EPISODE LINKS:
Eliezer’s Twitter:
LessWrong Blog:
Eliezer’s Blog page:
Books and resources mentioned:
1. AGI Ruin (blog post):
2. Adaptation and Natural Selection:
OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love