Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.
We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.
If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.
Timestamps:
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society’s response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win
Related videos:
Heuristics and Biases, Skepticon 4 Eliezer Yudkowsky (00:42:48, 11 years ago, 628 views)
Eliezer Yudkowsky: "Cognitive Biases and Giant Risks" (00:28:03, 11 years ago, 29 views)
Eliezer Yudkowsky "Friendly AI" (00:46:14, 2 years ago, 1 view)
Discussion / debate with AI expert Eliezer Yudkowsky (02:43:14, 2 years ago, 6 views)
George Hotz vs Eliezer Yudkowsky AI Safety Debate (01:35:45, 1 year ago, 1 view)
159 - We're All Gonna Die with Eliezer Yudkowsky (01:49:22, 2 years ago, 5 views)
Eliezer Yudkowsky on if Humanity can Survive AI (03:12:41, 2 years ago, 2 views)
Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start (01:29:56, 7 years ago, 3 views)
Eliezer Yudkowsky on the Dangers of AI 5/8/23 (01:17:09, 1 year ago, 1 view)
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368 (03:17:51, 1 year ago, 3 views)
Eliezer Yudkowsky on "Three Major Singularity Schools" (00:35:17, 2 years ago, 1 view)
Trailer - A Short Film (00:02:01, 13 years ago, 116 views)
Sorting Pebbles Into Correct Heaps - A Short Story By Eliezer Yudkowsky (00:06:44, 1 year ago, 1 view)
Eliezer Yudkowsky on the hard problem of consciousness | Lex Fridman Podcast Clips (00:09:23, 2 years ago, 1 view)
Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED (00:10:33, 1 year ago, 1 view)
The Power of Intelligence - An Essay by Eliezer Yudkowsky (00:07:20, 2 years ago, 13 views)
Max Tegmark response to Eliezer Yudkowsky | Lex Fridman Podcast Clips (00:17:08, 2 years ago, 1 view)
The Parable of the Dagger (00:03:32, 1 year ago, 2 views)
Singer on effective altruism, vegetarianism, philosophy and favorite books. Book Person #27 (00:37:25, 6 years ago, 97 views)
Discussion "Artificial Intelligence: Opportunities and Risks"; game show "Not Rabid Experts", episode 1 (02:02:09, 2 years ago, 50 views)
That Alien Message (00:12:48, 4 months ago, 1 view)
One More Light - Harry Potter and the Methods of Rationality (HPMOR) animatic (00:02:53, 5 years ago, 76 views)
Peter Singer on effective altruism, veganism, philosophy and best books (00:34:43, 5 years ago, 28 views)