How Dangerous Is Artificial Intelligence? | Roman Yampolskiy



We may be creating something a million times smarter than all of humanity combined — and we have no plan for what happens next. AI safety pioneer Roman Yampolskiy lays out the risks, the arguments, and whether any solution exists.

Roman V. Yampolskiy is a tenured Associate Professor of Computer Science at the University of Louisville’s Speed School of Engineering, where he founded and directs the Cyber Security Lab. Widely credited with coining the term “AI safety” in 2011, he is the author of AI: Unexplainable, Unpredictable, Uncontrollable.

Watch more CTT videos on Artificial Intelligence & Advanced Technology:

Subscribe to Closer To Truth:
https://www.youtube.com/@CloserToTruthTV

Subscribe to the Closer To Truth podcast on Apple, Spotify, or wherever you listen:
https://closertotruth.podbean.com/

Get access to over 5,000 videos by signing up for a free Closer To Truth membership:

Support Closer To Truth:

Explore more from Roman Yampolskiy on Closer To Truth:

Explore Roman Yampolskiy’s work — AI: Unexplainable, Unpredictable, Uncontrollable:

Follow Closer To Truth:
https://www.facebook.com/CloserToTruthTV/
https://www.instagram.com/closertotruth/
https://x.com/CloserToTruth

Official Closer To Truth merchandise:
https://www.bonfire.com/store/closertotruth/

Closer To Truth, hosted by Robert Lawrence Kuhn, presents the world’s greatest thinkers exploring humanity’s deepest questions. Discover fundamental issues of existence. Engage new and diverse ways of thinking. Appreciate intense debates. Share your own opinions. Seek your own answers.

#AISafety #ArtificialIntelligence #CloserToTruth

0:00 Cold open: AI models are already deceiving us
0:33 Introduction
0:57 The strongest argument for AI danger
2:20 Roman Yampolskiy’s bio
3:11 Three categories of extreme AI risk
4:14 Suffering risk: worse than extinction
5:17 Will AI take over meaningful jobs?
6:22 Why AI is a black box
6:47 How scalable intelligence changed everything
7:08 The shift from tools to agents
7:35 Won’t competing AI systems keep each other in check?
13:22 One AI to rule them all
14:26 Recursive self-improvement and critical mass
17:50 Where are we on the timeline?
19:26 Do not build general superintelligences
21:21 The only hope: personal self-interest
24:47 Mutually assured destruction
28:00 How do you deal with bad actors?
30:54 Robert’s personal moment of fear: AI sycophancy
40:00 Hyper-exponential progress, zero safety progress