Robert Long–Artificial Sentience
Manage episode 339190834 series 2966339
Robert Long is a research fellow at the Future of Humanity Institute. His work sits at the intersection of the philosophy of AI safety and AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever's "slightly conscious" tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird.
Youtube: https://youtu.be/K34AwhoQhb8
Transcript: https://theinsideview.ai/roblong
Host: https://twitter.com/MichaelTrazzi
Robert: https://twitter.com/rgblong
Robert's blog: https://experiencemachines.substack.com
OUTLINE
(00:00:00) Intro
(00:01:11) The LaMDA Controversy
(00:07:06) Defining AGI And Consciousness
(00:10:30) The Slightly Conscious Tweet
(00:13:16) Could Large Language Models Become Conscious?
(00:18:03) Blake Lemoine Does Not Negotiate With Terrorists
(00:25:58) Could We Actually Test Artificial Consciousness?
(00:29:33) From Metaphysics To Illusionism
(00:35:30) How We Could Decide On The Moral Patienthood Of Language Models
(00:42:00) Predictive Processing, Global Workspace Theories and Integrated Information Theory
(00:49:46) Have You Tried DMT?
(00:51:13) Is Valence Just The Reward in Reinforcement Learning?
(00:54:26) Are Pain And Pleasure Symmetrical?
(01:04:25) From Charismatic AI Systems to Artificial Sentience
(01:15:07) Sharing The World With Digital Minds
(01:24:33) Why AI Alignment Is More Pressing Than Artificial Sentience
(01:39:48) Why Moral Personhood Could Require Memory
(01:42:41) Last Thoughts And Further Readings
55 episodes