Think Forward: Conversations with Futurists, Innovators and Big Thinkers
Welcome to the Think Forward podcast where we have conversations with futurists, innovators and big thinkers about what lies ahead. We explore emerging trends on the horizon and what it means to be a futurist.
Think Forward EP 145 - The Mind at the End of the Universe with Richard Yonck
You can feel the ground shifting under AI right now, but the hardest question isn’t “When do we get AGI?” It’s this: if we can’t prove whether something is conscious, how would we ever know when a machine deserves rights and protection? That single uncertainty turns today’s AI excitement into a genuine ethical dilemma with real stakes for business, policy, and daily life.
We follow futurist and author Richard Yonck from a childhood obsession with technology into the real craft of forecasting, where methods matter more than titles. We pressure-test AGI hype, unpack why LLMs feel human, and confront the ethical trap of building minds we cannot prove are conscious.
• Richard’s path from science writing to futures research and consulting
• How the futurist label evolved from think tanks to LinkedIn noise
• What corporate foresight work reveals about prior knowledge and hidden politics
• Big history and the pattern of rising complexity against entropy
• Why “AGI” is a misleading idea and why human intelligence is not “general”
• Why generative AI and LLMs are powerful tools but limited
• The hard problem of consciousness and what AI personhood could mean
• Ethical risks from digital feudalism to digital serfdom
• Future of work questions including automation, meaning, and UBI
• Why education must shift toward lifelong learning and rapid upskilling
• Hybrid intelligence as the realistic near-term model
• How writing techno-thrillers can explore futures without selling hype
You can reach Richard and find out about his books and more at RichardYonck.com.
You can find us on all the major podcast platforms at www.thinkforward.com
ORDER SUPERSHIFTS! bit.ly/supershifts
🎧 Listen Now On:
Apple Podcasts: https://podcasts.apple.com/us/podcast/think-forward-conversations-with-futurists-innovators-and-big-thinkers/id1736144515
Spotify: https://open.spotify.com/show/0IOn8PZCMMC04uixlATqoO
Web: https://thinkforward.buzzsprout.com/
Thank you for joining me on this ongoing journey into the future. Until next time, stay curious, and always think forward.
The Unprovable Question Of Consciousness
SPEAKER_02If you can't prove whether something is conscious or not, will we ever know for sure? And should we ever get to that stage, then we've got all of these ethical conundrums. You know, basically, are we creating our overlords or are we creating slaves? There is almost no in-between.
SPEAKER_00Richard, welcome to Think Forward.
SPEAKER_02Great to be here, Steven. I really appreciate your podcast, and I'm looking forward to our conversation.
From Tech Obsession To Futures Work
SPEAKER_01Me as well. It's great to have you on. You're quite a prolific author; we share the passion for writing. You've written Future Minds, and people might know you from its sweep from the Big Bang to the end of the universe, and from Heart of the Machine. And you just did this really cool techno-thriller, Mindstock, which I definitely want to get into. You're on two continents, Seattle and Prague, and we want to dive into that, but I really want to look at the journey into futures: how you discovered this field, how things keep finding you. Talk about your journey here, how you got to where you are now. Sure, sure. Well, how long have you got? It's long form, so we have as much as you need, or as much as the listener will hang in with us. I do not want to put anybody to sleep.
How The Futurist Field Changed
SPEAKER_02So basically, my passion for technology, the history of technology, inventors, goes almost as far back as I could begin reading. As a young boy, you could find me in the libraries, just poring through all of that. I was fascinated with technology, how it worked, how people figured it out. That led me in a lot of different directions, and I ended up having a passion for a broad range of sciences. That said, growing up, everything I was hearing said you focused on a field and became a specialist. Somewhere along the way in my teens, I discovered that there are generalists, people who actually work and operate as generalists. And there were people, and we'll talk about it during some other parts of the conversation, who were definitely influences on me in that regard. It took me along some different paths, and I eventually found myself in the futures field, writing and doing research for a range of different periodicals, clients, and so forth. And this really rang my bell. So I just continued from there. But yes, it's been a long and winding path.
SPEAKER_01Well, the arc of it, when was the first time you really heard the term futurist? How old were you?
SPEAKER_02I may be projecting or imagining, but I'd swear it was when I first read Alvin Toffler's Future Shock, when it came out around 1970; I would have been a preteen. And I swear that either in the book, or on the cover, or in some further reference to it, I was hearing the word futurist at that time. It was not a word that I used or applied very much for years after that, but if I had to point to a particular point in time, that would probably be it.
SPEAKER_01Yeah, many, many people, including me; Toffler is definitely quite an influence. That era is when the term futurist really became more commercialized. You had people like Arthur C. Clarke with 2001, you had Buckminster Fuller, you had a lot of diverse voices, like Herman Kahn back in the 50s doing nuclear war scenario planning and things like that. And then we're around the Shell era, with the oil embargo. So the term definitely came to the forefront, and it's definitely evolved. It was the cool term, because people had to bestow it upon you. Now it's kind of everywhere, which has good and bad sides to it. Speaking of that, where do you think the field actually is right now? You've had quite a long arc as well.
SPEAKER_02Yeah. As you say, early on, from the early think tank era post-World War II, the military-industrial complex needed to understand how they were going to invest, where the technology was going to be by the time a large project finished, all that kind of thing. So you had people like Herman Kahn and others working for RAND and other think tanks. A little later you had people like Ted Gordon and Roy Amara, people who essentially laid down some of the early methodologies and techniques that we still use today. Then the corporations started embracing it a little more, as you say. We had Shell at the time of the first oil embargo, when essentially everybody was hit by it, and they were the ones who pulled the scenarios off the shelves and weathered it better than a lot of their competitors. Other corporations followed suit. We started seeing futures being taught in universities, including at the University of Houston, where we both have been. It essentially spread from there. Somewhere along the way, I don't know exactly when, the 90s or the 2000s, it really felt like futures was starting to fall a bit out of favor. People started using terms like foresight analyst or trend spotter or what have you, to be in the field but avoid that label. And now we're at a point where pretty much every barista and prompt engineer on LinkedIn throws the word futurist onto their page. More power to them, but it does devalue the term. At this point it's like, look, you've been carrying it long enough, you just wear it. So anyway, yeah, it's had a long path, and who knows what will follow. I was living for a few years in South America, and prospectiva is very much a common term down there for a futurist, which is kind of a cool term.
The person who has prospects, or uses perspective, I guess, to look ahead; exactly what the etymology is, I'm not sure.
SPEAKER_01No, that's interesting. And you talked about everybody and the barista. I think it's almost like people calling themselves an artist: some people study it, some people are just naturally good. It's just become more common vocabulary, I think, too. But there's also the debate, which I'll take up in a panel episode with a bunch of people, about futurist as a job versus a skill set. And you seem to thrive at the intersections. You talked about coming up through science writing and technology, obviously doing foresight, and now you've been doing fiction. Is that a deliberate thing, or is it just how the work evolved for you, being at the intersections? I think it's a mix. I want to do this; it's too much fun. The field, or this conversation, or both?
SPEAKER_02Let's go with both. Okay, I like that. I think there's a lot of mental engagement, and just really interesting conversations and things to learn. There's the opportunity to help people, the opportunity to delve into and explore areas and paths and directions that maybe no one else has done in quite that way. It makes for a really interesting and varied field, a very varied career. That said, as you say, I came up through science writing, writing science education programs, writing cover stories for periodicals. So that was a way to essentially explore a wide range of developments, technologies, trends, and so forth, and sharpen my skills as I went.
What Consulting Reveals About People
SPEAKER_01That's great. And transitioning to the consulting side of this: you've worked on strategic summits, you've done a lot of corporate foresight work. I thought about this during my prep work. What's the thing that's never there when you're preparing for these kinds of projects or engagements, but always ends up showing up for you?
SPEAKER_03Hmm.
SPEAKER_02It's a good question. I would have to say it's about understanding the knowledge set of who you're working with. When you come into a room, into a place, and you begin the process, you have to make a few assumptions, and this goes for audiences as well, about what people's prior knowledge is, where they are in terms of what you can shorthand. What do they already know? Where are their assumptions? What do they not know that you need to fill in? As you're going through that, you suddenly discover, wow, I've overlooked this; this is not common knowledge the way I thought it might be. Things like that can enter into a project, and it seems to me that's part of what you don't necessarily know or expect. It's part of the discovery process in and of itself, the prep and the framing and everything else. So I suppose something along that line. But also, in the course of that, my knowledge builds as well. Invariably, working with clients, working with people in all sorts of different settings, there are things to learn all along the way, particularly as you're jumping around between different industries and building a knowledge base around something that's perhaps a new direction for you. Yeah. That's an interesting perspective.
Big History Framing For AI
SPEAKER_01I find it's the politics and the team dynamics. People can give you all kinds of data, and then you get into this workshop room, and people come in with different stakes, or who aren't there to be engaged, and I've seen it throw a lot of things off. It's always one of those things you think about when you facilitate: the personal dynamics, the team dynamics, who's really the leader, what are the hidden things, what are the politics. Sometimes passive-aggressiveness. That's the thing. Oh yeah. Well, let's zoom out. I've read a number of your books; I found you through Future Minds, way back. I'm a macro historian here, but you go a little bigger, from the Big Bang to the death of the universe, which is a much wider history window than you normally find framing a book. What do you need to understand the breadth and the scope of something like that? Share what the book is about, and help get that big history lens into it. Sure.
SPEAKER_02So let's start with the idea that Heart of the Machine and Future Minds are both books about the future of artificial intelligence, and both, in different ways, use a big history framing. Big history, for those listeners who aren't familiar with the term, is essentially zooming out and recognizing that we operate in a very, very small proportion of all of human history, all of geologic history, what have you. In the case of Future Minds, it was a book I had been wanting to write for a long time, actually about 20 years, about the nature of intelligence and how that plays into what the future of intelligence becomes. Now, as I said, I had a long history and interest in all kinds of different science fields. I've written about physics for Scientific American; I wrote for The Futurist for years and years when it was still running. In this case, I was exploring where intelligence starts. We talk about intelligence, and we try to understand it in relation to where AI is right now, usually in terms of ourselves. We look a little bit out to the rest of the animal kingdom and try to fathom the nature of intelligence, but we are one small little data point in the space of potential forms of intelligence. As we see throughout our world, there are many other kinds of intelligence, consciousness, and what have you. What is it that's in common? What is it that links all of this?
And as I started looking back and exploring what certain scientists were doing around complexity, around the nature of how self-organization occurs, I was taken with the idea of how we seem to see this recurring pattern, not just here on Earth, but literally throughout the cosmos, of increasing complexity in these little pockets of the universe, these little pockets of Earth. And this in a reality where we have basically learned and discovered that various forces, like entropy and the laws of thermodynamics, are working exactly against that. The universe is doing everything it can to tear complexity apart. And yet here we are, and here are all these little pockets in the universe, whether it's a biome, a brain, a single cell, that are developing against all odds. What is allowing this to happen? So this is why I wanted this framework, and I essentially built out from the Big Bang this continuing compilation of increasing complexity, stage after stage, until you get to proto-life and life on Earth, and then greater and greater levels of different types of complexity. We get pretty full of ourselves sometimes, thinking we have reached an apex of intelligence in our world. But the fact is, every single creature on this planet operates with exactly the intelligence it needs for the particular ecological niche it has come to occupy. And we just happen to be fortunate enough to occupy what Steven Pinker, and some people before him, calls a cognitive niche, which happened to give us quite a bit of flexibility. It's a very cool process when you start getting into it, and that's why I'm getting a little excited about it.
Skills Humans Need To Thrive
SPEAKER_01So I mean, it's a hell of a thing to tackle in one book. It's quite a span. We land a bit in the same territory; mine is more the macro history of the last few thousand years and cycles of change, like Spengler or Toynbee and Sarkar. There are just so many; we're standing on the shoulders of giants. In Super Shifts, I did a lot of research beforehand, because I had studied a lot of Strauss and Howe and all types of cycles, and I saw a pattern and convergence. I looked at it as roughly every 200 years. Everyone talks about the Industrial Revolution, usually the end of it, but I went back about a thousand years, and there's this repeated pattern of roughly 50-year eras within a 200-year cycle of an age. And to your point about intelligence: we're closing out the age of engines. I've said this in other interviews and conversations: over these millennia, all the progress of humanity has been about human physical labor, the movement of machines, how we build, the output and level of intelligence we could apply for the energy we had. My hope for the next 50 years is that we move to a fusion era and away from fossil fuels, for the mere fact that they've basically been the reason for most wars in the last half century. And the thing I wanted to pose to you sits within what I call the age of intelligence, and even after that, what I called in the book's epilogue the age of transcendence, where we kind of transcend ourselves.
When you look at, let's say, the next 50 to 100 years, what do you think we'll need in terms of this cognitive connection? What do you think we need as humans to thrive in this changing state of humanity? Easy question, right? Not giving you the heavy stuff.
Why LLMs Hit A Wall
SPEAKER_02Just start with the easy one, right? Exactly, the softballs. So let me start with the fact that I am really looking forward to reading Super Shifts; for the listeners, you and I have talked about exchanging a couple of books here in the near future. I'm definitely looking forward to getting into your framework with regard to ages and eras. In saying that, I want to recognize that I don't have a full grasp of where you have gone with that, so we can keep this at a high level. But I think we're both looking at different framings of how life and intelligence change over time, and in both cases, we're looking at these recurring cycles, these recurring patterns. Futurists love to find this. We love our projections, of course, but we're addicted to patterns: the cycles, the recursion. Each of these things has an appeal because it's a pattern, and we can latch onto it and develop ideas and theories around it, and sometimes they're right, and sometimes they're not. That said, where we are right now appears to be a very significant transformational point in human history. A lot of people certainly want to believe that, and I believe it to a pretty significant degree. I also recognize that we tend to be very chronocentric as a species.
We always tend to think that our time, this moment in time, is the most important moment in human history, whether we were living a thousand years ago or a hundred years ago. This is why we get millennialism and apocalyptic thinking and all of these expectations that the most important thing that is ever going to happen will happen during our lifetime. This is a really common thread in human thought, so I always like to temper things with that. That said, where we are is with technologies that, instead of replacing our muscles, our brawn, our labor, which is what we have poured into the world for most of the past many millennia, are easing us into an increasing development of intellectual power and understanding over the world, both through the knowledge we've accumulated and through the devices we've developed because of it, which have allowed us to see farther, standing on the shoulders of giants and all of that. So 50 or 100 years from now, we could be in a really significantly different place. I'm not one to necessarily think that we'll have a major singular event, what we're going to call a singularity, in 2045, the belief that Ray Kurzweil has been promoting for quite a long time, that that is when we hit the point where technology is basically able to be smarter than all of humanity, etc.
SPEAKER_01It's possible. You know he pushed the date, right? He's like an evangelist predicting the return of Jesus, changing the date.
SPEAKER_02He has held that for a long time. Wait, what did he move the date to? 2035?
SPEAKER_01Then it was 2040. Sorry, I thought you meant later. Now it might be 2045. So I don't know. He's not going to live to see it; he's trying to take 400 vitamins to last that long. It's got to be here before he goes, yeah.
SPEAKER_02I wish Ray all the best, and I really respect what he has done through the years. It's just that, for me, there's the whole aspect of exponential growth: I believe we're always in the knee of the curve. Basically, wherever we are on that projection, we are always at the knee, and it's always getting faster; it's just that, relative to human timescales, it keeps getting ahead of us. Because of that, I could argue that we're always in the midst of, or almost at, or maybe never reach, a singularity. When we get there, we'll know, I guess. But there are so many ways I want to go with this.
SPEAKER_01And I'd like to go into it; yeah, keep going. I've got a couple of questions queued up in my head.
SPEAKER_02So right now we've been focusing a lot on generative AI, the LLMs, the transformer technologies, and we'll talk about it probably a little bit later. But I'm really of the belief that, in and of itself, that is eventually a dead end. It can't get us to ASI, or AGI if we want to call it that; we'll talk about what AGI is and isn't later, probably. But I really do believe there are so many hurdles to get over. We're going to have technologies, AI and so forth, that can do amazing things; we already do. It's going to keep getting crazily better, but it's got a long way to go to get to the point where it is human smart.
SPEAKER_01Yeah, I agree. You and I both have the perspective of 35 years in the technology industry, looking at the waves, the bubbles, the changes, and this is one of the most impactful tools, I think, since the web. Nevertheless, a tool. It's a thing that's going to power so much. I love Claude, Claude Co-work and Claude Code; I can code for the first time in my life, I can build things. That's amazing. But it's a force multiplier in terms of productivity. If you stay with this, think back to the 90s: people who were still on typewriters versus people using word processors on a computer. There's a factor of so many things. But you're right. I believe we're going to need quantum computing, which is the next bubble, to really bring in the probabilistic nature of intelligence, because if you're right, the LLMs and everything they're trained on are us; it's the limitation of us. We need something that can go beyond us, computing-wise, to unlock other things, just as transformers in AI were a breakthrough. But people are really thinking that AGI is right around the corner, and even Claude will tell you we don't know whether it's alive or not. I just think it's a lot of hype. You don't buy that narrative either. What do you think is missing in this?
Why “AGI” Is A Misleading Label
SPEAKER_02We already mentioned a few things, and you and I have chatted before around some of this, so we both know where we're going with it anyway. But let's begin with the fact that there are many people pushing a particular narrative, generally speaking the people leading some of these major AI companies; how much they believe the narrative versus how much they are using it, who knows. This is the most important thing ever, we're leading to AGI, if we don't deal with this, we're the ones who save the world, we have to be the ones to develop it. This is what so many inventors have believed throughout history: whatever they're building or working on is the most important thing in the world, and it's up to them to develop it to save the world. Call it egotism, but it's a very common trait, to greater and lesser degrees, through the years. But in terms of AGI: there is no AGI. AGI means artificial general intelligence, for the listeners who don't know the acronym. What people generally mean when they talk about AGI is something equivalent to human intelligence. Well, as I was saying before, we occupy a particular niche in the ecosystem. We are not a general intelligence. We have very, very specific, limited intelligence.
SPEAKER_01Just as a side note, if you ever watch bodycam videos on YouTube, you definitely know we're not all at the same level of intelligence. That's true. People make a lot of bad choices.
SPEAKER_02Think of a bat or a dolphin living its inner life. Its world, its consciousness, is entirely different, because it has echolocation; it literally has the ability to visualize the world around it in a way we can't even fathom. We always try to imagine it in an analogy to our own hearing and sight, but honestly, it's nothing like that. Consciousness and intelligence are different for every single species, and they will be different for artificial intelligence as well. In terms of where LLMs are, you made the point that they can only go so far because they're only built on what we know, what we've poured onto the internet for the past 30-plus years. And it's a huge amount. It's amazing, and it's incredibly deceiving, because we anthropomorphize so readily and so easily when we hear this thing echoing back at us. I know it's a little tired, but stochastic parrot is a hilarious term that was coined a few years ago, referring to LLMs. The fact is that all they're doing is using statistics to reflect back at us everything we've poured onto the web for the last 30 years. That is a small subset of what human intelligence is. We go through the world and we learn from experience. From the moment we are born, even before, we're experiencing our environment in different ways: the warmth, the sounds, eventually moving about in it, acquiring a physical understanding of the world, an intuitive understanding of physics, through our interaction with the world and with other people, learning relationships. AI doesn't do any of this. But something you mentioned earlier made me think of it. I've been writing recently about the many different ways that AI is going to move ahead with new and different approaches and technologies to take us beyond where we are with LLMs.
Just today, in one of those fields, there was news about Yann LeCun, who recently left Meta, where he was the head of FAIR, Facebook's AI research group. He left a few months ago and started AMI, Advanced Machine Intelligence, and it just broke today that they raised 1.3 billion in funding in this initial round. He's got clout, and he's someone I truly respect in terms of where he is going and his recognition of the limitations of the AI methods we've been developing. He's one of the, I don't know why they use the term, godfathers of AI, who a few years ago won what's essentially the Nobel of artificial intelligence, the Turing Award. He has literally been in the field going back to the 90s and earlier, developing some of the early neural nets that eventually led to where we are today.
Machine Consciousness And Personhood Tests
SPEAKER_01Well, speaking of him, you recently told me in our pre-interview call about this paper you wrote about consciousness, for a consciousness journal. I know it's not out yet, but give us a preview. Obviously, if you're a philosopher, this is great, but are AI systems even conscious? How do you even get to a definition? Like you said, there's no AGI. What does it even mean for them to feel pain, to ask about themselves, to worry about their demise? What does consciousness mean for an AI?
SPEAKER_02Sure. So this is coming out with Springer, I don't know exactly when, sometime later this year, in a journal on consciousness, and I was invited to write a piece. Essentially, I wanted to explore what it might mean for AI to achieve some level of personhood. Is it possible? Does it make sense? What should we fear about it? What are the signs, essentially? Consciousness is one of those things that is a huge mystery in many respects. Of all the things that we humans know and think about, it's probably the one thing we really don't have much of an idea about. The philosopher David Chalmers, many years ago, wrote a paper about the hard problem of consciousness. Essentially, this is the fact that when we experience the world, when we experience a color or a sound, why does that have this subjective nature for us? Why is it anything other than the equivalent of light hitting a sensor on a robot? It can conjure memories, it can conjure feelings. We have this entire internal subjective life that occurs because of something about our minds that we don't understand. There are easy problems involving consciousness and there are hard ones, and this is definitely a hard one, and it's not the only one. But when we talk about the nature of this subjectivity: you and I know, we hope, that each other is conscious. We don't know for sure. I understand what my subjective life is like; you know what yours is like. We do not know what either of us truly experiences. We make conjectures, we have ideas, we make analogies, because we are the same species. And this is the case for all eight billion of us. But do animals have consciousness? Probably some of them, with different levels of self-awareness; we've done tests and so forth. But when you get to machines, there are all kinds of things that we ascribe a personal nature to. We anthropomorphize like crazy.
We have from almost our earliest days, because it gave us survival capabilities. If you recognize or believe that parts of your environment have agency, it makes them a little more dangerous, and you're a little bit more aware of them. Is that a stick or a snake? Well, I think I'm just going to give it a wide berth either way. When you talk about so many aspects of what we create and build in our world, and the number of things that we believe are conscious, we treat them like they're people: boats and guns and cars and so forth. We've been doing this forever. People give their cars names; they give their boats names. I talk about this in a chapter in Heart of the Machine. But when you talk about these things, and then you talk about AI girlfriends, about interacting with chatbots, we start believing these things are conscious, even without them being conscious. This was a problem going all the way back, and probably earlier, to Joseph Weizenbaum's ELIZA chatbot, which he built at MIT in the mid-1960s. You had students and co-workers who were using it, talking to it and interacting with it as if it was another person. And this is telling.
SPEAKER_01Are you saying the Turing test was beaten back in the 1960s?
SPEAKER_02You could say that. I mean, let's face it, it wasn't formalized in that sense. Right, well, they knew it was a machine.
SPEAKER_01Right, exactly. If they'd told them, this is a terminal and somebody on the other side is going to answer your questions, to see if it fooled them, then through real rigorous testing you could have passed the Turing test 50 or 60 years ago. At least.
SPEAKER_02Yeah. So it's a hard question, and trying to understand what we would need to do to be sure an AI was conscious is part of the paper. But then once you get to the end, it's like, wait, if you can't prove something is conscious or not, will we ever know for sure? And should we ever get to that stage, then we've got all of these ethical conundrums. Basically, are we creating our overlords or are we creating slaves? There is almost no in-between in some respects. Digital serfdom, digital feudalism.
SPEAKER_01Yeah, digital serfdom and digital feudalism. Who knows? That might be a whole other podcast episode.
unknownYes.
Automation, UBI, And The Job Question
SPEAKER_01No, I believe we are heading that way, which concerns me a great deal. I hope we can buck that trend. Me too. I think where this transitions to, in our work and the work of other people listening, is the future of their jobs, where they work, the nature of work. I don't think artificial superintelligence replaces things like that. We have to solve the job question, right? What is the nature of work if you automate so much? Does it leave room for new things, new discoveries? There's always going to be a part of the population that maybe just wants to subsist on a basic income or UBI. And what does that mean in terms of what is rewarding? Like you said about the span of history, we're a tribal people; we are social. The social fabric of what it means to accumulate wealth and status, will that change? I don't know. But a lot does have to change, and we can talk about the education system as well. But I call them safe futures, because doing foresight with artificial intelligence using only an LLM, you're only going to get out, like you said, a stochastic parrot telling you kind of what you want to hear. So you have to be there to really push the edge. How do you work with clients? How do you get people to push to the edge of their mental map without having them fall off the map?
SPEAKER_02Yeah. It's a balancing act, like so many things. You're trying to recognize that if you go too far out there, too long, too far, you lose people. You have to keep bringing it back, to a certain degree, to relevancy: how is this related or relevant to the particular aspect we're exploring here? It's not necessarily about staying safe so much as keeping the perspective.
SPEAKER_03I guess that's how I might say it. How about yourself?
SPEAKER_00I get people to think about the worst thing that could happen.
SPEAKER_01Get them to go so far that they see where their wildcards are, where their black swans are, and what their horizon limits are, and then challenge them to go further, and then kind of pull it back. It's a stretch goal in that way: push them farther than what they think they're capable of, then bring them back to a safer space, but beyond the safe futures, is what I would call it.
SPEAKER_02There's also... sorry, go ahead.
Future Ready Education And Upskilling
SPEAKER_01Oh, yeah. Black swans are just things that happen that you would not expect, like COVID. A black swan is a pandemic, a systemically challenging event that changes things overnight and is not expected. It's the rare thing that happens, like a black swan appearing among a bunch of white ones. And I think about the audiences too. If you're getting into geopolitics or economics or other topics, you're going to have some type of division and difference, and in this day, navigating that is, I think, one of the more challenging parts. There are people who want to push DEI as a policy, as an approach, and others who think it's garbage and horribly biased. So you have to navigate, and you have to raise up everyone's opinions. Nothing's invalid, but it's a question of what fits for the future, and you can put them in different places so that people can see the possible futures and explore them. I think that's where it has its power. But you and I have talked a lot about this: you come from a family of teachers, so we both have the same frustration. I have a nine-year-old that I desperately want to homeschool, and my wife is good with that; my sister homeschooled her kids. Education, again, it's the age of intelligence now, and we still have Taylorism and Fordism. If you look at an office building now, you could take the computers away and put in sewing machines, and it's still the same. I've always said that the whole point of the 19th-century model was to get people to be literate, to do math, to do basic things at an eighth-grade level so they could go to the farm or the factory and work and work and work. And college was an elitist, very rare thing. It only changed after World War II.
But what do you think a future-ready education system looks like? Can we even do it? Do the institutions have to be completely deconstructed, or can they make the change?
SPEAKER_02It's going to be hard, and I think it's going to be a big challenge to get everybody on board. I think it's going to be an incremental process, and it's going to be driven significantly, if we get to the workforce issues you were touching on earlier, by so many jobs going away or being displaced on a rapid and recurring basis. The way that we educate, the way that we fund education, the way we certify education, and the way that it is provided, both within the workplace and outside, will have to change. I've been a lifelong learner, but I do believe we have pretty much cultivated a society where we grow up thinking, I'm going to go to school for a finite period of time. I'm going to graduate from high school and I'm done. I'm out of here. Or I'm going to get my bachelor's or my master's. And then people are pretty much finished, with minor certifications along the way.
SPEAKER_03Yeah.
SPEAKER_02And until 40 or 50 years ago, you pretty much had a single career, often a single job, for an awful lot of your life, at least for a certain subset of educated people, and they would go and work for a particular company. Then that incrementally changed. We've seen the patterns, the studies, where a certain cohort has X number of jobs in their lifetime, and it keeps increasing as we move along. The fact is that we're going to have to learn new skill sets. We're going to have to upskill on a regular basis. And where and how do we do that? You can't go back to school for another degree at age 40 or 50 when you have a family and a mortgage; it's disruptive on so many levels. It's also probably not a great idea if, by the time you finish your degree, it's going to be antiquated or that field is going to be gone. So how do we upskill in semi-real time in a way that provides continuity for employees, but also for the companies? We have all of this institutional and legacy knowledge that develops and builds over time, especially in a longer-lived company. It's not as evident with a very short-lived startup, for instance.
SPEAKER_01But look at McKinsey. McKinsey's a hundred years old this year.
SPEAKER_03Right, right.
SPEAKER_01BMW's almost the same. That's a hundred years, a century.
SPEAKER_02We can't keep losing that knowledge every time there's a downturn of, oh, we're going to have to let 20 percent of you go. Well, wait a minute. What if we treat our human resource like exactly what it is? We act as if there's only one way to utilize a particular person and skill set, but I think there are a lot of different ways we can approach this so that we end up recognizing and utilizing probably the most valuable resource we have. Money, energy, material resources, these are all things that, up to a point, we can get readily. Human knowledge is very specific. And I think we're going to find in this age of developing artificial intelligence that it can't do everything we're being promised. Not in the short term, not for decades, probably. But we will need a level of hybrid intelligence, not just for training the AI, but for working in conjunction, in concert with it, so that we are utilizing the best parts of both partners, both co-workers, cobots, however we want to refer to them. AI does the laborious things, the repetitive things, the things it does at crazy speed, and we bring intuition, creativity, relational capability, all the things AI doesn't do well. Not to mention double-checking some of the crazy stuff it does these days.
Mindstock And Writing As A Futurist
SPEAKER_01That's a great perspective, and a lot to behold. I think we're going to see radical change in the next five years at least, definitely ten. I think things are going to fall apart, really fall apart, and there's going to be a lot of experimentation. Some people will try to hold on, but I think it will change. There's a lot of speculation in that, and speaking of speculation, as we're winding things down to the close here, I want to talk about a book of yours that I think is so cool. You wrote Mindstock, a techno-thriller, so it's a departure from the futurist books you wrote before. My own goal is to write science fiction and do more of that; I'm kind of a student of it. So give the background of the book, how it came about, and then how being a futurist helped you write something like this, which I think is a very different kind of writing.
SPEAKER_02Well, we've been back and forth on this a little bit, but Mindstock is the culmination of a dream. While I've written most of my adult life, and I guess before that too, most of it, at least most of it that has seen the light of day, has been nonfiction. And that's great. It's exciting, I love doing it, but I've always wanted to move into fiction as well. Fiction is an entirely different animal, and I had to learn and unlearn so much in the course of getting to that point, because all of the habits, all of the methods I'd learned writing nonfiction didn't work. Nonfiction is very linear in its nature. I tend to think about fiction as very iterative: going through the process, interacting with the characters and the world and the themes and the plot and the dialogue, and this doesn't work now because I changed that over there. It's cyclical in a very different way than writing nonfiction. You had asked me before how my fiction informs my futures work, and I'm not so sure that it does at this point. Everything we do and learn informs us, but it's not something I bring to the futures work directly in that sense. In the other direction, though, when you read it, and when many other people have read it, they note and are taken with the futurist view. It follows from the fact that I'm familiar with a lot of different technologies, a lot of different directions they can and will go. And it's fun to speculate. But as I tell people, that's something I do for the fiction. It would be irresponsible to speculate to that degree in futures work for a client. They don't really want you going off for a year and creating this other world.
It's really more about application in the case of professional work.
Restarting Civilization Thought Experiment
SPEAKER_01As I like to do at the end of the show, I sometimes ask some rapid questions. One I ask, depending on the conversation: I always think about the movie The Time Machine, or the book too. The time traveler goes back into the future to be with Weena and help the Eloi rebuild, and he takes a couple of books with him. Wells is one of my favorite authors, and you never know which books he takes. So I always like that restarting-civilization question, and it's kind of appropriate for you, having looked at the whole span of history, if you had to reset things. Two books, two pieces of music, two objects: what would you bring in your bag? I'd love to ask.
SPEAKER_02I thought you were going to give me the alternate question. You know, I really have a problem with this one. Well, we'll do both. I have a problem with this in part because, first of all, music: I can't think of a better way to get sick of a song or a piece of music than having it be the only thing to listen to for 50 years or whatever. But let's just go with that. In terms of books, part of me wanted to go with something like Copernicus's On the Revolutions of the Heavenly Spheres, something that kicked off the scientific revolution, essentially. Then I thought about somebody like Jacob Bronowski, who did the BBC series The Ascent of Man and wrote the companion book back when I was a teen, which I was just so fascinated with. He was probably one of the first generalists I ever knew of, and he had such a broad view of the history of science. Then I thought, well, how about something practical, like one of those big how-things-are-made volumes? That might actually come in fairly handy for restarting civilization. So there are a few different possibilities there, but I'm not really looking at what I want to read over and over and over again for the next 50 years.
SPEAKER_01It's interesting what people have said in the past; it definitely gets you into the mind's eye. So the alternate question, the one we just put in there, which is a good one: what's one book that most changed how you think about the future?
SPEAKER_02Right. Well, I touched on it earlier. There have been so many books that have really meant something to me. But I think about a 12-year-old Richard reading a book like Future Shock, which reflected some really interesting things I was already observing in studying the nature of invention and progress: the increasing speed at which new technologies seemed to be coming online over the last 200 years or so. As I got a little older, I thought, this is probably just a perspective thing, because the writers compiling this pay more attention to the more recent inventions, and we see that still to this day. But because of observations like that, because of reading things like Future Shock, it pretty much colored how I've viewed the world and futures work ever since.
SPEAKER_01That's great. So, wrapping things up here, let's think about the long term and looking back. Legacy. Future Minds, Heart of the Machine, Mindstock, decades of work. Adding it all up, it's a life and a career well lived. What's the through line? What do you want people to take away as Richard's impact on everything?
SPEAKER_02My perspective on it is that I'm just getting started. I've got a lot of books still in me, and honestly, I've got to write quite a few more of them in order to connect all of these dots and figure out what that pattern or legacy might be. Honestly, I'm having a blast doing it.
Where To Find Richard
SPEAKER_01That's great. Well, I just want to say it's been such a pleasure to have you on, and I hope everyone had a great time listening to this conversation. It's been quite an adventure through the universe; that might be part of the title for this episode. I want to say thank you, and for people who want to find you, what are a couple of places to start? We'll have them in the show notes too, but I know you've got LinkedIn and your website. Where's the best place to begin?
SPEAKER_02Thank you for that. We'll have information in the show notes below, but my website is literally RichardYonck.com. I think you can see my name on the bottom of the screen: Y-O-N-C-K, just one word, dot com, for my professional site. You can find all of my books on there, and they're all available on Amazon, in libraries around the world, and in various other bookstores, Barnes and Noble and what have you. Future Minds came out just these past several months, and it's the first in a series of four; I'm working on the next book at this time. Essentially, that's where you find me, and if people want to have further conversations or reach out and engage me for either work or a talk, please feel free. Great. Thanks, Richard, and until next time. All right. Thanks so much, Steven. Take care.
Final Thanks And Show Outro
SPEAKER_03Thanks for listening to the Think Forward Podcast. You can find us on all the major podcast platforms as well as at www.thinkforward.com. See you next time.