Think Forward: Conversations with Futurists, Innovators and Big Thinkers

Think Forward Ep 113 - Future of Systems with John Smart Part 1

Steve Fisher Season 1 Episode 113

 Welcome to Think Forward Show Episode 113: The Future of Systems Part 1 featuring John Smart 🌍🔧

I’m Steve Fisher, your guide on this futurist journey. In this special two-part episode, I sit down with futurist John Smart to explore systems thinking and how it shapes our understanding of the future. Whether you work in strategy or product innovation, or you’re a foresight enthusiast, this conversation is packed with actionable insights on:

🔧 How systems evolve and adapt

📊 The balance between innovation and protection

💡 The role of technology, biology, and social systems in shaping the future

🚀 Big ideas like acceleration studies, evolutionary development, and the future of digital minds

John’s unique perspective on foresight will challenge how you think about systems and help you better prepare for the next big shifts. This is one you don’t want to miss!

🔗 Steve’s Site: www.stevenfisher.io

🔗 Episode List (Lite Version): https://lnkd.in/eAVcg6X4


🎧 Listen Now On:

Apple Podcasts: https://podcasts.apple.com/us/podcast/think-forward-conversations-with-futurists-innovators-and-big-thinkers/id1736144515

Spotify: https://open.spotify.com/show/0IOn8PZCMMC04uixlATqoO


Think Forward Show (Light Version): https://lnkd.in/eVBVJRCB

Think Forward Show: www.thinkforwardshow.com


Thank you for joining me on this ongoing journey into the future. Until next time, stay curious, and always think forward.

Steve F:

John, welcome to the podcast.

John:

It's an honor to be here, Steven. I love the way you think and the work you've done. Your practical approach to organizational foresight, and the futures and foresight practice you built at McKinsey, it's wonderful. Thank you so much for having me on.

Steve F:

Thank you for the kind words. I've been following your work for the better part of 20 years, since I've been doing this as well, watching you become a true thought leader in the space. What's been great for me is that I followed your work all these years, but we actually only met about six months ago, and we found quite a kinship and brotherhood in our approach. Where we met, for those of you on the podcast: there's a gathering at the University of Houston every year in the spring. It's called the Spring Gathering, and this next one coming up is going to celebrate the 50th anniversary of the program that Andy Hines and Peter Bishop built. That's where we met. If you're a futurist or you're interested in this kind of stuff, it's such a great event. And if you can't afford to go to Dubai, this is probably the next best thing. I want to welcome John to the podcast. For those who don't know you, can you give us a bit about your background and how you became a futurist? That would be great.

John:

Wow. I guess I became one through a proclivity for thinking about the future. It was a game I played starting at five or six. One day I was bored and I started realizing, wow, the future is complicated, and I can tell stories and my parents can critique them. They were indulgent parents, basically. Boredom and indulgent parents allowed me to start thinking about the future and talking about the future, trading stories. Today we call them Weeble stories. If you remember, Weebles wobble, but they don't fall down.

Steve F:

This is a toy from the 70s, for all you millennials and Gen Zs out there.

John:

Yeah, it's an egg-shaped toy with a weight on the bottom; you push it and it wobbles over and then it snaps back up. And that's what a good future story is: it's been well critiqued. With most stories about the future, a good subject matter expert will poke a hole in it and find some real problems if you critique it widely. But then it'll get back up. If the trends and the data and the models support it, they'll keep getting back up. Like all the stories we were telling about self-driving cars and computers we talk to, in the 90s and the early 2000s. Twenty years later, they happened, because those were Weeble stories. We should have been telling those stories, critiquing them, funding them. So I started doing that as a kid. Then I went to the Haas School of Business at Berkeley and started a business, actually started three; the third one, finally, did well enough that I could sell it; my partner and I sold it to Princeton Review. It was science tutoring and test prep, all up and down California, called HyperLearning. So I got a really good exit, and mid-career I asked what I wanted to do next. I went back and reminded myself of my high school passion for thinking about the future. And that's when I found the Houston program, the oldest one, started in '75, training futurists. Now there are 27 programs around the world where you can get a master's or a PhD in what's called strategic foresight, and I'm sure we're going to get into what that is. So I started there, got a master's, and started an organization, a nonprofit. If you go into foresight, one of my mentors, the great Joseph Coates, who ran the Office of Technology Assessment, a bipartisan tech-futures advisory agency that the U.S. Congress stood up to help it think about futures of technology and society and help with policy, he said: you've got to start with a client. Start with a sector, start with a client. So I started with this topic of accelerating change, which to me is one of the most interesting and neglected topics in the foresight space: what things go faster every year, and why, and what things go slower. And that led me to Moore's Law and Eroom's Law. Just to give your listeners an overview, Moore's Law started in '65 with Gordon Moore, co-founder of Intel. He said that in this special computing space, every two to three years, computers get twice as powerful per dollar because of this magic shrinking game we're doing. So you get more and more using less and less. You get this efficiency.

Steve F:

Okay.

John:

On the flip side, it takes more and more dollars to get the same output. That's Eroom's Law, Moore's Law spelled backwards. The famous example is the dollars you have to spend to get a drug through the FDA. Since 1950, when the data starts, every nine years you get half as many drugs out per billion dollars you spend. And the reason is that as the quality of life, or the value of human life, goes higher, so do the oversight, the regulation, the social safety requirements, because the world's more complicated. So building a big bridge, an aircraft carrier, a new institution, anything that's complicated in the human space, actually slows down in its output the richer we get. So you've got the space of the large, where things are decelerating in productive output of many kinds that we can measure, and that's probably a good thing in general. And then there's this very special set of levers that are driving acceleration, which would be digital information and nanotech: the growth of information, the growth of computing, the growth of sensors, the growth of digital twins, the growth of AI, and, in the nanospace, things that don't require a lot of oversight. Some of the fastest-advancing areas of genetics, which is also nanotech, are going to be in agrobiotech, because you don't have the same oversight for experimenting with agricultural products as you do for fiddling with genes in an animal versus a plant. And so we've seen these pretty astounding advances in engineering yeast, etc., that are more like Moore's Law type things. And of course reading genes, as our friend Ray Kurzweil said, nanopore sequencing, that's accelerating. So I learned early on that there are these two futures, right? There's what we can call inner space, which is the space of simulation and what I called at the time STEM density and efficiency: space, time, energy, and matter. If you can localize and miniaturize critical things, you get these crazy accelerations. And then there's the outer space world, where things actually slow down. And it's not only an S-curve for some of these things in the human space; it's what we call a life cycle curve. Things slow down, saturate, hit a peak, and then go down further. So pollution, climate change, human population, use of critical resources, all of those are actually in what I call an age of peaks. We're very close to peak car, globally, because people just want to use these transportation networks. We hit peak steel in 2010, because our recycling is so good now around the world that we're digging less iron ore out of the ground, for the whole species, than we used to. Other things, like aluminum, we got to recycling earlier. Plastics we're going to get to; we're going to hit peak plastic at a certain point. And peak oil demand is within sight. Some people have said we've actually hit it, but others have said no, it's probably within the next 5 to 10 years, because we're doing this massive electrification. So just like we left coal in the ground, we're going to leave a lot of oil in the ground.
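To make the asymmetry John describes concrete, here is a quick compounding sketch. It only assumes the two rates stated above, Moore's Law doubling roughly every two years and Eroom's Law halving drug output per billion dollars every nine years; the function and numbers are illustrative, not from John's work.

```python
# Illustrative compounding of the two trends described above.
# Assumptions: compute per dollar doubles every 2 years (Moore's Law);
# new drugs per $1B of R&D halve every 9 years (Eroom's Law).

def compound(start: float, doubling_years: float, years: float) -> float:
    """Value after `years`, doubling every `doubling_years` (halving if negative)."""
    return start * 2 ** (years / doubling_years)

years = 30
compute_per_dollar = compound(1.0, 2.0, years)   # 2^(30/2)  ~ 32,768x
drugs_per_billion = compound(1.0, -9.0, years)   # 2^(-30/9) ~ 0.1x

print(f"After {years} years, compute per dollar: {compute_per_dollar:,.0f}x")
print(f"After {years} years, drugs per $1B spent: {drugs_per_billion:.2f}x")
```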

Steve F:

Before I get into your work as a futurist, that's interesting, this talk about these peaks. What did you call them?

John:

The age of peaks.

Steve F:

The age of peaks. What would be the impact of a larger shift into another type of, I don't know, exploration in space, or another type of industrial shift, that would cause the peak to move further out? We've gotten optimized for steel or aluminum, but if we're going to build a lot of rockets...

John:

That's a great argument, and it goes back to, well, my interest, as you may be able to see, is big systems and understanding them. I started with understanding acceleration, and then I moved into understanding adaptation, how complex systems adapt, right? And I think there are two major things going on. There's evolutionary exploration, and there's developmental optimization. We can think of our own bodies as a perfect example. We have our children; I'm holding up my left hand right now, and my fingers are spreading out. This is what evolution does, right? In living systems, evolution explores. This is Darwin's tree of life. So what you can say about the future, apropos of what you just said, is constant experimentation, and there are going to be new areas where you get this diversification and acceleration, right? But that's always balanced in the living system by development; I'm holding up my right hand now, and it's not a tree that's expanding in a developmental system. Development is taking all the chaos and diversity out of a system and hitting a future target with high predictability. So you've got unpredictability and predictability. From a physicist's perspective, the universe is an unpredictable, predictable system. It's such a crazy paradox. What's predictable, obviously, is the laws of physics, laws of chemistry, laws of biology, laws of society. We were just talking off pod about these secular cycles: Peter Turchin's work with cliodynamics, a perfect example. Carlota Perez's work, her book Technological Revolutions and Financial Capital, another perfect example of these highly predictable large-scale cycles in the complexity of the system. That is not an evolutionary signature. That is a developmental signature. So when you say psychological development, economic development, social development, planetary development, you're using the D word and you're talking about the predictable features, or let's call them probabilistic features, right? But at the same time, you have to talk about all the possibilities and all the new emergent levels. So, and this is a super long response to your question of where the next explosion happens: if human reproduction is peaking, I would argue the next explosion is in digital minds, robots, sensors. It's going to be these digital quasi-living organisms that also are replicating, evolving, and developing, but they're using less and less resources in every replication cycle. This is the really interesting thing about accelerating change, because of the STEM density and efficiency we talked about. See, our children are going to use the same resources in replication as we do, or even more, because they have more abundance, they have higher goals. Digital and technological systems do not do that. The critical ones, the ones that lead, dive into this inner space world. That's why quantum computing is so much more crazily efficient than classical computing. So if you can figure out how to take a process in your organization and localize it or miniaturize it, you're using this STEM density, right? You're using these tools. That's why cities totally outcompete the rural environment. That's why corporations and their supply chains totally outcompete the loose federations that people used before they had supply chains and containerization and all those things.
So that's what we call densification of the world, right? Density is beating sparsity. But the other thing we learned at our Acceleration Studies Foundation as we were studying this is there's this other term, dematerialization. What is also happening with this digital acceleration is that not only are we densifying, localizing and miniaturizing wherever we can, we are also dematerializing a lot of processes in our world, things we really care about, in two ways. We're either digitizing, which is creating information that measures the physical world and can start to substitute for it, or simulating, which is the process of taking a goal, a value, an objective, and creating a model that we can explore in what we call fast space. We totally outcompete slow, expensive, risky, simple outer space with fast, efficient, more conscious inner space, which is more deeply able to both explore and predict. And that's what human consciousness is. It's the most densified and dematerialized thing that we know of in our universe, certainly on Earth, right? It is, deeply, a simulation machine more than anything else: three pounds of this very special electromagnetic structure that's thinking at 100 miles an hour, with 80 trillion unique synaptic connections between each of our ears. There's nothing as complex as that on the dematerialization side, and that's why human foresight, applying our thinking to where we want to go next using brains, educated, supported, optimistic, motivated individuals, totally outcompetes any other process of looking ahead. That's why democracies outcompete autocracies long term: in network theory, autocracies have to deny power and simulation capacity down the hierarchy, whereas democracies create this amorphous network where each system is both cooperating and competing, so the network has much more capacity to simulate, right? What we're going to talk about is this idea that individuals, organizations, and networks are the three fundamental ways complex systems adapt. And this democratic process that we use in the West, long term, I would argue, is way more adaptive than an autocratic process like we're starting to see in China, and like we saw, of course, with communism versus capitalism in our parents' era, in the Cold War. So I think these big-picture models are super useful for helping us get an intuition for what's most adaptive for us. As an organization, I want to be thinking about my individuals. I want to be thinking about my organization as a group with shared norms. But then the fusion of those two is that I'm embedded in a network with unique individuals and organizations who are both competing and cooperating. So what's my network strategy? How am I building the network, or what we call in the tech space the platform, right? All the platform plays by the tech titans. What we've seen in the last 20 years, certainly since the iPhone was invented, is platform plays totally crushing the individual and the group strategies. It's the network strategy. If I'm not at the center of that hub, how can I be one of the critical hubs in this digital network that's emerging? And so, intangible value. My friend Kartik Gada, the venture capitalist, asks: what is the intangible value you're creating in the network with your strategy?

And does everyone else in the network see it and use it? That just totally outcompetes the tangible space, because that's really last generation.

Steve F:

You mentioned a lot of things there, an excellent breakdown of the nature of systems. You had the Acceleration Studies Foundation for a long time, the ASF, which you mentioned, which is now the Evo Devo Institute. That leads us to a lot of what you've started to talk about with systems: evolutionary development. Many people might have heard the term, but they don't really know what it is. I think explaining that, connecting it to what you've just shared, and taking those concepts as they relate to doing foresight and futures work would give people a grounding in it. With what you just shared about systems, we're about to take a new leap, a new age, I think, in humanity. Could you elaborate a little bit on that?

John:

Absolutely. And I'll try to do it from the perspective of the strategist.

Steve F:

That would be great, because I think there are going to be people listening, some scientists, some researchers. Many strategy people listen to this podcast, obviously many futurists, but also people in innovation functions, in strategy functions, in product functions. So as you go through the explanation, how does it relate to their work? Because they'll look at the theory, but there's also the application.

John:

Yeah, great, thank you. So I wrote a book; it took me six years with a team in my little consultancy, Foresight University, which is an arm of the Evo Devo Institute. We realized early on that the great Alvin Toffler, the 20th-century futurist, got to these ideas. Many people still have his book,

Steve F:

Oh, he's what started me on my journey. When I was 13, back in 1984, I read Future Shock and I was like, oh, you can do this as a job. It really did change my life. I was a science fiction fan and I read a lot of hard science fiction; people who have listened to the podcast know that about me. But his work with his wife, and I think Heidi doesn't get recognized as much.

John:

You totally upgraded my sim. What I should have said is the great power couple, Alvin and Heidi, because Heidi did all of his research and all of his critiquing. She took his stuff and she would bleed red ink all over it. And that's one of the central themes I realized when I was looking for fundamental models for this book. It took me six years to write it. I didn't start it until I was almost 18 years into my work as a futurist, my second career basically, after I'd finished my master's at Houston in strategic foresight. I wanted to put fundamental models down. My book is called Introduction to Foresight, and it's really aimed at students, but not just students in these 27 academic programs: people who consider themselves autodidacts, students of life. A lot of our great leaders, a lot of our great strategists, are constantly learning. Just-in-time learning,

Steve F:

I'm pointing at my hand like that. For those of you on this podcast, that's probably a default. You may not even know the term, but if you're a lifelong learner and you always pursue knowledge, irrespective of a degree pursuit, just because you want to learn, there are always things to...

John:

Totally, brother. My book is aimed at those people, which is all of us, really, because it turns out that learning, foresight, action, and review, what's called the LFAR loop, is actually built into human psychology. In the last ten years this whole psychology-of-foresight field has emerged, and I quote a lot of the studies in my book. These are the psychologists asking: how do we actually look ahead? First we learn, which means we look at the past and the present that's relevant. Then we look ahead with foresight, which we'll talk about in a minute. Then we act, and then we review. All of us do this, and some of us are a little stronger in one set of steps and weaker in the others. I'm strong in learning and foresight. I don't necessarily act and review as much as I should, particularly review. But the better I get at following that loop, it's called the do loop, the higher the quality of my foresight and action.

Steve F:

the do loop, not the doom loop. I just

John:

the D. O. Yeah, the DO loop.

Steve F:

O loop.

John:

Yep. And we actually coined that term, I don't know, six years ago, off of John Boyd's OODA loop, if you know what that is, for those of you in the military community: observe, orient, decide, act. And then, of course, there's the predictive processing loop in psychology. There's the design thinking loop. There's the agile development loop. There's the action learning loop. There's the scientific method: observe, hypothesize, experiment, and look at your results. These are all four-step loops. The Deming quality loop: plan, do, check, adjust. All of these loops, happening at different speeds, are just do loops. So we realized, okay, that's a fundamental model we have to get across to people in this book. And another one we got across, to your question, is: what are the fundamental processes of foresight? Toffler got them in Future Shock. In the last third of the book, he said there's a science of foresight that's emerging, and there's an art of foresight that'll always be there. The science emerges from the predictable aspects of the universe. The art of foresight is all of the creative ways we deal with the unpredictable and uncertain. And then sitting on top, as a fusion of art and science, is the preferable future: the politics, the strategy, all the fights that we have over what we want. So this was his three Ps model. And then Roy Amara at the Institute for the Future, a really famous futurist, coined Amara's Law, which is that for exponential processes we underpredict in the long term and overpredict in the short term. We think everything's coming right away. This is also called mistaking a clear view for a short distance.
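All the loops John lists share the same four-phase shape. As a toy illustration only, the sketch below maps a few of them onto one generic cycle; the phase names are paraphrased from the conversation, and the code structure is our assumption, not something from John's book.

```python
# Toy illustration: the four-step loops named above, viewed as one generic cycle.
# Phase names are paraphrased from the conversation; the mapping is illustrative.

FOUR_STEP_LOOPS = {
    "do loop (LFAR)":    ["learn", "foresight", "action", "review"],
    "OODA (Boyd)":       ["observe", "orient", "decide", "act"],
    "Deming quality":    ["plan", "do", "check", "adjust"],
    "scientific method": ["observe", "hypothesize", "experiment", "review results"],
}

def run_cycle(loop_name: str, cycles: int = 2) -> None:
    """Step through one named loop a few times, printing each phase in order."""
    for i in range(cycles):
        for phase in FOUR_STEP_LOOPS[loop_name]:
            print(f"{loop_name}, cycle {i + 1}: {phase}")

run_cycle("do loop (LFAR)")
```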

Steve F:

Amara's Law, yeah. I remember back in 1998, Palm Pilots were out and the era of mobile was coming. And it kept "coming" for a good, I don't know, 10 years. It was always coming tomorrow and...

John:

Always coming. Yes. So, I quote about 500 books in my book. My book is for people who like to skim books, and there's a four-page handout in there that helps you build the habit of skimming a book a week, if you'd like to do that: creating a little summary on the inside jacket of the things that were most cool to you, building your own personal index, and then teaching that to somebody else in a five-minute Ignite presentation. And you have people on your team doing this once a week, and then you have your brown-bag Fridays where everybody gives a pitch on what they learned. You do it in the time it takes to watch a movie; we call it interval reading or sprint reading. So you sprint through a great book, you create this subset of things that really interested you, you go right to the index and circle 10 things you've never seen before, and you go right in and read those. Any one of those that looks interesting goes into your little index, and then you pitch it to others. Suddenly you have what's called GIGO, which, as we may all remember if we coded in college like I did, is garbage in, garbage out; but the flip of that is great inputs give great outputs. The psychology of foresight tells us that within 48 hours after we read some high-quality thing, our thinking quality is actually elevated, because all of that information is not yet in long-term memory. It's still in what's called the cram window, which is short-term memory. That's why we cram for tests and totally outperform relative to when we don't, right? Anything you have seen, experienced, or thought about in a 48-hour window sits in an area of your brain called the hippocampus, and it has so much more capacity to hold interesting things. Then, while you're sleeping at night, you write a subset of that to your cortex, to long-term memory. But only a subset. So you have GIGO: if you're watching something, talking to someone, or skimming something that's super high quality once a week, for the next two days afterwards your thinking is so elevated. And what are the key books you want to think about? Because books are the most complex single information structure that we share with each other today. Yes, articles are great. Podcasts are wonderful. But books are the top of that pinnacle. Like your Startup Equation: if I can skim that book, I will get so many frames in my head for critical things, critical models, critical systems I have to balance, and if I'm actually going to create some output with that, then suddenly the quality is that much higher. So I realized how important that LFAR loop was, the do loop, right? Learning, foresight, action, review. And in the appendix of this book, I've got a Foresight Skills Journal that applies about six of these fundamental models, which you can read through and ask: am I using this on my team? Where am I strong? Where am I weak? I put it up on my website, at foresightu.com/pubs.

Steve F:

And we'll put that link in the show notes too.

John:

Thank you. Yeah, it's the first thing on that page, foresightu.com/pubs. Skim it; at the very least skim the first chapter, which is the executive summary, and this workbook at the end, this skills journal, and ask yourself, hey, am I using these tools? Because now we're going to get to Toffler's model, which is, I think, the second most fundamental model after this loop that we all use.

Steve F:

Before you hit Toffler, the one thing I want to interject is that you talked about sprint reading. The Startup Equation, for those of you listening, is a book that Dr. Ja-Naé Duane and I wrote eight years ago on startups. We're going to do a second edition. I use what I call retrieval reading with it, because it's designed so that if you want to build a company, it's got all the sections and pieces; if you're working on team culture, you can go through that part. You want to be able to pull things out. I like the sprint reading concept because you could sprint through it, but there's also retrieval reading, where you go back and pull something out, versus something like a biography that's meant to be consumed in a linear fashion. You want to be able to go back, right?

John:

We're trying to actually build a mental model, right, when we're reading. That's what's most valuable long term: what is the model we're building, and how conscious are we of building it? So key. One of the hacks I mention is: after you finish building your mental model, your index on the inside front cover, put the book in a pile to be shelved. Don't shelve it yet. Put it in a pile and let it age, like fine wine, right? Then a week later, pick it up, look at that index, and decide which pieces of those ideas to act on: things I might do, people I might talk to, things I want to research, people I want to look up on the web. I've got these eight codes that you can put on the inside of the jacket, right? Look at those a week later, because only half of them are you still going to want to do, since it's gotten out of your short-term memory and now it's more related to your longer-term passions, goals, projects. Then take a subset of those, put them in your to-do list, your journal, sorry, your day planner, and then shelve the book. And then if it calls to you when you're sleeping, that's the way I present it, you're going to need to pull that book down, read it again, go through it, maybe even read it old school, cover to cover, at that point. But what I found is that there's really only a small subset of these amazing books that I have the time or the interest to do that with. Mostly what I want to do is get GIGO, great inputs: high-quality summaries. And I get better at it; when I'm skimming a book, I read the table of contents, I read the first and last page of some of those chapters, and I can say, oh, this is the chapter I have to read old school, and the other ones I can let go.

Steve F:

When you think about this book, and there are other books out there in the field, we talked about the motivation, the LFAR loop, and you have a four-horizons model. What I want to do is relate the four horizons to getting people excited about foresight in an organization. It's one thing to have a book club and do a sprint read, but bringing foresight into an organization, you and I talked about this before we started recording, I'm big on organizational foresight: how do you create futures-literate, futures-fluent people so that it becomes part of the culture, right? How do you, personally, get people excited about it? And let's relate that to the roles. Let me set this up: you have the product officers, the innovation officers, the strategy officers and their teams. When I explain to people where this fits, the product officers have a horizon of about 18 months. They look at the product, what's coming, the roadmap, the customer feedback, the things they should do to make the product successful, because it's what makes the company money. Beyond that, there's the innovation officer, who is managing a portfolio of investments just like a venture capitalist. They're making investments in new ideas, whether core or adjacent, transformational, tangential, whatever terms you want to use. If they're successful, those eventually feed in and become product teams, or they become new businesses, spinoffs, new ventures, and so forth. That's about a three-year horizon. Then there's the strategist, who spans the whole band but has to look out farther. And that's where foresight and futures work comes in, which has always been surprising to me, that it became a non-accepted practice in many places, because strategy alone is almost reactionary, almost pure forecasting. And we talked about the split, the camps, the forecasting camp and the futures camp. When you meet with clients, how do you talk about the use of foresight in an org with that kind of grounding, and how do you get them excited?

John:

Yeah. This is really a good place to present some of the most interesting work from the psychology of foresight. Gabriele Oettingen, starting around 2010 at NYU, and I forget the other university, in Germany, started working with students and then with laypeople. She did randomized controlled trials, the gold standard in research. She had people think optimistically about the future and then make a plan, and then she observed their action and reviewed what they got done against their prediction of what they would get done. Then she had people think negatively about the future, pessimistically, then make a plan and act, and she reviewed that. Then she had people think optimistically and then pessimistically and then make a plan. And then what she called reverse contrasting: think about all the risks first, then think about the positive goals, and then make a plan. She measured three things: how much they got done, how accurate their prediction was of what they would get done, and their motivation to overcome an obstacle that they had not anticipated. What she found is in her book, Rethinking Positive Thinking, which is mentioned in my book, and the title basically means: don't just be a positive thinker. Be a positive and a negative thinker in the right ratio, and then make a plan. That was her big insight, right? So positive psychology is missing something; it really should be called positive-negative psychology. Because what she discovered was that if you go negative first, then positive, and then make a plan, your prediction of what's actually going to happen is 50 percent less accurate, and you get 50 percent less done. And this holds over any time horizon: over your daily worker's horizon, over your product manager's horizon, over your innovation officer's and your strategist's horizons. She started with things like how many SAT words or how much reading a high school kid can get done after they do this visualization: positive, negative, plan. In the positive phase, they're actually imagining: I got it done; what are the benefits, what are the advantages of getting that done, what are the new freedoms I have, the new capabilities? Then they have to spend the same amount of time thinking of all the ways they're going to predictably fail, based on past behavior, based on past known aspects of themselves. And then they make a plan. It has at least one if-then statement: if this problem comes up, then I'm going to do this. I'm going to call in a friend. I'm going to take a short break and make sure it's no longer than five minutes so I get right back to what I have to do, whatever it is. And that if-then statement helps with motivation, even for unanticipated obstacles. And what we realized when we read her stuff was that this is part of the devo pyramid. It's actually an evo-devo trapezoid, if you remember those from high school; a trapezoid is like a pyramid with the top chopped off. At the bottom, when you're assessing the environment, and this is for yourself, for your team, for your whole organization, over any time horizon, you're going to have a conversation, actually a conflict in your head, between relevant knowns and relevant unknowns. Between developmental predictability and evolutionary unpredictability. Knowns and unknowns.

You're going to have that fight: what's relevant to my environment? Then you're going to move up to strategy, and this is where we all care. Most of the schools of thought in foresight don't care about the environmental assessment, but I'm arguing it's really the grounding, and you have to do that homework. Then you move up to: what's my optimistic vision, and what's my defensive threat vision? We have to balance those two in strategy. In all of our great strategies there are optimistic possibilities; it's called strategic optimism in the psych literature. And then, what are the things that could take that vision down? That's called defensive pessimism in the psych literature. It's not explanatory pessimism, which is "the whole world's going to crap." It's defensive pessimism: you're only being pessimistic in relation to the vision. So you have to force yourself to create a shared vision, if you're on a team, or an individual personal vision, if you're thinking about what you're going to get done today or in your time box. Then, she says, you've got to fight yourself with the negative thinking right after it, and think a little bit about what might not work. Then make your if-then statement: if this problem, then this. And then she also argues, along with her husband, a great motivational psychologist: key resources. What are the key resources that are going to help me? So not just if-then, but what are my key resources to make sure I get this done? And what she found in all these randomized trials is that you get 30 to 120 percent more predictive accuracy about what you're going to get done today, this week, next quarter, and 50 to 150 percent more output. And if you hit an unanticipated obstacle, and her measurement was more squishy on this one, you persist longer; your motivation is higher to figure out a solution, because you've already anticipated one type of problem. So you get this trifecta: prediction, productivity, motivation. Because you've had the strategic conflict at the top, we call it sentiment contrasting, and you've had this predictive conflict at the bottom. And this gets to one of the big issues in our field. Our field historically, in the 50s, started with a balance between the predictive and the unknowns: exploring unknowns, wrestling with uncertainty, and predictive trends. You may recall this, but the start of professional foresight was the funding of the RAND Corporation...
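As a concrete illustration of the sequence John describes, vision first, then predictable obstacles, then an if-then plan plus key resources, here is a minimal sketch. The structure, field names, and example content are our own illustration, not from Oettingen's materials or John's book.

```python
# Minimal sketch of the vision -> obstacles -> if-then plan sequence described above.
# Field names and the example content are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ContrastedPlan:
    vision: str                     # optimistic outcome, imagined first
    obstacles: list[str]            # predictable ways you might fail, imagined second
    if_then: dict[str, str]         # obstacle -> planned response
    key_resources: list[str] = field(default_factory=list)

plan = ContrastedPlan(
    vision="Finish the draft strategy deck by Friday",
    obstacles=["meeting overruns", "email rabbit holes"],
    if_then={
        "meeting overruns": "block two no-meeting mornings",
        "email rabbit holes": "batch email at noon and 4pm only",
    },
    key_resources=["last quarter's deck", "an analyst teammate"],
)

for obstacle, response in plan.if_then.items():
    print(f"If {obstacle}, then {response}.")
```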

Steve F:

Yeah, Herman Kahn.

John:

In '47. These think tanks did huge trend analysis. So this gets again to: what does our strategy team want to do? On the predictive side, it wants to do trend analysis, probabilistic models, prediction markets, causal models, sentiment contrasting or sentiment mapping. What do the experts say? What can we predict regardless of strategy? So we do all that on the probable side. Then we are actually equipped; we're anchored. We're equipped with what's often called the expected future, because it also includes what top management thinks is going to happen, which may be a fantasy, but we have to include that, because we can predict they're going to act on their expected future even if it's wrong. So we have to learn how to guide them away from that, right? We've done that work. Then we do the possible side: all the stuff that our field has really been about since the seventies and eighties, which is scenarios.

Steve F:

Before you jump there, for those listening, that was a really good clarification: strategists go with the knowns, all the things that they can extrapolate out. The unknowns, though. COVID is a good example of this, a pandemic. How many had done simulations for how they would reorganize? We all saw that, and we don't have to recount it, but it was a disaster in many cases. And...

John:

It was,

Steve F:

And some haven't learned. But think about it.

John:

And some didn't. Exactly right. Yeah.

Steve F:

The strategy organization, in its function, is not futures thinking. And this leads back to what you were just about to get into, where the fields went and where things go. What I want to get to is: how do we bring that back? How do we get back to that? Because think about Herman Kahn. Yes, there could be nuclear war, but there were a lot of unknowns. What happens if this, what happens if that? They were dealing with both, and that grounded it in a...

John:

balanced way.

Steve F:

In a balanced way, right? So...

John:

And the International Institute of Forecasters, the IIF, split off from the World Future Society in the seventies, might've been late seventies, because the futurists were not recognizing this pyramid. They weren't recognizing Toffler's pyramid, even though Toffler and Amara had told it to them. To be fair, Amara didn't tell them the three Ps were this universal way of thinking until '81, in a seminal article in The Futurist. But Toffler said it in '70, in Future Shock. People had a lot of hints that you have to balance the knowns and the unknowns before you have the strategic conversation about what you can do. And then when you're having that strategic conversation, you have to split it into the visions and the risks. So we call this the KUVR model, "cover": knowns and unknowns at the bottom, the environmental assessment, and then visions and risks at the top, where the optimists and the pessimists have to thrash it out. You have to cover your bases, KUVR, in the process of creating great strategy. And one of the most obvious things, and this is in my appendix, is that different people like different corners of those two conflicts. Certain people just love different corners. You have the forecasters and the science folks, a lot of the statisticians and the sociologists and the psychologists, on the knowns side. Then you have a lot of the creative and design thinking and alternative futures people on the evolutionary side. And then the mix of those two, the evo-devo at the top of this pyramid, again breaks into the people who want to start with visions and the people who want to start with risks. One of the things that was discovered is very powerful: you ask in any strategy organization, how many of you start with the things that are going to bite you, the things you need to fix in your thinking about the future, and how many start with the opportunities and how you want to get them? It's almost a 50-50 split in most organizations. Now in some, like defense, which is one of my primary clients, the defensive thinkers are going to outnumber the opportunistic thinkers like two to one, minimum. In law enforcement, same thing. Probably in any of what are called HROs, high-reliability organizations. This would be air traffic control, space exploration, anything where the cost of error is really high. And a lot of their problem is they don't do enough creative exploration. They don't do enough experimentation. They don't have enough innovation, because they really have to protect. So we can rename that pyramid of the three Ps into the three mandates. The three mandates are: we want to create, that's evolution; we want to protect, that's development; and we want to adapt, and that's evo-devo. Adaptation is always a mix of creation and protection, unpredictable and predictable. We have to do all three of those. And then the last thing that really helps us understand that pyramid is the three actors. There are three actors in all complex systems: individuals, groups, and networks, or ecosystems. Individuals are the primary creative engines in a society. They're constantly experimenting; they're hugely diverse. Groups are the primary protectors. A group, by definition, is a collection of individuals who've agreed on a certain set of norms, and they're protecting those norms. That's what a group is.

But sitting on top of individuals and groups, whether the group is an organization, a state, or a team inside an organization, is the network: the collection of individuals and groups that are diverse and different. And the great phrase we use in strategy is coopetition. Every strategist who knows this term, coopetitive or coopetition, knows that human beings start by looking for implicit cooperation: create a set of rules that we can compete like heck within, and then update those rules in an iterative fashion. So the world is not just competitive first or just cooperative first. It's not just a conservative or a liberal vision, because, as we all know, conservative thinkers start with protection and cooperation on that pyramid first, and liberal thinkers start with creation and cooperation first. And as Jonathan Haidt describes in his book The Righteous Mind, those are all value specialists that are so useful. We're talking now about normative foresight, which we have to talk about when we talk about strategy and policy, right? And politics: the fight, as Toffler said, at the top, between all the different groups. What happens is, we think cooperatively and we think competitively. Both of those network perspectives are fundamentally useful, but the critical insight is that the networks that win are the ones that find a set of agreed-upon rules, compete within those rules, and update them over time. And that's everything interesting. It's democracy, it's sports, it's capitalism, it's morality. All four of those, we can think about: oh yeah, those are all coopetitive systems. We're constantly using both of those perspectives.

Steve F:

When I describe normative foresight to people, I explain that it's essentially the base of all types of foresight work. People use different tools, different approaches, and they find the things that support their work irrespective of the theory that grounds them. Whereas what I'd call focused, or specialized, foresight is where someone has one theory, and that's what they bind themselves to. A good example of that is Ray Kurzweil. Everything's about the singularity; that's it. He doesn't do normative foresight; everything revolves around his model.

John:

That relates exactly to what you said. It's called the fox-hedgehog-eagle model. A guy named Jan-Benedict Steenkamp wrote a book called Time to Lead, and in this book he covers all these famous leaders from history, some of them recent history. He says Isaiah Berlin, in the fifties, said you have your fox leader and you have your hedgehog. Now, your hedgehog is a developmental thinker. They're all about prediction and protection, so they have one overarching theory and model and they apply everything to it. Kurzweil, perfect example. Then you have another leader who is a fox thinker. They're evolutionary in thinking, so experimentation and creation are their fundamental drive rather than prediction and protection. What does a fox look like? They're super tactically agile. You can't predict what that leader is going to do next in a situation, because they have so many tool sets. And in my discussions with you, I think you're very strong in fox thinking, and in hedgehog, but I think you're slightly more fox. You see so many possibilities. So in Steenkamp's model, you'd be this kind of great fox leader. And he says, yes, you can be a great fox, you can be a great hedgehog. And then the synthesis of those two, he says, is the eagle, which tries for the big vision and tactical agility. They look for that 64,000-foot view. And he names some great eagle leaders as well. I'm trying to be an eagle. You're trying to be an eagle. And in many circumstances and contexts we're out-eagling a lot of people, but not in all. But then, I guess most fundamentally, his argument is that you can be a great leader as any of those, as long as you recognize you need all three of those types. And then he has a fourth type called the ostrich, which is lovely; we all know leaders like that. They just stick their heads in the sand, and you cannot predict what they're going to do when some new change comes up. They're so unforesighted and unpredictable.

Steve F:

That's a great example. Okay, for those listening, say you're the futures team of one, or a futures team of one unto yourself, if you will. How do you evangelize that and get it past the strategists who don't want to talk to you because you scare them? Or past the CEO who probably thinks they're going to be there for another year and a half or two years, get their exit package, and just let marketing write the vision statements? How do you help those people?

John:

Yeah, I'm going to give you two pitches. The first is the one I learned first, which is: you focus on your strategists and you ask, do you guys do any foresight? And typically you're going to get, huh? Do you guys know what strategic foresight is? It's this term where we look ahead and then we do strategy, the way I just described that pyramid: start at the bottom, move to the top, have the visions-and-risks conflict. And the elevator pitch to those folks is: foresight is anything you do before strategy. That's the big banner quote, the frigging big banner quote. And this has such...

Steve F:

Everyone thinks it's after. Everyone thinks it's...

John:

Everyone thinks it's after. And so what you often get is this: wait a minute. So you ask them, what do you do? Do you do any trends, any intelligence, any surveys of the current landscape, to get as many relevant knowns and constraints and predictables as you can? And then, do you use any exploration tools? Do you use scenarios? Do you use alternative futuring? Do you use cross-impact analysis? Do you do any design thinking? Do you do any exploration? And then once you've got that, how do you vision? Do you break your preferable futures, at the top of the pyramid, into the preferred and the preventable, right? The four Ps,

Steve F:

The preventable. Nobody really brings that one to bear, usually. It's usually: the future happens, it's going to be whatever. But to your point, I get very frustrated with many futures frameworks that are basic. Archetypes are not bad, they're just one approach, but it's like a singular future. I'm very much a plural-futures type of person. And with collapse or transformation, usually, in order to get to transformation, there's usually a collapse,

John:

There you go.

Steve F:

But it's how you ride that out, how you move through it. So to your point about the... what did you say? The, not the...

John:

preferred and preventable.

Steve F:

Preventable, yes. It's not just that it's preventable, but the preventable outcome and how you have to ride that out. Like, how do you avoid that situation?

John:

Yeah. Because in preventable there's this idea that...

Steve F:

happened and all of our stores were closed for

John:

You're looking for traps. Yeah. With preventable, you're looking for risks, threats, and traps. And just as an example of how to do it well: Ray Dalio, at the very beginning of the COVID thing, looked to history. He used the L in LFAR, learning, which is past and present. Before you have the privilege to do foresight, you have to look to past and present. He looked to past and present, found the great books on the Spanish flu epidemic of 1918-1919 in the U.S., and gave a nice short book on that to every single one of his strategists. And he said: read this, and tell me how similarly this is going to play out now, 100-some-odd years later. What a fantastic way to set up foresight, to find a benchmark example. And of course, they were early into Zoom and these other obvious winners, the biosurveillance diagnostics and the vaccine companies; before all those popped, they were into them early, because they'd done this homework in the do loop. They'd looked for past examples. But that requires you to recognize that at the bottom of this pyramid there are predictables, there are cycles, there are trends, there are models that are going to be statistically, probabilistically predictable. Then you do your exploration of your unknowns on the possible side, then you're ready for strategy, and then you have to fight between visions and risks, because different people are going to like each of those four corners of the pyramid. There's a great example in the Keirsey diagnostic, which you can have your whole team do. It's a version of the Myers-Briggs, but it breaks the Myers-Briggs into these four types, and in my book I show they actually fit onto this trapezoid perfectly. At the bottom you have your guardians and your artisans, right? The protectors and the creators. Some of us are artisans first, some of us are guardians first. And then at the top, you've got your idealists and your rationalists. Idealists are vision-oriented; rationalists are oriented to what needs fixing. All four of those are in conflict. But as Keirsey says, we all have all four of those in our personality. So this is actually a universal thing: we have these fights with ourselves. And if we have them in the right balance, and that's my big takeaway, that's what I want to recommend, you can get people to say, oh wow, this is awesome, I need to do foresight prior to strategy. Even though, as I've described it to you, strategy is the last fight. We typically think of the bottom of that pyramid, the knowns and unknowns, as foresight; actually it's all four of those corners, because great foresight always ends with great strategy. So you can tell the strategists: strategic foresight is anything you do prior to strategy. If you're not doing anything, there's a 60-year tradition of all these cool tools you can use. We haven't talked about all the tools on the visioning side and on the risk side: all the risk analysis and threat assessment tools on the preventable side, all the aspirational and shared visioning tools on the preferred side. You can use tools and methods in all four corners of that trapezoid. But the second great way to convince people that this is valuable is to talk to your CHROs. Don't talk to your strategists; talk to your CHROs, your chief human resources officers, or People Ops as they're often called.

And say: tell me about your L&D. What do you do for learning and development in your organization? Do you do anything that gets people to struggle with the future? Oh, you don't even do design thinking? Whoa, that's really interesting. If we do a little in-service on these tools, how do you think people are going to get better at personally planning their week? At personally managing the conflicts that they have? Because I didn't say this earlier, but in my book I have this evidence that's been collected on conflict. Remember we were talking about coopetition? All the great networks find rules, and then they compete like heck within them. How do you manage conflict around those four corners, between your guardians, your artisans, your idealists, your rationalists? There's all this evidence: Amy Edmondson at Harvard, her book The Fearless Organization. Psychological safety, promoted by the leaders, creates trusted conflict in the strategy room. It's also called fair fighting. We recognize we actually need to have conflict; we're not going to find the best answer unless we're having trusted, usually confidential, conflict, right? And usually the leader has to model it, ideally with that eagle perspective, but they don't have to; they just have to model it. Then you get what Harold and Yuri showed in the sixties: not only do you get more conflict, you get more productivity. There is a peak to that; their studies showed that too much conflict degrades performance. But if you want higher-quality strategy and higher-quality action, you want to take your current level of conflict and scale it higher. They have to be the right conflicts, though. What are they? I would argue the KUVR conflicts, these four assessments. And this is Art Shostak's work, the four Ps, by the way. He updated Toffler's three Ps in 2000 and said: if you think about the preferable future, you have to split it into the preferred and the preventable, the protopias and the dystopias. And you have to recognize, and this is the last thing I want to say about bias, that in our futurist community we're biased to think about possible futures, typically. We don't think enough about the probable, the way we used to balance those two. And at the top, our culture and our media bias us to think about the dystopias and the preventable futures. But if we don't start with a shared vision, if we don't actually create that shared vision and then think about the negatives around that vision, what we end up doing is going down all these rabbit holes of stuff that could happen that's low probability and not related to our vision. You get a lot less done, and you see a lot less of the opportunity, to use Oettingen's term, if you start defensive, if you start dystopian. So you have to balance protopia and dystopia.

Steve F:

I think we're always aspiring. I don't think there's ever a finished, stable state. It's like yoga practice: we're always practicing, it's never finished. So, as advice to the practicing or aspiring futurist, what would you offer someone just getting started in this? It's a lot to take in. It's an exciting space, but it's a lot to start with. Where would they begin?

John:

This is where I would start: network first. Think about the network first. Don't think about yourself. Don't think about the group that you grew up in or are embedded in. Think about the network of individuals and groups, many of which are different from you, and think about the cooperating and competing opportunities you have within that network. The most obvious is a practice network. Unfortunately, the WFS disappeared in 2015. Not enough financial foresight, that was their big problem. They never built an endowment, so they ran into a rough patch and they disappeared. But the APF, the Association of Professional Futurists, emerged toward the end of their tenure, in 2003, I think. And then WFS disappeared in 2015, but now the Association of Professional Futurists is a thriving network and community, some 550 of us practicing foresighters and futurists.

You may think of yourself as a futurist, trading stories about the future and critiquing them, like that Weeble story thing we mentioned. Or you may think of yourself as what we call a foresighter: someone who practices certain foresight methods, who's really good at certain methods you use prior to strategy, setting up for strategy. So you're an innovation scout, a forecaster, a trend monitor, a prediction markets manager, a scenario producer, whatever. You're practicing foresight. You're using a tool, a recipe, a framework, and a set of models behind that. So you've got your foresight community and you've got your futures community, and they're both in this futures space.

Now there's a big, wonderful network. You mentioned the Dubai Future Foundation, which is a very recent attempt to get a global perspective on foresight, stepping into the World Future Society's shoes, but the APF is totally global too. So the bottom line is, hey, join one of these networks and help them out. I'm part of a team that has launched a program within APF called Futurepedia. We're going to build a wiki for the world, hopefully, that looks ahead to the future, on the future of X. So not only will it have bios of futurists past and present, nice summaries of foresight methods, and case examples of foresight that's been used well and poorly; we're also going to include future-of-X topic pages, where you get to see the schools of thought and the current discussions around the future of work, the future of basic income guarantees, the future of anything you're interested in. You'll see what the dialogue is in these four Ps that we described, starting with the knowns and the unknowns and the visions and the risks, right? We're building this right now at APF, and I'm so honored to be part of the team. It was Zan Chandler who started this at APF; she said, we need a futures encyclopedia, a futures wiki. And those of you who use Wikipedia may know you can't put a page on Wikipedia that is "future of X"; they consider it speculation. So we need an encyclopedia of speculation on what may be coming. And so this is just one of several great...
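For readers who think in schemas, here is a hypothetical sketch of what a future-of-X topic page like the ones John describes might hold, organized by the four Ps. The field names and example entries are assumptions for illustration only; the actual APF wiki may be structured quite differently.

```python
# Hypothetical schema for a "future of X" topic page, organized by the
# four Ps. Illustrative only; not the actual Futurepedia structure.
from dataclasses import dataclass, field

@dataclass
class TopicPage:
    topic: str                                            # e.g. "future of work"
    schools_of_thought: list[str] = field(default_factory=list)
    probable: list[str] = field(default_factory=list)     # knowns: trends, cycles
    possible: list[str] = field(default_factory=list)     # unknowns to explore
    preferred: list[str] = field(default_factory=list)    # shared visions
    preventable: list[str] = field(default_factory=list)  # risks and threats

# Example page with made-up entries, just to show the shape.
page = TopicPage(
    topic="future of basic income guarantees",
    schools_of_thought=["automation-driven", "social-contract reform"],
    probable=["continued regional pilot programs"],
    possible=["national-scale rollouts"],
    preferred=["a guaranteed poverty floor"],
    preventable=["work-disincentive traps"],
)
print(page.topic, "->", len(page.schools_of_thought), "schools of thought")
```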

Steve F:

So, an Encyclopedia Galactica of futures work?

John:

It's an encyclopedia of foresight, yeah. Wikipedia is wonderful for past and present; we also need a Wikipedia of the future. And I'm not saying ours is necessarily going to be the one that goes global in all languages, but I am saying there are 550 futurists and foresighters at APF who are going to start putting stuff up that's useful to them in their practice. And hopefully, once we get to X thousand pages, everybody else will be doing the same, and it'll be this open, continual conversation that we have about the future. We can use these generative AI tools, which we haven't talked about yet, to...

Steve F:

And that's for another episode. We're going to get into that on another...

John:

...create all these stubs, right? But in the end, we need to edit those stubs and make sure they properly reflect the incredible value of foresight in organizations. And we help the AIs better understand this whole field by creating these great pages, these resources. Because these AIs, I would argue, and this is a future episode, my Natural Alignment series on Substack, are actually becoming versions of us. I think evolutionary and developmental algorithms in biology are the way we manage complexity in living systems, and the AIs are going to be forced into using those systems, becoming more like us: having attention, having generative adversarial processes, having processes that simulate our neurotransmitters, having emotion, having world and self models, having empathy and ethics. I would argue that is so valuable for creating alignment in living systems that they're going to have to have a deep version of it. Which means, short story, the digital version of Steven: you're gonna feel like it's you. It's gonna start completing your sentences when you have a senior moment. It's gonna be something you leave behind for your kids when you pass on. It's gonna be this super helpful thing that's learning like a kid its entire life. And we're just now on the edge of that, what we call, to use your ages model, the age of AI. We're on the edge of that. So a lot of bright possibilities and risks, a lot of possible and preferable and preventable futures in there.

Steve F:

It really makes me excited and scared that there's an adult-ADHD digital twin of me somewhere. But when you look at agents and how they function, it's specific narrow bands of that. And as I've said on this podcast and in other venues, I was very anti-AI for a very long time, especially as an artist, seeing what it was doing to steal artists' work. But the more I've used the tools, the more I look at it as a collaborative partner. I've always said a designer is never going to get replaced until the AI has empathy. Now the question is, if it's a clone of us with that empathy, it becomes a broader extension of our own thinking, versus some other foreign agent being the designer or the thought process.

John:

It's mostly bottom up. In living systems, it's mostly bottom-up processes that create all the innovation and diversity. And so the personal AIs, the "PAIs" that are coming, are going to have way more power and capability, as a network, than the top-down AIs the corporations are currently using in this first gen, when the tools are not cheap enough to democratize, when your cell phone doesn't understand you, doesn't have a private data model that you're training that's more useful than the one the marketers are using top-down to target you. When that finally happens, when you have that thing caring for your values and your interests, how you vote, how you spend your money, who you connect with, what you read; when you have a personal AI that has all of those dimensions, that you feel is tightly connected to you, and that data model is private, just like your texts, your photos, and your email: that's a level of what I would call network empowerment, bottom-up empowerment, that will help revitalize the democratic network, the lowercase d we're talking about here, in Western cultures. What's interesting, and I'm just gonna leave your viewers to think about this. Okay.

Steve F:

Yeah. Yeah.

John:

...and other autocracies, how are they going to use those tools? They're going to be way more restrictive than the ones we use. And I think it's really going to show that this open network approach is the future. Networks are always winning. They're the super-adapters. Individuals win sometimes, groups win sometimes, networks are always winning. Life itself is an immortal network that has never died and has always created more diversity and more intelligence at the top, both at the same time, because we have these informational, genetic, sensor, and simulation networks underlying that whole system, protecting it. That's the really crazy thing about life, right? Ten percent of our genome is retroviral insertion sequences; viruses are jumping between us, creating 10 percent of our genetic diversity. We are a network more fundamentally than we are an individual or even a group. And that's the future, I would argue, of complexity science: recognizing that it turns into network science, and we start to understand all three of those corners, right? The individual, the group, and the network have to be at the center of our strategy.

Steve F:

So, you've contributed a great deal to the field, and it's been amazing to watch. Looking back, how do you want your work to be remembered? What impact do you hope to have?

John:

I just hope people see that foresight is a superpower. It's built into all living systems, and the more conscious we get with it, the more we can port it to these digital versions of ourselves that are going to be so much more complex, so much faster, but also human, just like we are. The first human act, I would argue, and so would several other anthropologists, was picking up a rock. Not just picking up a rock, though: picking up a sharp rock, using our foresight to realize that it could protect us against all these powerful things that were eating us, like leopards. And not just if we held it; every other human had to have one. We've found these early flaked-rock quarries where there are thousands of them. We were mass-producing flaked rocks from the very beginning of technology use, to share as a group, as a network.

So I would argue the really amazing thing is this human future. I don't like this word posthuman, because I think the AIs are going to become human, and I think we're going to become more human. As Bucky Fuller said, human's not a noun, human's a verb. We're constantly changing, growing. How do we do it? We do it with foresight. We do it with ethics and empathy. We do it with prosocial activities. We do it as a network. And if we recognize that, maybe we drop a lot of the stress about the future, this feeling that I have to rescue the world. No, the network's taking care of itself, man. It's really improving. Look at all these adaptations to climate change, right? This sustainability era we're going into, we have all these powerful tools to visualize and create a better future.

My legacy, I would hope, Steven, is to say: look, nature has so much more wisdom, it's so much deeper at balancing these critical processes, than we give it credit for. And if we really want to understand the future of society and technology and strategy, we've got to keep looking to these complex systems that have been so good at it. We have to see these protecting networks: immune systems, circulatory systems, thinking systems, social knowledge networks, competitive networks inside of digital platforms. When we see those networks, I think we can adapt much better. That's the big insight for me.

The book I want to recommend to your listeners, if you want the complex systems theory around this, is The Romance of Reality by Bobby Azarian. At my EvoDevo Institute we like to recommend books on how to apply these evolutionary and developmental ways of thinking, and this is the first one; it just came out last year. It looks at life and physics and chemistry and biology and society from this evo-devo and network-centric perspective, and it talks about this idea that there are these universals we're just getting smarter at seeing, that help us understand what the critical things are. So that, I hope, would be my legacy: helping people realize that nature figured it out first, and we just get smarter at studying that and then applying it in our teams and our own lives. We're going to get so much higher quality foresight and action. It truly is a superpower to make this unconscious thing that we do conscious.

Steve F:

That's a great way to close. For those who want to find you, there are many places we'll put in the show notes, but if somebody wanted to listen to your words and check out your writing, where would they reach you?

John:

I'd just recommend, again, that book; it's a good place to start. Also foresightu.com/pubs. I am on Twitter and I'm on Substack, but I would recommend those as the start. And then I'd come back again to this: join a network. Volunteer at the APF; it's almost all a volunteer organization, those of us there. And it's pretty strong now. Every time we get another member, there's so much more diversity of opinion, so much more insight. Join a practice network, pay your dues, they're not very expensive, find a topic or a project you want to work on, and I'll be there, man. I'll be there. I'd love to chat with you about how we apply foresight and recognize that it's a superpower.

Steve F:

Great. John, thanks for your time today. It's been an incredible conversation, and I look forward to having you back on again. And for those listeners out there, this is one of the best futurists in the field; take a knee and take a listen from a great futurist. So thanks again, John.

John:

Thank you, brother. Really appreciate it. Thank you. Thank you, listeners.
