Think Forward: Conversations with Futurists, Innovators and Big Thinkers

Think Forward Ep 114 - Future of Systems with John Smart Part 2

• Steve Fisher • Season 1 • Episode 114

🎙️ Welcome to Think Forward Show Episode 114: The Future of Systems Part 2 featuring John Smart 🌍🔧

In this exciting follow-up, I continue my conversation with renowned futurist John Smart, diving even deeper into systems thinking and evolutionary development. We cover:

🔧 The three mandates: creation, protection, and adaptation

📊 How networks are the ultimate winners in complex systems

💡 Key insights on foresight, leadership, and how to manage conflict in strategy

🚀 The interplay of individuals, groups, and networks in driving systemic change

John’s incredible foresight and practical advice will equip you with the tools to navigate the complexity of today’s interconnected world. Don’t miss this insightful conclusion to our two-part series.

🎧 Listen Now: www.thinkforwardshow.com

How are you applying systems thinking in your organization? Share with us! #ThinkForwardShow #Foresight #SystemsThinking #Innovation #StrategicForesight

🔗 Steve’s Site: www.stevenfisher.io

🔗 Episode List (Lite Version): https://lnkd.in/eAVcg6X4

🎧 Listen Now On:

Apple Podcasts: https://podcasts.apple.com/us/podcast/think-forward-conversations-with-futurists-innovators-and-big-thinkers/id1736144515

Spotify: https://open.spotify.com/show/0IOn8PZCMMC04uixlATqoO

Think Forward Show (Light Version): https://lnkd.in/eVBVJRCB

Thank you for joining me on this ongoing journey into the future. Until next time, stay curious, and always think forward.

Transcript:

So we can rename that pyramid of the three P's into the three mandates. The three mandates are: we want to create, that's evolution. We want to protect, that's development. And we want to adapt, and that's evo devo, the mix. Adaptation is always a mix of creation and protection, unpredictable and predictable. We have to do all three of those. And the last thing that really helps us understand that pyramid is the three actors. There are three actors in all complex systems: individuals, groups, and networks, or ecosystems.

Individuals are the primary creative engines in a society. They're constantly experimenting, and they're hugely diverse. Groups are the primary protectors. A group, by definition, is a collection of individuals who have agreed on a certain set of norms, and they're protecting those norms. That's what a group is. But sitting on top of individuals and groups, whether the group is an organization, a state, or a team inside an organization, is the network: the collection of individuals and groups that are diverse and different. And there, the great phrase we use in strategy is co-opetition. Every strategist who knows this term, co-opetitive or co-opetition, knows that human beings start by looking for implicit cooperation: create a set of rules that we can compete like heck within, and then update those rules in an iterative fashion.

So the world is not just a conservative and a liberal vision, because as we all know, conservative thinkers start with protection and cooperation on that pyramid first, and liberal thinkers start with creation and cooperation first. And as Jonathan Haidt describes in his book The Righteous Mind, those are all values specialists that are so useful. We're talking now about normative foresight, which we have to talk about when we talk about strategy and policy, right? And politics: the fight, as Toffler said at the top, between all the different groups.

What happens is we think cooperatively and we think competitively. Both of those network perspectives are fundamentally useful. But the critical insight is that the networks that win are the ones that find a set of agreed-upon rules, compete within those rules, and update them over time. That's everything interesting: it's democracy, it's sports, it's capitalism, it's morality. All four of those we can think about and say: oh yeah, those are all co-opetitive systems. We're constantly using both of those perspectives.

And when I describe normative foresight to people, I explain that it's essentially the base of all types of foresight work. People can use different tools and different approaches, and find the things that support their work irrespective of the theory; it grounds them. Whereas what I'd call focused or specialized foresight is where someone has one theory and that's what they bind themselves to. A good example of that is Ray Kurzweil. Everything is about the singularity. That's it. He doesn't do normative foresight; everything revolves around that.

Well, gee, it relates exactly to what you said. It's called the fox, hedgehog, and eagle. A guy named Jan-Benedict Steenkamp wrote a book called Time to Lead, and in this book he covers all these famous leaders from history, some from recent history.
And he says Isaiah Berlin, in the fifties, said you have your fox leader and you have your hedgehog leader. Now, your hedgehog is a developmental thinker. They're all about prediction and protection, so they have one overarching theory and a model, and they apply everything to it. Kurzweil, perfect example. Then you have another leader who is a fox thinker. They're evolutionary in their thinking, so creation, right? Experimentation and creation are their fundamental drive, rather than prediction and protection. So what does a fox look like? They're super tactically agile. You can't predict what that leader is going to do next in a situation, because they have so many toolkits.

And in my discussions with you, I think of you as very strong in both fox and hedgehog, but slightly more fox: you see so many possibilities. So in Steenkamp's model, you'd be this kind of great fox leader. And he says yes, you can be a great fox, you can be a great hedgehog. And the synthesis of those two, he says, is the eagle, which goes for the big vision and the tactical agility; they look for that 64,000-foot view. He names some great eagle leaders as well. I'm trying to be an eagle. You're trying to be an eagle. In many circumstances and contexts we're out-eagling a lot of people, but not in all. But most fundamentally, his argument is that you can be a great leader as any of those, as long as you recognize you need them all. You need all three of those types. And then he has a fourth type called the ostrich, which is lovely, because we all know leaders like that, who just stick their heads in the sand. You cannot predict what they're going to do when some new change comes up. They're so unforesighted and unpredictable.

That's a great example. So for those listening: say you're the futures team of one, or they won't even acknowledge you're a futures team of one unto yourself. How do you evangelize this and get it past the strategists who don't want to talk to you because you scare them, or the CEO who probably thinks they're going to be there for another year and a half or two, get their exit package, and just lets marketing write vision statements? How do you help those people?

Yeah, I'm going to give you two pitches. The first is the one I learned first, which is: you focus on your strategists and you say, do you guys do any foresight? And typically you're going to get, huh? Do you guys know what strategic foresight is? It's this term where we look ahead and then we do strategy, the way I just described that pyramid: start at the bottom, move to the top, have the visions and risks conflict. And the elevator pitch to those folks is: foresight is anything you do before strategy. That's the big banner quote. The frigging big banner quote. And it has such power, because everyone thinks it comes after. Everyone thinks it's after. Right.

So what you often get is this wait-a-minute moment. You ask them: what do you do? Do you do any trends, any intelligence, any surveys of the current landscape, to get as many relevant knowns and constraints and predictables as you can? And then do you use any exploration tools? Do you use scenarios? Do you use alternative futuring? Do you use cross-impact? Do you do anything with design thinking? Do you do any exploration?
And once you've got that, how do you vision? Do you break your preferable futures at the top of the pyramid into the preferred and the preventable, right? The four P's.

The preventable. Nobody usually brings that one to bear.

That's it.

If the future happens, it's going to be whatever. But to your point, I get very frustrated with many futures frameworks. Archetypes are not bad, they're just one approach, but they give you a singular future. I'm very much a pluriverse type of person. And when you have collapse or transformation, usually in order to get to transformation there's a collapse first. But it's how you ride that out, how you move through that. To your point about the, what did you say, not the probable, the preferred and preventable?

Preventable, yes.

It's almost like the preventable outcome is how you have to ride that out. How do you avoid that situation?

Yeah, because in the preventable, it's: what if a pandemic happened and all of our stores were closed for three months? You're looking for traps. In the preventable, you're looking for risks, threats, and traps. And just as an example of how to do it well: Ray Dalio, at the very beginning of the COVID thing, looked to history. He used the L in LFAR, learning, which is past and present. Before you have the privilege to do foresight, you have to look to past and present. He looked to past and present, and he found the great books on the Spanish flu epidemic of 1918-19 in the U.S. He gave a nice short book on that to every single one of his strategists and said: read this, and tell me how similarly this is going to play out now, a hundred and some odd years later. What a fantastic way to set up foresight: find a benchmark example. And of course they were early into Xenom and the other obvious winners, the biosurveillance, diagnostics, and vaccine companies. Before all those popped, they were into them early, because they'd done this homework in the do loop. They'd looked for past examples.

But that requires you to recognize that at the bottom of this pyramid there are predictables: cycles, trends, and models that are going to be statistically, probabilistically predictable. Then you do your exploration of your unknowns on the possible side, then you're ready for strategy, and then you have the fight between visions and risks, because different people are going to like each of those four corners of the pyramid.

A great example is the Keirsey diagnostic, which you can have your whole team do. It's a version of the Myers-Briggs, but it actually breaks the Myers-Briggs into these four types. And in my book I show they fit right onto this trapezoid perfectly. At the bottom, you have your guardians and your artisans, right? The protectors and the creators. Some of us are artisans first, some of us are guardians first. And at the top, you've got your idealists and your rationalists. Idealists are vision oriented; rationalists are oriented to what needs fixing. All four of those are in conflict. But as Keirsey says, we all have all four of those in our personality, so this is actually a universal thing. We have these fights with ourselves, and we need to have them in the right balance. That's my big takeaway that I want to recommend.
You can get people to say: oh wow, this is awesome, I need to do foresight prior to strategy. Even though, as I described it to you, strategy is the last fight. We typically think of the bottom of that pyramid, knowns and unknowns, as foresight. But actually it's all four of those, because great foresight always ends with great strategy. So you can tell the strategists: strategic foresight is anything you do prior to strategy. And if you're not doing anything, there's a 60-year tradition of all these cool tools you can use. We haven't even talked about all the tools on the visioning side and on the risk side: all the risk analysis and threat assessment tools on the preventable side, all the aspirational and shared visioning tools on the preferred side. You can use tools and methods in all four corners of that rhombus.

The second great way to convince people this is valuable is: don't talk to your strategists, talk to your CHROs, your chief human resources officers, or people ops as they're often called. Say: tell me about your L&D. What kind of training, learning, and development do you do in your organization? Do you do anything that gets people to struggle with the future? Oh, you don't even do design thinking? What? Well, do a little in-service on these tools. How do you think people are going to get better at personally planning their week, at personally managing the conflicts that they have?

Because I didn't say this, but in my book I have this evidence that has been collected on conflict. We were talking about co-opetition: find the rules, and then compete. How do you manage conflict around those four corners, between your guardians, your artisans, your idealists, and your rationalists? There's all this evidence: Amy Edmondson at Harvard, in her book The Fearless Organization, on psychological safety promoted by the leaders to create trusted conflict in the strategy room. It's also called fair fighting. We recognize we actually need to have conflict; we're not going to find the best answer unless we're having trusted, usually confidential conflict. And the leader has to model it, ideally with that eagle perspective, but either way they have to model it. Then you get what Harold and Yuri showed in the sixties: not only do you get more conflict, you get more productivity. There is a peak to that; their studies showed that too much conflict degrades performance. But if you want higher quality strategy and higher quality action, you want to take your current level of conflict and scale it higher. They just have to be the right conflicts.

What are they? I would argue the conflicts that cover these four assessments, and this is Art Shostak's work, the four P's, by the way. He updated Toffler's three P's in 2000 and said: if you think about the preferable futures, you have to split them into the preferred and the preventable, the protopias and the dystopias. And the last thing I want to say is about bias: in our futurist community, we're biased to think about possible futures. We don't think enough about the probable; we ought to balance those two. And at the top, our culture and our media bias us to think about the dystopias and the preventable futures.
If we don't start with a shared vision, if we don't actually create that shared vision and then think about the negatives around that vision, what we end up doing is going down all these rabbit holes of stuff that could happen but is low probability and not related to our vision. You get a lot less done, and you see a lot less of the opportunity, to use your term, if you start defensive, if you start dystopian. So you have to balance protopia and dystopia.

I think we're always aspiring. I don't think there's ever a finished, stable state. It's like yoga practice: we're always practicing. It's always a practice, never a perfection. So, as advice to the practicing or the aspiring futurist, what would you offer someone just getting started in this? It's a lot to take in. It's an exciting space, but it's a lot to start with. Where would they begin?

This is where I would start: network first. Think about the network first. Don't think about yourself. Don't think about the group that you grew up in or are embedded in. Think about the network of individuals and groups, many of which are different from you, and think about the cooperating and competing opportunities that you have within that network. The most obvious is the practice network. Unfortunately, WFS, the World Future Society, disappeared in 2015. Not enough financial foresight, that was their big problem: they never built an endowment, so they ran into a rough patch and disappeared. But the APF, the Association of Professional Futurists, emerged toward the end of their tenure. Actually, it emerged in 2003, I think, and then WFS disappeared in 2015. Now the Association of Professional Futurists is a thriving network and community: 550 of us practicing foresighters and futurists.

You may think of yourself as a futurist, trading stories about the future and critiquing them, like that Weeble story thing we mentioned. Or you may think of yourself as what we call a foresighter, someone who practices certain methods, who is really good at certain foresight methods used prior to strategy, setting up for strategy. So you're an innovation scout, a forecaster, a trend monitor, a prediction markets manager, a scenario producer, whatever: you're practicing foresight. You're using a tool, a recipe, a framework, and a set of models behind that. So you've got your foresight community and you've got your futures community, and they're both in this futures space now. There's a big, wonderful network. You mentioned the Dubai Future Foundation, which is a very recent emergence trying to get a global perspective on foresight, stepping into the World Future Society's shoes there. But APF is totally global too.

So I would say the bottom line is: join one of these networks and help them out. I'm part of a team that has launched a program within APF called Futurpedia. We're going to build a wiki for the world, hopefully, that looks ahead on the future of X. Not only will it have bios of futurists past and present, nice summaries of foresight methods, and case examples of foresight used well and poorly; we're also going to include future-of-X topic pages, where you get to see the schools of thought and the current discussions around the future of work, the future of basic income guarantees, the future of anything you're interested in. You'll see what the dialogue is.
All in these four P's that we described, starting with the knowns and the unknowns and the visions and the risks. So we're building this right now at APF, and I'm so honored to be part of the team. Zan Chandler started this at APF. She said: we need a futures encyclopedia, a futures wiki. And those of you who use Wikipedia may know you can't put a page on Wikipedia that is "future of X"; they consider it speculation. So we need an encyclopedia of speculation about what may be coming. An Encyclopedia Galactica of futures work, an encyclopedia of foresight. Wikipedia is wonderful for past and present; we also need a Wikipedia of the future. And I'm not saying ours is necessarily going to be the one that goes global in all languages. But I am seeing that there are 550 futurists and foresighters at APF who are going to start putting stuff up that's useful to them in their practice. And hopefully, once we get to some thousands of pages, everybody else will be doing the same, and it'll be this open, continual conversation that we have about the future.

We can use these generative AI tools, which we haven't talked about yet, to create a lot of these stubs, right? But in the end, we need to edit those stubs and make sure they properly reflect the incredible value of foresight in organizations. And by doing that, we're actually training the AIs to better understand this whole field, by creating these great pages and resources. Because these AIs, I would argue, and this is a future episode, my Natural Alignment series on Substack, are actually becoming versions of us. I think evolutionary and developmental algorithms in biology are the way we manage complexity in living systems, and the AIs are going to be forced into using those systems, becoming more like us: having attention, having generative adversarial processes, having processes that simulate our neurotransmitters, having emotion, having world and self models, having empathy and ethics. I would argue that is so valuable for creating alignment in living systems, and they're going to have to have a deep version of that. Which means, short story, there will be a digital version of Steven. You're going to feel like it's you. It's going to start completing your sentences when you have a senior moment. It's going to be something you leave behind for your kids when you pass on. It's going to be this super helpful thing that's learning like a kid its entire life. And we're just now on the edge of that, to use your ages model: the age of AI. So a lot of bright possibilities and risks; a lot of possible and preferable and preventable futures in that.

It really makes me excited and scared that there's an adult ADHD digital twin of me somewhere. But when you look at agents functioning in that way today, it's specific narrow bands of that. And I've said on this podcast and in other venues that I was very anti-AI for a very long time, especially as an artist, given what it was doing to steal artists' work. The more I've used the tools, though, the more I look at them as a collaborative partner. A designer is never going to get replaced until the AI has empathy. Now the question is, if it's a clone of us with that empathy there, it becomes a broader way of us thinking, versus some other foreign agent being the designer or being the thought process.

Yes.
In living systems, it's mostly bottom-up processes that create all the innovation and diversity. And so the personal AIs that are coming are going to have way more power and capability as a network than the top-down AIs the corporations are currently using in this first gen, when the tools are not yet cheap enough to democratize, when your cell phone doesn't understand you and doesn't have a private data model that you're training that's more useful than the one the marketers are using top-down to target you. That finally happens when you have that thing that's helping you, caring for your values and your interests: how you vote, how you spend your money, who you connect with, what you read. You have a personal AI with all of those dimensionalities that you feel is a tight connection to you, and that data model is private, just like your texts, your photos, and your email. That's a level of trust. I would call it network empowerment, bottom-up empowerment, that will help revitalize the democratic network, the lowercase d we're talking about here, in Western cultures.

Here's something interesting I'm just going to leave your viewers to think about. How are the top-down networks, like the social credit scores of the Chinese, Russian, and other autocracies, going to use those tools? They're going to be way more restrictive than the ones we use. And I think that's really going to show that this open network approach is the future.

Networks are always winning. They're the super-adapters. Individuals sometimes win, groups sometimes win; networks are always winning. Life itself is an immortal network that has never died, that has always created more diversity and more intelligence at the top, both at the same time, because we have these information, genetic, sensor, and simulation networks underlying that whole system, protecting it. That's the really crazy thing about life, right? Ten percent of our genome is retroviral insertion sequences. Viruses are jumping between us, creating ten percent of our genetic diversity. We are a network, more fundamentally than we are an individual or even a group. That's the future, I would argue, of complexity science: recognizing that it turns into network science, and that all three of those actors, the individual, the group, and the network, have to be at the center of our strategy.

So, you've contributed a great deal to the field, and it's been amazing to watch. Looking back, how do you want your work to be remembered? What impact do you hope to have?

I just hope people see that foresight is a superpower. It's built into all living systems, and the more conscious we get with it, the better. And the more we can port it to these digital versions of ourselves, which are going to be so much more complex and so much faster, but also human, just like we are. They're going to be human.

The first human act, I would argue along with several anthropologists, was picking up a rock. Not just picking up a rock, though: picking up a sharp rock, using our foresight to realize it could protect us against all these powerful things that were eating us, like leopards. And not just if we held it; every other human had to have one. We've found early flaked-rock quarries where there are thousands of them. We were mass-producing flaked rocks from the very beginning of technology use, to share as a group, as a network.
And so I would argue the really amazing thing is this human future. I don't like the word post-human, because I think the AIs are going to become human, and I think we're going to become more human. As Bucky Fuller said, human is not a noun; human is a verb. We're constantly changing, growing. How do we do it? We do it with foresight. We do it with ethics and empathy. We do it with pro-social activities. We do it as a network. And if we recognize that, maybe we take away a lot of the stress about the future, this feeling of "I have to rescue the world." No. The network is taking care of itself, man. It's really improving. Look at all these adaptations to climate change, right? This sustainability era we're going into. We have all these powerful tools to visualize and create a better future.

My legacy, I would hope, Steven, is to say: look, nature has so much more wisdom, and is so much deeper at balancing these critical processes, than we give it credit for. If we really want to understand the future of society and technology and strategy, we've got to keep looking to these complex systems that have been so good at it. We have to see these protecting networks: immune systems, circulatory systems, thinking systems, social knowledge networks, competitive networks inside digital platforms. If we see those networks, I think we can adapt much better. That's the big insight for me.

A book I want to recommend to your listeners, if you want to know the complex systems theory around this, is The Romance of Reality by Bobby Azarian. At my EvoDevo Institute, we like to recommend books on how you apply these evolutionary and developmental ways of thinking, and this is the first book. It came out last year, and it looks at life, and physics, and chemistry, and biology, and society from this evo devo and network-centric perspective. It talks about this idea that there are these universals that we're just getting smarter at seeing, that help us understand what the critical themes are. So that, I hope, would be my legacy: helping people realize that nature figures it out first, and we just get smarter at studying that and then applying it in our teams and our own lives. We're going to get so much higher quality foresight and action. It truly is a superpower to make this unconscious thing that we do conscious.

Well, that's a great way to close. For those who want to find you, there are many places we'll put in the show notes, but if somebody wanted to listen to your words and check out your writing, where would they reach out to you?

I'd recommend again that book. A good place to start is foresightu.com. I am on Twitter and I'm on Substack; I would recommend those as the start. And then I'd come back again to this: join a network. Volunteer at the APF; it's almost all a volunteer organization, and it's pretty strong now. Every time we get another member, there's so much more diversity of opinion, so much more insight. Join a practice network, pay your dues, which are not very expensive, and find a topic or a project you want to work on. And I'll be there, man. I'll be there. I'd love to chat with you about how we apply foresight and recognize that it's a superpower.

John, thanks for your time today. It's been an incredible conversation, and I look forward to having you back on again.
And for those listeners out there, this is one of the best futurists in the field, so take a knee and take a listen from a great futurist. Thanks again, John.

Thank you, brother. Really appreciate it.

Thank you, listeners.
