
IN CLEAR FOCUS: Strategic foresight consultants Scott Smith and Susan Cox-Smith discuss Foom, an immersive strategic simulation for exploring AI futures. Unlike static scenario planning, Foom creates environments where teams experience real-time consequences of decisions. As participants navigate progress toward Artificial General Intelligence, this “escape room for strategy” reveals insights about decision-making, coalition-building, and managing uncertainty in emerging technology landscapes.
Episode Transcript
Adrian Tennant: Coming up in this episode of IN CLEAR FOCUS:
Susan Cox-Smith: AI has created both opportunities and challenges for everyone, including us. One of the biggest is how do you make decisions about something you’ve never used around market landscapes that don’t yet exist?
Scott Smith: Foom comes at it like a game where the shape and the intensity of that future depends on the decisions that you’re making and the other stakeholders involved are making at the same time.
Adrian Tennant: You’re listening to IN CLEAR FOCUS, fresh perspectives on marketing and advertising produced weekly by Bigeye, a strategy-led full-service creative agency growing brands for clients globally. Hello, I’m your host, Adrian Tennant, Chief Strategy Officer. Thank you for joining us. Organizations turn to strategic foresight to navigate the future in an increasingly uncertain geopolitical landscape. This is especially true with artificial intelligence, where leaders must make decisions today about technology with unclear long-term impacts. Traditional scenario planning creates static future projections, but our guests today are pioneering a new approach. Scott Smith and Susan Cox-Smith are the co-founders of Changeist, a strategic foresight consultancy. Scott has over 25 years of experience advising organizations including UNICEF, Comcast, and JP Morgan Chase, while Susan, named one of Forbes’ leading female futurists, brings her expertise in research, writing, and educational design. Now, since their last appearance on IN CLEAR FOCUS in the fall of 2023, when they discussed their books “How to Future” and “Future Cultures,” Scott and Susan have developed an innovative simulation that allows organizations to explore AI futures by making decisions and experiencing the consequences in real time. To discuss simulating AI futures, I’m delighted that Scott and Susan are joining us today from Barcelona, Spain. Welcome back to IN CLEAR FOCUS.
Susan Cox-Smith: It’s great to be here.
Scott Smith: Thanks for having us back.
Adrian Tennant: Scott, I mentioned in the intro that you are returning guests, but for listeners who may not be familiar with Changeist, could you briefly explain what the firm does and your approach to futuring?
Scott Smith: Fundamentally, we help organizations make sense of what’s next. That takes a lot of forms and for a lot of types of organizations, companies, governments, cultural organizations, and increasingly the public. So we are big believers in active approaches to futuring. That’s why we put the “ing” at the end of it, futur-ing. So we think of these as grounded explorations of possible futures through application, through actually doing and not just thinking. We do research by doing, and we very often create new tools or approaches that help that exploration reach the audience or the decision makers in question, and that will have impact to make some kind of change with the work that we’re doing. So that could be creating a signal collection system for an NGO to help them do better at future sensing, creating an object from 2040 to help explore uncertainty for a government team, shaping a future museum experience for public audiences, future visions for big brands, or even just training civil servants to be better at anticipating change and uncertainty. So we believe in verbs, I think.
Adrian Tennant: Susan, as someone with a background in design who was, as you put it, lured away into futures work, how would you describe the complementary skills you and Scott bring to Changeist?
Susan Cox-Smith: I would say that as a designer, I was tasked with solving problems for my clients. That is, what is the best way to present content in a way that will resonate with the audience, that they will engage with? Is that a photograph, a block of text in a pleasing grid, some inventive packaging and animation, or a short film? And Scott and I are able to use these content frames in our futures work as well, but with a different purpose. We design scenarios and speculative objects to present possible futures that resonate with the audience and prompt them to engage in a meaningful way. Scott has an amazing facility for pushing the range of possibility that really improves our client work. And my design and writing skills help make these possibilities feel provocative, yet safe and familiar enough to push back at or interrogate their meaning. I often go back to Raymond Loewy’s concept of MAYA: most advanced yet acceptable. It’s a great guideline for doing futures work.
Adrian Tennant: Well, Scott, a lot has happened since our last conversation, particularly in artificial intelligence. How has the rapid development of AI over the past 18 months or so influenced your work at Changeist?
Scott Smith: I think, like everybody else, we’ve been compelled to explore it, but we’ve really tried to kind of explore it from the inside. As I said, we’re big believers in verbs. We go at the thing we’re working with. So we’ve explored it from the inside to better understand it. And it’s a huge frontier risk for our clients, but also for everybody. It’s what one philosopher calls a hyperobject. You know, it impacts every sector, but we don’t really know the shape and size. So we felt like we needed to push and pull on some of the tools that are out there to understand where it can add value and where it just creates slush. You can’t just let it go on autopilot and generate scenarios all day, because, you know, you have garbage in, garbage out. There’s no mediation. So you get what we lovingly call mid-jerky. Experimenting with it as a tool and an expander and not just a crutch kind of allows us to ask what’s possible that will, you know, extend our work in new directions or not. Where shouldn’t we go? Where isn’t it useful? So designing and running Foom was actually one concrete manifestation of this.
Adrian Tennant: Let’s discuss the strategic simulation you’ve developed to help organizations explore possible AI futures. So Susan, could you explain what Foom is and why you created it?
Susan Cox-Smith: I’ll do my best. We’ve had to create some new language to describe the Foom project. We’re calling it an immersive strategic simulation designed to help organizations explore possible futures of AI and the decisions they need to make around that. AI has created both opportunities and challenges for everyone, including us. One of the biggest is how do you make decisions about something you’ve never used around market landscapes that don’t yet exist? So lots of organizations are already facing these same challenges. Lots of people are being asked to make big, risky bets without having a chance to play through the possible scenarios first. Foom uses elements of scenarios, wargaming, and foresight, as well as things people are more familiar with, like RPGs, immersive theater, and competitive socializing, to gain the insights from the former while learning from the experience and engagement of the latter. And it also has a little engine in the back that’s powered by AI, which Scott can talk to you about later.
Adrian Tennant: Okay, well that seems like a perfect time to ask. So Scott, how does Foom differ from traditional scenario planning exercises? What makes it unique?
Scott Smith: So for people who aren’t familiar with scenario planning, traditional scenario planning brings a group of experts together. It catalogs their knowledge and assumptions, identifies some big uncertainties, and uses those uncertainties to generate, let’s say, four possible exclusive futures. Some people may have seen the two-by-two matrices that you get from McKinsey or other big consultants. And those are sort of either-or worlds. They either exist or they don’t. And so they’re based on a set of conditions that say the world is likely to evolve down this pathway or that pathway, generalizing. But that’s the essence. So they sketch out these big macro futures that are dependent on a few key assumptions being true. So you can think about it as a process that creates four stories, it locks them in a nice document, you get your invoice from the consultant at the end, you can assess your strategy, and then they probably end up on a shelf somewhere. Foom comes at it more like a game or a film or a play where you’re all in a common future that’s unfolding in front of you, and the shape and the intensity of that future depends on the decisions that you’re making and the other stakeholders involved are making at the same time, multiple teams. To steal a phrase from Donald MacKenzie, it’s an engine, not a camera. It’s running, not making a snapshot. Foom kind of gives participants challenges and evolves the situations that we’ve designed around them each time through their decision-making. So it allows you to kind of walk around in the future you generate, to deal with it, to make decisions based on it, to forge coalitions that may be able to help you make progress or constrain things, and really to reckon with the consequences of those choices. So the phrase that always comes to mind for me is: you push the world and the world pushes back in Foom.
Adrian Tennant: In Foom, participants are divided into different stakeholder teams. I’m curious, why this particular structure and what have you observed about how people inhabit these roles?
Susan Cox-Smith: For this particular iteration of Foom, focused on AGI, we divide the participants into five teams: users, which represent the public; business; developers; policymakers; and activists. It’s always been our practice that we try to mix up teams as much as possible and put people in stakeholder positions that are not their usual place. So if we have devs or C-suite executives in the room, they might be assigned to play a user or an activist. We’ll put someone who knows nothing about AI in as a policymaker or even as a developer. It forces people to see the world from a different point of view.
Adrian Tennant: I love that. Scott, I know you’ve run Foom with various organizations worldwide, including government bodies, media companies, and creative groups. Have you noticed regional or sectoral differences in how participants approach AI governance and decision-making?
Scott Smith: We’re definitely beginning to see what we think are common patterns. There are definitely different cultures, for example, around innovation versus regulation. Just, you know, you can imagine we haven’t run it yet inside the US, but you can imagine it’s a very innovation-focused culture, you know, where AI is very strong in the kind of economic imagination. But clearly there are other places in the world with a precautionary principle where regulation comes first or strongly. So, you know, we see that kind of shift in different areas of the world. It also speaks a little bit to how competitive that economy feels like it needs to be. Some places, for example, we’ve seen it’s acceptable that policymakers will kind of step in early and protect businesses and the public as a first principle, versus where the regulator’s job might be to kind of be a bit more invisible, step back and let innovation roll on. And they don’t always flag that at the beginning. We have to kind of watch it emerge over time through their actions. So there’s that. One consistent element seems to be that no matter where we are, the public, consumers, users are overlooked, which I think is true in the world in general. People tend to, you know, make deals and make announcements without speaking for the customer, and not really taking their strong needs into consideration. So here, you know, you’ll see the user group reach out to policymakers or activists or both as a shield. These teams have weightings that reflect their power in the real world, we think. AI developers, they don’t need anybody. They’re happy to make their own world. We’ve had, in one instance, the AI developers basically literally get up and leave the room as a representation of, you know, we don’t need this. Business over there says they can do it themselves. Fine. We’re off to our own island. So it feels very much like the real world, doesn’t it? So figuring out how to, you know, build coalitions is a really important part of this. Like who else do I need to work with, and not, you know, go alone in the world to reach my end goals. And so that’s a big difference. I think with Foom, we ask the teams at the beginning to define who are you, what is winning in your view, you know, for the people actually on that team, and it’s so different every time. And then what would you give up in order to, you know, reach your objectives or your goals? And, you know, who can help you get there? So those patterns definitely have variations depending on where we are.
Adrian Tennant: I’m curious as well, what’s the typical timeframe that you’re looking at? What’s the time horizon for a Foom simulation?
Scott Smith: The time horizon generally, we say we’re starting about six months in the future. So we’re just far enough forward that there’s a little room for there to have already been some change, but we’re not throwing them into 2050. It is intentionally not a clock. We don’t think in terms of, like, quarters, months, years. We run it kind of in progress time, whatever that is. And of course we see AI progress happen. You know, you can have a year in a week, and then it will slow, so you’re kind of intentionally playing without a ticking chronological clock in the background.
Adrian Tennant: That’s super interesting. Susan, as the facilitator of these sessions, what’s your role during a Foom simulation, and how do you help guide the participants through the experience?
Susan Cox-Smith: Right. Well, I’ll say as a former theater kid and group fitness instructor, I feel pretty comfortable taking on the role of host. I’m less of a facilitator, though we do usually have someone in the room in that capacity, and I’m more of an emcee. Once we get started, I can push and pull the different teams by encouraging them or asking direct questions about their decision making. For example, in one session, I, quote unquote, accidentally called the users team the losers team, because they were being very passive and accepting of all the decisions that were being made around them. And it definitely got them to be more aggressive in their demands to the other groups. And it really made the experience more intense and fun. So that’s pretty much a good example of, you know, how I try to run the room.
Adrian Tennant: Scott, one aspect of Foom that seems particularly powerful is participants’ immediate feedback on their decisions. Can you share a specific example of how this dynamic response system has led to a meaningful insight or an aha moment for participants?
Scott Smith: So just to help people understand, so in Foom, each round contains headlines about the world that kind of give teams their sense of what’s happening, what kind of progress has happened, what changes have happened, what’s going on, who’s developing or launching what. And so having seen these and had time to digest them, and they kind of happen at different levels, the teams then make a decision. They come to a kind of common vote as a team as to whether they want to continue development. Do we let the world keep rolling on as it is? Do we regulate it or put some kind of guardrails around it, or do we stop it? And so the aggregate of these votes is what drives AI progress towards, as Susan mentioned, AGI or superintelligence, kind of towards a target. We’ve got a GPT trained in the background, an AI that looks at those votes and anthropomorphizes it, something I hate. It says, huh, okay, maybe those decisions you made will work out, but maybe they’ll have unintended consequences as well. That’ll trigger something in the world or some change that you didn’t see coming. And that’s where the sort of the light bulb starts to go off. It creates these kinds of new headlines, illustrates that, and gives them the new world. There’s a moment people take and they’re like, wow, everything we decided didn’t work out the way we thought it would. Actions have reactions. Power shifts in the room. Markets may slow more than you want, or they may speed up more than you want. So they’re always trying to figure out how to ride that dragon. And that responsiveness, the feedback in the game, you know, I said, you push the world, it pushes you, is there to kind of build a sort of strategic muscle memory. Oh, okay, I’m actually getting to walk through this process without breaking anything. I can make mistakes. So it’s like training in sports, but it’s unpredictable. So the closer we get to the end of the session, if AI is sort of progressing too quickly, people may start seeing how their choices have added up. And more often we see them kind of go, I wouldn’t say aha, but more like, oh, blank. I should have thought about something earlier. I should have cared earlier. I should have been pushing earlier. So there’s definitely a common factor there of, like, teams may try to get as close to, you know, the sort of superintelligence losing control as they can, but still maintain the power of AI, but pull back from the edge. It’s pretty hard when you’re driving a vehicle with five other teams to figure out where the brakes are or where the gas is. And so that’s a big aha. It’s like, oh, we need to be actually collaborating and talking to other parties and other stakeholders here. So I think that’s been the biggest one that we kind of enjoy seeing take shape.
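For readers who want to see the shape of that loop more concretely, here is a minimal, hypothetical sketch in Python of a Foom-style round: weighted team votes are aggregated into a change in progress toward an AGI threshold, and a stub stands in for the trained GPT that writes the next round’s headlines. The vote effects, team weightings, and headline text are illustrative assumptions, not Changeist’s actual engine.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical effect of each vote on AI progress; the real weightings are not public.
VOTE_EFFECT: Dict[str, float] = {"continue": 1.0, "regulate": 0.4, "stop": -0.5}

@dataclass
class Team:
    name: str
    power: float            # assumed weighting reflecting real-world influence
    vote: str = "continue"   # "continue", "regulate", or "stop"

@dataclass
class World:
    progress: float = 0.0    # 0.0 = today, 1.0 = the AGI threshold
    headlines: List[str] = field(default_factory=list)

def run_round(world: World, teams: List[Team]) -> World:
    """One round: aggregate weighted votes, move the progress bar, then
    generate consequence headlines (stubbed; the real game uses a trained GPT)."""
    total_power = sum(t.power for t in teams)
    delta = sum(VOTE_EFFECT[t.vote] * t.power for t in teams) / total_power
    world.progress = min(1.0, max(0.0, world.progress + 0.1 * delta))
    world.headlines = [
        f"{t.name.title()} voted '{t.vote}'; AI progress now at {world.progress:.0%}"
        for t in teams
    ]
    return world

if __name__ == "__main__":
    teams = [
        Team("users", 0.5), Team("business", 1.2), Team("developers", 1.5),
        Team("policymakers", 1.0), Team("activists", 0.6),
    ]
    world = World()
    for round_number in range(1, 4):
        world = run_round(world, teams)
        print(f"Round {round_number}: progress {world.progress:.2f}")
        for headline in world.headlines:
            print("  -", headline)
```

In this toy version, a few rounds of mostly "continue" votes push the progress bar steadily toward the threshold, which is the dynamic Scott describes: the aggregate of team decisions, not any single team, drives how fast the world moves.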
Adrian Tennant: Let’s take a short break. We’ll be right back after this message.
Alan Barker: Hello, I’m Alan Barker, the author of “The Complete Copywriter: The Definitive Guide to Marketing with Words,” published by Kogan Page. I’ll show you how to exercise your creativity, generate powerful ideas, maintain reader attention, and bring your copy to life. You’ll also learn how to develop a coherent content strategy, how to survive as a copywriter, and how to nurture a satisfying career. Whether you’re a professional writer already, a brand manager, or someone who creates content as part of another kind of job, this book will help you to develop the skills to craft compelling, customer-focused copy. As a listener to IN CLEAR FOCUS, you can save 25 percent on “The Complete Copywriter” when you order directly from Kogan Page. Just enter the exclusive promo code BIGEYE25 at the checkout. Shipping is always complimentary for customers in the US and UK. I hope my book helps you to become a more versatile, effective, and confident copywriter. Thank you!
Adrian Tennant: Welcome back. I’m talking with Scott Smith and Susan Cox-Smith, co-founders of strategic foresight consultancy Changeist, and creators of Foom – a simulation for exploring AI futures. Susan, when we were preparing for this podcast, you mentioned that a friend of yours described Foom as an escape room for strategy. I love that metaphor. Could you elaborate on what that means in practice and how the simulation creates that immersive high-pressure environment?
Susan Cox-Smith: That description came to us from an early conversation we had with a very smart person who knows media, communications, and entertainment. And the conversation was very much around, you know, how do we describe this? You know, or how do you envision this? Or what, you know, what comes to mind? And it was literally a throwaway line as he was mulling over our explanation of how we wanted Foom to work as an experience. So in every iteration of play, that phrase has become more accurate. And we feel very good about that because his description sort of set us a goal. We’re working right now on adding some new elements that turn up the temperature in each round, so to speak, as the progress bar gets closer to AGI. And we already use responsive content around the live newscast that reflects the decision making in the previous rounds. But around round three, participants really start to understand what’s happening, when I’m also giving them some new information about how their team votes are impacting progress, or maybe they’re not, actually. So there’s sort of a light bulb moment just around round three when people do start to get that understanding that, like, we have to be strategic about what we’re doing here. We can’t just have a discussion around the table and go, oh, well, we just think we’re going to continue because we like where things are going.
Adrian Tennant: How long does a Foom simulation typically last?
Susan Cox-Smith: Typically between two and a half and three hours. It depends on how much time we can get for a little bit of a debrief at the end, where we can have a conversation with the participants about how they understood the experience.
Adrian Tennant: Excellent. Beyond AI-specific insights, what have you discovered about organizational decision-making more broadly through running Foom sessions?
Scott Smith: Wow, there are so many little things that come up, and we keep notes about them and try to debrief ourselves afterwards so we can track over time, you know, what are the big learnings that we’re getting from watching these different groups? I mentioned choice regret, you know, wishing they had been more active earlier; not focusing on having influence early enough is a big issue. And I think that can often be the case in big organizations. They’ll wait until they’re forced into a position or forced into decision making. I also talked about that repeated powerlessness of the public versus insiders. The big players can shape the game and you as the public have to kind of take what happens. I think one pleasant surprise that we’ve found is how much people lean into the role playing. They exercise their team identity and really get into it from the beginning. We didn’t anticipate that. It was a minor kind of feature at the very beginning and we saw how much people jumped in with both feet. That ability to push the edge in a risk-free environment, to take positions you wouldn’t normally take because you can’t without risk at work. They’re doing it. I mean, some are having fun. Actually, they’re all having fun. There’s a lot of enjoyment, but they’re doing it to test the edges. And some may be accelerationist developers or a ranty commenter. We’ve had a wonderful early user group that was a really mixed bag of cranks. And they were pushing back from the outset. Or we have these hyper-cautious policymakers. We had one team that was really funny. They were younger government players just coming into civil service. And we worked out over time, they were kind of playing their more conservative bosses, both to sort of, like, show what happens when you’re that conservative. But I think they’re also having a good time kind of walking around in their seniors’ shoes. So it can be kind of telling they’re doing it for reasons that maybe they understand best.
Adrian Tennant: Susan, your work has always emphasized making futures thinking more accessible to a wider audience. How does Foom help you further that mission?
Susan Cox-Smith: Well, funnily, I always say that we set futures thinking aside for futures doing. That is, actively participating in signal scanning and critical thinking about possible futures. Anyone can do it. Foom is very public-facing, and our intention is to help non-futures people understand that they can be better prepared for strategic and even visionary decision-making if they just build that muscle and become more comfortable with uncertainty. Foom is ostensibly about reaching AGI, but it’s really about making thoughtful, reasonably informed decisions about future possibilities.
Adrian Tennant: Got it. I’m curious, Scott, what kind of preparation goes into customizing a Foom session?
Scott Smith: So, before we actually go in and talk to those organizations, we’re keeping up our own kind of research and horizon scanning around what’s happening in the general market, but also regionally or sectorally. So we kind of start from that basis. We’ll then talk to somebody on the organization side and try to work out what are the kind of key pressure points? Like, what are the tensions? What are the issues that they’re struggling with or trying to decide about? What are some things that, if we moved them or altered those factors, might kind of create uncertainty? Because we want that in the game. We want there to be, or in the kind of exercise, there needs to be something that is meaningful. And once we sort of identify those, we can actually build them into the prompt and the training that we’re using for the GPT to help us shape the trajectory of the scenarios and the headlines in the right direction, making sure, you know, it’s kind of like going to physiotherapy, that we’re pushing on the right kind of knotty points in the organizational muscle and trying to figure out how to make it work better. So I’d say, you know, that outside knowledge plus the inside sense of what’s critical comes together. And then we’re able to use that so that no two games are the same. No two sessions have the same content, worlds, dynamics, et cetera.
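As an illustration of what building those pressure points into the prompt might look like, here is a small hypothetical Python helper that assembles a round prompt from an organization’s sector, its key tensions, and the previous round’s decisions. The structure and wording are assumptions made for the sake of the example; Changeist’s actual prompt and training setup are not public.

```python
from typing import Dict, List

def build_round_prompt(sector: str, tensions: List[str], decisions: Dict[str, str]) -> str:
    """Assemble a hypothetical scenario-generation prompt that steers the next
    round's headlines toward the organization's own uncertainties (illustrative only)."""
    tension_lines = "\n".join(f"- {t}" for t in tensions)
    decision_lines = "\n".join(f"- {team}: {vote}" for team, vote in decisions.items())
    return (
        f"You are generating news headlines for a simulated near future in the {sector} sector.\n"
        f"Key tensions to pressure-test:\n{tension_lines}\n"
        f"Decisions made by stakeholder teams this round:\n{decision_lines}\n"
        "Write five short headlines, including at least one plausible unintended consequence."
    )

if __name__ == "__main__":
    print(build_round_prompt(
        sector="public media",
        tensions=["talent displacement by generative tools", "dependence on platform AI providers"],
        decisions={"developers": "continue", "policymakers": "regulate", "users": "regulate"},
    ))
```

The point of a template like this is the one Scott makes: the organization’s own tensions, gathered before the session, are what make the generated headlines land close enough to home to feel meaningful.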
Adrian Tennant: Do you have any plans to extend Foom beyond simulating AGI futures?
Scott Smith: It was always meant to be something of a kind of simulation engine that we could put other frontier risks on top of. To take AI out, for example: one big issue that I’ve been looking at for a number of years now is climate engineering, something very few people understand, but it could be a huge risk. It’s actually seeding clouds, trying to change the weather, you know, trying to actually intervene to deal with global warming. One of our very first test runs of this involved that topic, and it was fascinating because you can’t call in a consultant to fix the climate. You have to make some really big political decisions that can affect the country next door, for example. So things like epidemic and pandemic management, timely. Dealing with rogue political actors, very timely. So there are other applications for this, we think, just beyond looking at AGI and AI.
Adrian Tennant: What’s next for Foom and Changeist?
Scott Smith: So we try to keep this balance between strategic value and experience. It can’t lean too far into just the kind of tabletop war game, but it also can’t go too far into the theater. So the two feed each other if they’re done right. So the engine underneath is getting a bit more sensitive. We learn every session, we tune, we find new points to intervene and inject uncertainty in the places where it’s most helpful. We’ve also, practically speaking, added a consultative follow-on so that when someone comes out of a session and we have these really interesting insights in the debrief, we can say, great, now how do we use that? That was always the intention. We wanted to get the experience right first, but how do I apply that to my organization? We already create a pretty sizable report that just comes out of the behavior in the game itself or in the experience itself. But we’re also investing in setting the mood. We definitely feel like that has an impact. It’s sort of a magic circle. People walk into that space. The lighting is different. The soundscape is different. How the world literally responds in terms of sensory, you know, interaction makes a difference. So the seven channels of news, we have a fantastic live newscaster that works with us. So there’s a lot going in here. It’s not just your boss’s board meeting. It’s an experience with a difference.
Adrian Tennant: Susan, for organizations or individuals interested in experiencing Foom or working with Changeist, what’s the best way to connect with you?
Susan Cox-Smith: The easiest way to learn more about Foom is to go to our website, and that is foom.live, F-O-O-M dot L-I-V-E. And then just general contact would be at our Changeist website. We’re also on Bluesky, Instagram, and LinkedIn.
Adrian Tennant: Scott and Susan, as always, a fascinating conversation. And thank you so much for being our guests again this week on IN CLEAR FOCUS.
Scott Smith: We’ve loved it. Thank you.
Susan Cox-Smith: Thank you.
Adrian Tennant: Thanks again to my guests this week, Scott Smith and Susan Cox-Smith, co-founders of strategic foresight consultancy Changeist, and the creators of Foom. As always, you’ll find a complete transcript of our conversation with timestamps and links to the resources we discussed on the IN CLEAR FOCUS page at Bigeyeagency.com. Just select ‘Insights’ from the menu. Thank you for listening to IN CLEAR FOCUS, produced by Bigeye. I’ve been your host, Adrian Tennant. Until next week, goodbye.
TIMESTAMPS
00:00: Introduction to AI Opportunities and Challenges
00:24: Welcome to IN CLEAR FOCUS
00:47: Strategic Foresight in Uncertain Times
01:08: Introducing Scott Smith and Susan Cox-Smith
01:51: Exploring AI Futures with Foom
02:17: What is Changeist?
03:31: Complementary Skills in Futures Work
04:43: Impact of AI on Changeist’s Work
05:58: Introducing Foom: The Strategic Simulation
07:25: Foom vs. Traditional Scenario Planning
08:16: Stakeholder Teams in Foom
10:04: Regional Differences in AI Governance
12:58: Time Horizon for Foom Simulations
13:34: Role of the Facilitator in Foom
14:32: Dynamic Feedback in Foom
17:28: Escape Room for Strategy: Foom’s Immersive Experience
20:26: Duration of Foom Simulations
20:49: Insights on Organizational Decision-Making
22:49: Making Futures Thinking Accessible
23:30: Preparing for a Foom Session
24:52: Extending Foom Beyond AI Futures
25:48: Future Developments for Foom and Changeist
27:15: Connecting with Changeist and Foom
27:43: Conclusion and Thanks