Content provided by Ross Dawson. All podcast content including episodes, graphics, and podcast descriptions is uploaded and provided directly by Ross Dawson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.
Next-level thinking, sense-making, and decisions for an accelerating world
“We humans often tend to be very restricted—even when we are world champions in a game. And I’m very optimistic that AI will surprise us, with very different ways of solving complex problems—and we can make use of that.” – Jennifer Haase

About Jennifer Haase
Dr. Jennifer Haase is a researcher at the Weizenbaum Institute and a lecturer at Humboldt University and the University of the Arts Berlin. Her work focuses on the intersection of creativity, artificial intelligence, and automation, including AI for enhancing creative processes. She was named one of the 100 most important minds in Berlin science.
Website: Jennifer Haase
LinkedIn Profile: Jennifer Haase

What you will learn
Stumbling into creativity through psychology and tech
Redefining creativity in the age of AI
The rise of co-creation between humans and machines
How divergent and reverse thinking fuel innovation
Designing AI tools that adapt to human thought
Balancing human motivation with machine efficiency
Challenging assumptions with AI’s unconventional solutions

Episode Resources
Websites & Platforms: jenniferhaase.com, ChatGPT
Concepts & Technical Terms: Artificial Intelligence (AI), Human-AI Co-Creativity, Generative AI, Large Language Models (LLMs), ChatGPT, GPT-4, GPT-3.5, GPT-4.5, Business Informatics, Psychology, Creativity, Divergent Thinking, Convergent Thinking, Mental Flexibility, Iterative Process, Everyday Creativity, Alternative Uses Test, Creativity Measures, Creative Performance

Transcript

Ross Dawson: Jennifer, it’s a delight to have you on the show.

Jennifer Haase: Thanks for inviting me.

Ross: So you are diving deep, deep, deep into AI and human co-creativity. So just to hear—just back a little bit—sort of how you’ve embarked on this journey. I mean, love to—we can fill in more about what you’re doing now. But how did you come to be on this journey?

Jennifer: I would say overall, it was me stumbling into tech more and more and more. So I started with creativity.
My background is in psychology, and I learned about the concept of creativity in my Bachelor studies, and I got so confused, because what I was taught was nothing like what I thought creativity was—or how it felt to me. It took me years to understand that there are a bunch of different theories, and it was just one that we were taught. But that was the spark of the curiosity for me to try to understand this concept of creativity. And I did it for years. Then, by pure luck, I started a PhD in Business Informatics, which is somewhat technical. The lens of how I looked at creativity shifted from the psychological perspective more into the technical realm, and I looked at business processes and how they are advanced by general technology—basic software, basically. Then I morphed—also, by sheer luck—I morphed into computer science from a research perspective. And that coincided with ChatGPT coming around, and this huge LLM boom happened two, three years ago. And since then, I’m deeply in there. I just fell, fell in this rabbit hole.

Ross: Yeah, well, it’s one of the most marvelous things. So the very first use case for most people, when they first use ChatGPT, is: write a poem in the style of whatever, or essentially creative tasks. And pretty decently does those to start off—until you sort of started to see the limitations at the time.

Jennifer: Yeah, and I think it did so much. It’s so many different perspectives. I think we—as I said, I studied creativity for quite a while—but it was never as big of a deal, let’s say. It was just one concept of many. But since AI came around, I think it really threatened, to some part, what we understood about creativity, because it was always thought of as this pinnacle of humanness—right next to ethics. And I think intelligence had its bumps two or three decades ago, but for creativity, it was rather new. So the debate started of what it really means to be creative.
I think a lot of people also try to make it even bigger than it is. But I think it is as simple as—a lot about creativity is, for example, in terms of poets—poetry is language understanding, right? And so LLMs are really good at it. And it’s just the case. It’s fine. I think we can still live happy lives as humans, although technology takes a lot over.

Ross: Yes. So humans are creative in all sorts of dimensions. AI has complementary—let’s say, also different—capabilities in creativity. And in some of your research, you have pointed to different levels of how AI is supporting us in various guises—through being a tool and assistant, through to what you described as the co-creation. So what does that look like? What are some of the manifestations of human-AI co-creativity, which implies peers with different, complementary capabilities?

Jennifer: Yeah, I think the easiest way to look at it is if you imagine working creatively with another person who is really competent—but the person is a technical version of it, and usually we call that AI, right? Or generative AI these days. So the idea is that you can work with a technical tool from an eye-to-eye level. Really, the tool would have a—well, now we’re getting into the realm of using psychological terms, right—but the tool would have a decent enough understanding so it would appear competent in the field that you want to create. I think the biggest difference we see to most common tools that we have right now—which I would argue are not on this level yet—tools like ChatGPT and others, they follow your lead, right? If you type in something, they will answer, sometimes more or less creatively. But you can take that as inspiration for your own creativity and your own creative process. And that really holds big potential. It’s great.
But what we are envisioning—and seeing in some parts already happening in research—I think this is the direction we’re going to and really want to achieve more: that we have tools that can also come up with ideas, or important input for the creative problem. Not—when I say on their own—I don’t mean that they are, I don’t know, entities that just do. But they contribute a significant, or really a significant part of the creative process.

Ross: So, I mean, we’ll come back a little bit to the distinctions between how AI creativity contrasts to human creativity. But just thinking about this co-creative process—from your research or other research that you’re aware of—what are the success factors? What are the things which mean that that co-creation process is more likely to be fruitful than not?

Jennifer: I think it starts really with competence. And I think this is something, in general, we see that generative AI just became extremely good at, right? They know, so to speak, a lot and tailor a lot of knowledge, and that is very, very helpful—because we need broad associations, coming from mostly different fields, and connect that to come up with something we consider new enough to call it creative. That is a benefit that is beyond human capabilities, right? What we see right now those tools are doing—that is one part. But that is not all. What you also need is the spark of: why would something need to be connected? And I think that is especially where raising the creative questions, coming up with the goal that you want to achieve something too, is still the human part. But—it doesn’t need to be. That’s all I’m saying. But still, it is.

Ross: So, I mean, there are some—very crude workflows, as in, you get AI to ideate, then humans select from those, and then they add other ideas, or you get humans and then AI sort of combines, recombines. Are there any particular sequences or flows that seem to be more effective?

Jennifer: It’s interesting.
I think this is also an interesting question for human creative work alone, even without technology—like, how do you achieve the good stuff, right? And I think what you just described, for me, would be kind of like a traditional way of: oh, I have a need, or I have a want—like, I want to create something, or I want to solve something, or I need a solution for a certain problem. And I describe that, and I iterate a best solution, right? This is part of what we call the divergent thinking process. And then, at a certain point, you choose a specific solution—so you converge.

But I think where we have mostly the more interesting creative output—for humans and now also especially with AI—is that you kind of reverse the process. So let’s assume you have a solution and you need to find issues for it. For example, you have an invention. I think—yeah, I think it was that there’s this story told about the Post-its, you know, the yellow Post-its. So they were kind of invented because someone came up with glue that does not stick at all—like, really bad glue. And they had this as the final product. Now it’s like, “Okay, where can you make use of it?” And then they came up with, “Oh, maybe, if you put it on paper, you can come up with these sticky notes that just glue enough.” So they hold on surfaces, but they don’t stick forever, so you can easily erase them. They’re very practical in our brainstorming work, for example. And this kind of reverse thinking process—it’s much more random. And for many people, it’s much more difficult to open up to all the possibilities that can be. What I’ve seen is that if you try to poke LLMs with such very diverse, open questions, it can be very interesting what kind of comes out there.

Ross: Though, to your point, I mean, this is the way—the human frames, the AI can respond. But the human needs to frame—as in, “Here is a solution.
What are ways to be able to apply?”

Jennifer: And all the examples—like, what I’m thinking of right now—is what is working with the tools that we have with LLMs. And I think what you were asking me before about the fourth level that we described with this co-creation—these are tools that work a bit differently. These are tools that, for now, mostly exist in research because you still need a high level of computational knowledge. So, the work that I did—the colleagues that I work with—are from computer science or mathematicians who program tools that know some rules of the game, or some—let’s call them—boundary conditions of our creative problem that we are dealing with. And then the magic—or the black box magic—of AI is happening. And something comes out. And sometimes we don’t really understand what was going on there. We just see the results. And then, with such results, we can iterate. Or maybe something goes in the direction as we assume could be part of the solution.

So it becomes this iterative process between an LLM or AI tool doing something, we’re seeing the results, saying yes or no, nudging it into different directions, and so, overall, coming up with a potentially proper solution. This is—at least in the examples that we see. And if you have such a process and look over it, like what was happening, often what we see is that LLMs or AI tools in general—with their, let’s call it, broad knowledge, or the very intense, broad computational capacities that they have—they do stuff differently than we as humans tend to do stuff. And this is where it becomes interesting, right? Because now we are not bounded in this common way of thinking and finding associations, or iterating smaller solutions. Now we have this interesting artificial entity that finds very different ways of solving complex problems—and we can make use of that. Of course, we can learn from that.

Ross: Absolutely. And I think you’ve pointed to some examples in your papers.
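The iterative process Jennifer describes (an AI tool proposing within boundary conditions, a human saying yes or no and nudging the search direction) can be sketched as a simple loop. This is a minimal illustration, not any system from her research; `propose` is a stub standing in for a real generative model call:

```python
import random

def propose(seed, direction, rng):
    """Stand-in for a generative model: produce candidate ideas.

    A real system would call an LLM here; this stub just recombines
    the seed with the human's steering hints."""
    hints = direction or ["unexpected"]
    return [f"{seed} + {rng.choice(hints)} variant {i}" for i in range(3)]

def co_create(seed, accept, steer, max_rounds=10):
    """Iterate: the tool proposes, the human accepts/rejects and nudges."""
    rng = random.Random(0)
    direction = []
    for _ in range(max_rounds):
        for candidate in propose(seed, direction, rng):
            if accept(candidate):          # human says yes or no
                return candidate
        direction = steer(direction)       # human nudges the search
    return None

# Toy run: accept only candidates steered toward "reverse the problem".
result = co_create(
    "weak glue",
    accept=lambda c: "reverse" in c,
    steer=lambda d: d + ["reverse the problem"],
)
```

The point of the sketch is the control flow, not the stub content: the model does the generating, while the human supplies the acceptance criterion and the steering between rounds.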
I mean—other, sort of, I suppose we’ve been quite conceptual—so examples that you can give of either what people have done, or projects you’ve been involved with, or just types of challenges?

Jennifer: I think—to explain the mechanism that I’m talking about—I think the first creative, artificial example, like the real, considered properly creative example, was when AlphaGo, the program developed to play Go—the game similar to, or somewhat similar to, chess but not chess—when this tool was able to come up with moves, like play moves, which were very uncommon. Still within the realm of possibilities, but very, very uncommon to how humans used to play. And I think this was new back in 2016, right? When this happened—when DeepMind, from Google, built this tool and kind of revolutionized AI research. What it showed us is exactly this mechanism of these tools. Although they are still within the realm of possibilities—still within what we consider the rules, right, of the game—it showed some moves which were totally uncommon and surprising. And I think this shows us that we humans often tend to be very restricted. Even when we are world champions in a game, we are still restricted to what we commonly do—what is considered a good rule of thumb for success. And I’m very optimistic that AI will surprise us, like in this direction—with this mechanism—quite a lot in the future.

Ross: Yeah, and certainly, related to what you’re describing, some similar algorithms have been applied to drug discovery and so on. Part of it is the number-crunching, machine learning piece, but part of it is also being able to find novel ways of folding proteins or other combinations which humans might not have envisaged.

Jennifer: Yeah, exactly. And exactly—it’s in part because these machines are just so much more advanced in how much information they can hold and combine. This is, in part, purely computational. It’s a bit unfair to compare that to our limited brains.
But it’s not just that. It’s not just pure information, right? It’s also how this information is worked upon, or the processes—how information is combined, etc. So I think there are different levels of how these machines can advance our thinking.

Ross: So one of the themes you’ve written about is designing for synergies—how we can design so that we are able to be complementary, as opposed to just delegating or substituting with AI. So what are those design factors, or design patterns, or mentalities we need?

Jennifer: Well, I will propose, first up—I think it’s extremely complicated. Not complicated, but it will become a huge issue. Because, let’s say, if technology becomes so good—and we see that right now already with LLMs like ChatGPT—it’s so easy for us. And I mean that in a very neutral way. But lazy humans as we are—I think we are inherently lazy—it’s really tough for us to keep motivated to think on our own, to some degree at least, and not have all the processes overtaken by AI.

So, saying that, I think the most essential, most important part whenever we are working with LLMs is: we have to keep our motivation in the loop—and our thinking to some degree in the loop—within the process. And so, we need a design which engages us as humans. I think it’s easily seen right now with LLMs. When you need the first step in—like typing some kind of prompt, or even in a conversation—you have to initiate it, right? You have to come up with, maybe even, your creative task at first. And I think this will always be true, because we humans control technology by developing it, right? But even when you’re more on the user end—forcing us to be in the loop, and thinking it through, and controlling the output, etc.—is one part. But I think what it also needs, especially for the synergy, is for the technology to adapt to us—to serve us, so to speak. And I think this is an aspect that is a little bit underdeveloped right now. What do I mean by that?
I want a tool that serves me in my thinking. It should be competent enough that I perceive it as a buddy—eye to eye. That is the vision that I have. But I still always want the control. And I want it to adapt to me, and that I don’t have to adapt too much to the tool. Right now, we’re mostly just provided with tools that we need to learn how to deal with. We need to understand how prompting works, etc., etc. And I want that reversed. I want tools which are competent enough to understand, “Okay, this is Jenny. She is socialized in this way. She usually speaks German,”—whatever kind of information would be important to get me involved and understand me better. I think this is the vision for synergy that I’m thinking of.

Ross: No, I really like that: the idea of designing for engagement. What is going to make us want to be engaged, continue the process, and want to be involved, as opposed to doing the hard work of having to keep on telling the AI to do stuff?

Jennifer: Yes, and also sometimes—I mean, I work a lot with ChatGPT and other similar tools—and sometimes I’m like, I found myself, I hope I don’t spoil too much, but sometimes I find myself copy-pasting too much because there’s nothing left for me to do. And to some degree, it can happen that the tools are too good, right? Because they are meant to create the output as the output, but they are not meant to be part of this iterative thinking process. I think you can design it much better and easier to go hand in hand with what I’m thinking and what I want to advance. Maybe.

Ross: Yeah, yes, otherwise the onus is on the human to do it all. So in one of your papers, you identify—you used a number of the different models, and I believe you found that GPT-4 was the best for a variety of ideation tasks. But you’ve also done some more recent research.
I’d love to hear about strengths, weaknesses, or different domains in which the different models are good, or—

Jennifer: Yeah, that’s quite interesting, right? Because—okay, so going back to the start of the big—let’s call it the big boom of LLMs, right? I think it was early ’23, right, when ChatGPT came around. End of ’22. Okay, so it took a while when it reached Germany—it was for us. No, just joking. But okay, so around this time, what we found was intense debates arguing that, although these tools are generative, they cannot be creative. And that was the stance held tightest—maybe especially from creativity researchers and mostly psychologists, right? As I mentioned before, it’s a little bit of this fear that too much is taken over by technology. I think that is a strong contributor—even among researchers.

So what we went out to do is—we basically wanted to ask LLMs the same creativity measures as we would do for humans. Like, when you want to know if a person holds potential for creative thinking, you ask them creative questions, and they have to perform—if they want to. And that’s exactly what we did with LLMs. Back in the day, we did it with the LLMs that were easily reachable and free in the market—like ChatGPT. And now, we really redid it with the current LLMs, with the current versions. And—I don’t know if you’ve seen that—but most LLMs are advertised, when the new versions come out, usually they are advertised with: they are more competent, and they are more creative. And so we questioned that. Is that really true? Is ChatGPT 4.5, for example—the current version—is it more creative than 3.5 back in the day? And what we find is—it’s so messy, actually. Because for some tools, yes, they are a bit more creative than they used to be two years ago. But the picture is really not clear. You cannot really tell or say or argue that the current versions we are having are more creative than two years ago—or even more creative than humans. It’s been interesting.
We’re not really sure why. But all we can say is that, on average, these tools are as good at coming up with everyday-like uses or everyday-like ideas for everyday problems. They are, on average, as good as humans—random humans picked from surveys. And I think that is good news, right? Because LLMs are easier to ask than random humans most of the time. But the promise that they become more and more creative with every new release, in our perspective, does not hold up. So that is the bigger, bigger picture. Let’s start there.

Ross: So that’s very interesting. So this is using some of the classic psychological creativity tests. And so you’re applying what has for a long time been used for assessing creativity in humans, and simply applying exactly the same test to LLMs?

Jennifer: And to be fair, within the creativity research community, we agree that those tests are not good. Okay, they’re really pragmatic. We totally agree on that, so we do not have to fight for this point. But it’s commonly what we use to assess human potential for creative thinking—or even more concise, for divergent thinking—which is only one important, but just one aspect, of the whole creative journey, let’s say. And it basically just asks how good you are, on the spot, at coming up with alternative uses for everyday products like a shoe or toothbrush or newspaper. And of course, you can come up with obvious uses. But then there are the creative ones, which are not so easy to think of, right? And LLMs are good at that. They will deliver a lot of ideas, and quite a few of those are considered original compared to human answers. We also now used another test, which is a little bit more arbitrary even, but it proved to be somewhat of a good predictor for creative performance overall. And that is: you are asked to come up with 10 words which are as different from each other as possible. So very pragmatic again.
And these LLMs—as they, you know, know one thing, and that is language—are, again, quite good at that on average. But it’s not that you see that they are above average, or that a specific LLM would be above average. We see some variety, but the picture, I would say, is not too clear. And also, to mention—which was a little bit surprising to us, actually—is that those LLMs, we asked them several times, like, a lot of times, and the variance in terms of originality—the variance is quite huge. So if you ask an LLM like ChatGPT for creative ideas, sometimes you can have quite a creative output, and sometimes it’s just average.

Ross: So you did say that you’re comparing them to random humans. So does that mean that generally perceived-to-be-creative humans are significantly outperforming the LLMs on these tasks?

Jennifer: Yeah, yeah. So, but the thing is, there is usually no creative human per se. So there’s nothing about a human that makes a human per se creative. We tend to differ a little bit on how well we perform on such tasks. Yes, we do differ in our mental flexibility, let’s say. But a creative individual is usually an individual which found a very good fit between their thinking, their experience, and the kind of creative task they’re doing. And just think about it, because this creativity can be found in all sorts of domains, right? And people can be good or less good in those domains, and that correlates highly with the creativity. So when we ask about the general, like, the ideas for everyday tasks, there is not really the creative individual, right? They are motivated individuals, which makes a huge difference for creativity measures. If you’re motivated and engaged—that is something we take for granted. For LLMs, I guess if you compare them, the motivation is there. But what we see in terms of the best answers—the most original answers in our data sets—most of the time, not all, but most of the time, they come from humans.

Ross: Very interesting.
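The second test Jennifer mentions (name 10 words as different from each other as possible) is typically scored as the average pairwise semantic distance between the words. Here is a rough sketch of that scoring idea, using tiny hand-made vectors in place of real word embeddings; an actual implementation would use trained embeddings such as GloVe, and these particular words and vectors are invented for illustration:

```python
from itertools import combinations
from math import sqrt

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return 1 - dot / (nu * nv)

def divergence_score(words, embed):
    """Mean pairwise semantic distance over all word pairs."""
    pairs = list(combinations(words, 2))
    return sum(cosine_distance(embed[a], embed[b]) for a, b in pairs) / len(pairs)

# Toy 3-d vectors standing in for real word embeddings.
embed = {
    "cat":    (1.0, 0.1, 0.0),
    "dog":    (0.9, 0.2, 0.0),
    "kitten": (1.0, 0.2, 0.1),
    "lava":   (0.0, 1.0, 0.0),
    "irony":  (0.0, 0.0, 1.0),
}

similar = divergence_score(["cat", "dog", "kitten"], embed)  # near-synonyms
diverse = divergence_score(["cat", "lava", "irony"], embed)  # unrelated words
```

Under this scoring, a semantically scattered word list gets a higher score than a list of near-synonyms, which is what makes it usable as a quick divergent-thinking proxy for both humans and LLMs.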
So, this is the Amplifying Cognition podcast, so I want to sort of round up by asking: all right, so what’s the state of the nation or state of the world, and where we are moving in terms of being able to amplify and augment human cognition, human creativity? So I suppose that could be either just improving human creativity, or collaborating, or, you know, this co-creativity.

Jennifer: I think the potential for significant improvements and amplifications has never been better. But at the same time as I’m saying that, I think the risks have never been higher. And that is because, as I said, we are lazy people. That’s just what being human means—and that is fine—but it also means that we have a great risk of using these technologies not for us, but being used by them, basically, right? So we can use ChatGPT and other tools to do the task for us, or we can use them to do the task more efficiently and better with them. I think this difference can be very gradual, very minor, but it makes the whole difference between success and big dependencies—and potentially failure.

Ross: Yeah, and I think you make a point—which I often also do—which is that over-reliance is the biggest risk of all, potentially. Where, if we start to just sort of say, “This is good, I’ll let the AI do the task, or the creativity, or whatever,” it’s dangerous on so many levels.

Jennifer: Because it does well enough most of the time, right? Technology became so good for many tasks—not all, but many tasks—that it does them well enough. And I think that is exactly where we have the potential to become so much better, right? Because if you now take the time and effort that we usually would put into the task itself, we could just improve on all levels. And that is the potential I’m talking about. I think a lot is to be advanced, and a lot is to be gained—if we play it right.

Ross: And so, what’s on your personal research agenda now?

Jennifer: Oh, I fell into this agentic LLM hole.
Yeah, no, no—it’s not just looking at individual LLMs, but to chain them and combine them into bigger, more complex systems to have—or work on—bigger and complex issues, mostly creative problems, and see where the thinking of me and the tool, yeah, excels, basically, right? And where do I, as a human, have to step in to fine-tune specific bits and pieces and really find the limits of this technology if you scale it up? That’s my agenda right now.

Ross: I’m very much looking forward to reading the research as you publish it.

Jennifer: Thank you.

Ross: Is there anywhere people can go to find out more about your work?

Jennifer: Yeah, I collect everything on jenniferhaase.com. That’s my web page. It’s kept up to date there, and you can find talks and papers.

Ross: Fabulous. Love the work you’re doing. Jennifer, thanks so much for being on the show and sharing.

Jennifer: Thank you very much. It was—yeah, I love to talk about that, so thanks for inviting me.

The post Jennifer Haase on human-AI co-creativity, uncommon ideas, creative synergy, and humans outperforming (AC Ep83) appeared first on Amplifying Cognition.
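The agentic chaining Jennifer describes (linking LLMs into a bigger system, with the human stepping in to fine-tune between stages) might look schematically like this. The stages here are stubs, not real model calls, and the stage names are invented for illustration:

```python
def chain(stages, task, review):
    """Run a task through a chain of agent stages.

    After each stage, a human review hook sees the intermediate
    result and may return an edited version (here it passes it
    through unchanged)."""
    result = task
    for name, stage in stages:
        result = stage(result)
        result = review(name, result)   # human steps in to fine-tune
    return result

# Stub agents standing in for chained LLM calls.
stages = [
    ("brainstorm", lambda t: t + " -> ideas"),
    ("critique",   lambda t: t + " -> ranked"),
    ("refine",     lambda t: t + " -> draft"),
]
out = chain(stages, "problem", review=lambda name, r: r)
# out == "problem -> ideas -> ranked -> draft"
```

The review hook is the point Jennifer emphasizes: the human stays in the loop between stages rather than only seeing the final output.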
“We should not make technology so that we can be stupid. We should make technology so we can be even smarter… not just make the machine more intelligent, but enhance the overall intelligence—especially human intelligence.” – Pat Pataranutaporn

About Pat Pataranutaporn
Pat Pataranutaporn is Co-Director of MIT Media Lab’s new Advancing Humans with AI (AHA) research program, alongside Pattie Maes. In addition to extensive academic publications, his research has been featured in Scientific American, MIT Tech Review, Washington Post, Wall Street Journal, and other leading publications. His work has been named in TIME’s “Best Inventions” lists and Fast Company’s “World Changing Ideas.”
Websites: MIT Media Lab, Advancing Humans with AI (AHA)
LinkedIn Profile: Pat Pataranutaporn

What you will learn
Reimagining AI as a tool for human flourishing
Exploring the Future You project and long-term thinking
Boosting motivation through personalized AI learning
Enhancing critical thinking with question-based AI prompts
Designing agents that collaborate, not dominate
Preventing collective intelligence from becoming uniform
Launching AHA to measure AI’s real impact on people

Episode Resources
People: Hal Hershfield, Pattie Maes, Elon Musk
Organizations & Institutions: MIT Media Lab, KBTG, ACM SIGCHI, Center for Collective Intelligence
Technical Terms & Concepts: Human flourishing, Human-AI interaction, Digital twin, Augmented reasoning, Multi-agent systems, Collective intelligence, AI bias, Socratic questioning, Cognitive load, Human general intelligence (HGI), Artificial general intelligence (AGI)

Transcript

Ross Dawson: Pat, it is wonderful to have you on the show.

Pat Pataranutaporn: Thank you so much. It’s awesome to be here. Thanks for having me.

Ross: There’s so much to dive into, but as a starting point: you focus on human flourishing with AI. So what does that mean? Paint the big picture of AI and how it can help us to flourish as who we are, in our humanity.

Pat: Yeah, that’s a great question.
So I’m a researcher at MIT Media Lab. I’ve been working on human-AI interaction before it was cool—before ChatGPT took off, right? So we have been asking this question for a long time: when we focus on artificial intelligence, what does it mean for people? What does it mean for humanity? I think today, a lot of conversation is about how we can make models better, how we can make technology smarter and smarter. But does that mean that we can be stupid? Does it mean that we can just let the machine be the smart one and let it take over? That is not the vision that we have at MIT. We believe that technology should make humans better. So I think the idea of human flourishing is an umbrella term that we use to describe different areas where we think AI could enhance the human experience. For me in particular, I focus on three areas: how AI can enhance human wisdom, wonder, and well-being. So: 3 W’s—wisdom, wonder, and well-being. We work on many projects to look into these areas. For example, how AI could allow a person to talk to their future self, so that they can think in the longer term, to see that future more vividly. That’s about enhancing wonder and wisdom. We think a lot about how AI can help people think more critically and analyze information that they encounter on a daily basis in a more comprehensive way. And for well-being, we have many projects that look at how AI can improve human mental health, positive thinking, and things like that. But at the end, we also focus on AI that doesn’t lead to human flourishing, to balance it out. We study in what contexts human-AI interaction leads to negative outcomes—like people becoming lonelier, or experiencing false memories, misinformation, and things like that. As scientists, we’re not overly optimistic or pessimistic. We’re trying to understand what’s going on and how we can design a better future for everyone. That’s what we’re trying to focus on. Yeah? Ross: Fabulous.
And as you say, there are many, many different projects and domains of research which you’re delving into. So I’d like to start to dive into some of those. One that you mentioned was the Future You project. So I’d love to hear about what that is, how you created it, and what the impact was on people being able to interact with their future selves. Pat: Totally. So, I mean, as I said, right, the idea of human flourishing is really exciting for us. And in order to flourish, like, you cannot think short term. You need to think long term and be able to sort of imagine: how would you get there, right? So as a kid, I was interested in sort of a time machine. Like, I loved dinosaurs. I wanted to go back into the past and also go into the future, see what would happen in the future, like the exciting future we might have. So I really love this idea of, like, having a time machine. And of course, we cannot do a real time machine yet, but we can make a simulation of a time machine that uses a person’s personal data and can extrapolate that, and use other data to kind of see, okay, if the person has this current behavior, things that they care about, what would happen down the road—like what would happen in the future. So we built an AI simulation that is a digital twin of a person. And we first ask people to kind of provide us with some basic information: their aspiration, things that they want to achieve in the future. And then we use the current behavior that they have to kind of create what we call a synthetic memory, or a memory that that person might have in the future, right? So normally, memory is something that you already experienced. But in this case, because we want to simulate the future self, we need to build memory that you did not experience yet but might actually experience in the future. 
So we use a language model combined with the information that the person gives us to create this sort of intermediary representation of the person’s experience, and then feed that into a model that then allows us to create human-like conversation. And then we also age the image of the person. So when the person uploads the image, we also use a visual model that can kind of create an older representation of that person. And then, combining these together, we are creating an AI-simulated future self that people can have a conversation with. So we have been working with psychologists—Professor Hal Hershfield from UCLA—who looks at the concept of future self-continuity, which is a psychological concept that measures how well a person can vividly imagine their future self. And he has shown that if you can increase this future self-continuity, people tend to have better mental health, better financial savings, better decisions, because they can kind of think for the long term, right? So we did this experiment where we created this future self system and then tested it with people and compared it with a regular chatbot and having no intervention at all. And we have shown that this future self intervention can increase future self-continuity and also reduce people’s anxiety as well. So they become much more of a future thinker—not only thinking about today’s situation, but also seeing the possibility of the future and having better mental health overall. So I think this is really exciting for us, because we built a new type of system, but also really showed that it had a positive impact in the real world. Ross: What was the range of ages of people who were involved in this research?
Pat: Yeah, so right now, the prototype that we developed is for a younger population—people that just finished college or people that just finished high school, people that still need to think about what their future might look like, people that still would benefit from having the ability to kind of think in the longer term. And right now, we actually have a public demo that everyone can use. So people can go to our website and then actually start to use it. You can also volunteer your data for research as well. So this is sort of an in-the-wild, or real-world, study. That’s what we are doing right now. So if people would like to volunteer their data, then we can also use the data to kind of do future research on this topic. But right now, the system has been used by people in over 190 countries, and we are really excited for this research to be in the real world and have people using it. Ross: Fabulous. We’ll have the link in the show notes. So, one of the other interesting aspects raised across your research is the potential positive impact of AI on motivation. I think that’s a really interesting point. Because, classically, if you think about the future of education, AI can have custom learning pathways and so on. But the role of human teachers, of course, is to inspire and to motivate and to engage and so on. So I’d love to hear about how you’re using AI to develop people’s positive motivation. Pat: Yeah, that’s a really great question. And I totally agree with you that the role of the teacher is to inspire and create this sort of positive reinforcement or positive encouragement for the student, right? We are not trying to replace that. Our research is trying to see what kind of tools the teacher can use to improve student motivation, right? And I think today, a lot of people have been asking, like, well, we have AI that can do so many things—why do we need to learn, right?
And we believe at MIT that learning is not just for the benefit of getting a job or for the benefit that you will have a good life, but it’s good for personal growth, and it’s also a fun process, right? Learning something allows you to feel excited about your life—like, oh, you can now do this, even though AI can do that. I mean, a car can also go from one place to another place, but that doesn’t mean we should stop walking, right? Or you can go to a restaurant and a professional chef can cook for you, but it’s also a very fun thing to cook at home, right? With your loved ones or with your family, right? So I think learning is a really important process of being human, and AI could make that process even more interesting and even more personal, right? We put a lot of emphasis on the idea of personalized learning, which means that learning can be tailored to each individual. People are very different, right? We learn in different ways. We care about different things. And learning is also about connecting the dots—things that we already know and new things that we haven’t learned before. How do we connect those dots better? So we have built many AI systems that try to address these questions. The first project we looked at was what happens if we can create virtual characters that can work with teachers to help students learn new materials. They can be a guest lecturer, they could be a virtual tutor that students can interact with in addition to their real teacher, right? And we showed that by creating characters based on the people that students like and admire—like, at that time, I think people liked Elon Musk a lot (I don’t know about now; I think we would have a different story)—but at that time, Elon Musk was a hero to many people. So we showed that if you learn from a virtual Elon Musk, people have a higher level of learning motivation, and they want to learn more advanced material compared to a generic AI.
So personalization, in this case, really helped with enhancing the feeling of personalization, and also learning motivation and a positive learning experience. We have shown this across different educational measures. Another project we did was looking at examples, right? When you learn things, you want examples to help you understand the concept, right? Sometimes concepts can be very abstract, but when you have examples, that’s when you can start to connect it with the real world. Here we showed that if we use AI to create examples that resonate with the student’s interests—like if they love Harry Potter, or, I don’t know, like Kim Kardashian, or whatever—Minecraft or whatever things that people like these days, right? Well, I feel like an old person now, but yeah, things that people care about. If you create an example using elements that people care about, we can also make the lesson more accessible and exciting for people as well, right? So this is a way that AI could make learning more positive and more fun and engaging for students. Yeah. Ross: So one of the domains you’ve looked at is augmented reasoning. And so I think it’s a particularly interesting point now. In the last six months or so, we’ve all talked about reasoning models with large language models—or perhaps “reasoning” in quotation marks. And there are also studies that have shown in various guises that people do seem to be reducing their cognitive engagement sometimes, whether they’re overusing LLMs or using them in the wrong ways. So I’d love to hear about your research in how we can use AI to augment reasoning as well as critical thinking capabilities.
So I think the end goal of having a machine or models that can do reasoning for us, rather than enhance our reasoning capability—I think that’s the wrong goal, right? And again, if you have the wrong outcome or the wrong measurement, you’re gonna get the wrong thing. So first of all, you need to align the goal in the right direction. That’s why, in my PhD research, I really want to focus on things that ultimately have a positive impact on people. AI models continue to advance, but sometimes humans don’t advance with the AI models, right? So in this case, reasoning is something that’s very, very critical. You can trace it back to ancient Greece. Socrates talked a lot about the importance of questioning and asking the right question, and always using this critical thinking process—not trusting things at face value, right? We have been working on systems—again, the outcome of human-AI interaction can be influenced by both human behavior and AI behavior, right? So we can design AI systems that engage people in critical thinking rather than doing the critical thinking for them. That could be very dangerous, right? These systems right now don’t really have real reasoning capability. They’re doing simulated reasoning. And sometimes they get it right because, on the internet, people have already expressed reasoning and thinking processes. If you repeat that, you can get to the right answer. I mean, the internet is bigger than we imagined. I think that’s what the language models show us—that there’s always something on the internet that allows you to get to the right answer. You have powerful models that can learn those patterns, right? So these models are doing simulated reasoning, which means they don’t have real understanding. Many people have shown that right now—that even though these systems perform very well on benchmarks, in the real world they still fail, especially with things that are very unique and very critical, right?
So in that case, the model, instead of doing the reasoning for us, could make us have better reasoning by teaching us the critical thinking process. And there are many processes for that. Many schools of thought. We have looked at two processes. One of them is in a project called Wearable Reasoner. We made a wearable device—like wearable smart glasses—with an AI agent that runs the process of verifying statements that people listen to, and identifies and flags when a statement has no evidence to support it, right? This is really, really important—especially if you love political speeches, or you love watching advertisements or TikTok. Because right now, social media is filled with statements that sound so convincing but have no evidence whatsoever. So this type of system can help flag that. Because, as humans, we tend to go—or we tend to follow along—if things sound reasonable, sound correct, sound persuasive, we tend to go with them. But just because things sound persuasive or correct doesn’t mean they are correct, right? It can use all sorts of heuristics and other fallacies to get you to fall into that trap. So our system—the AI—can be the system that follows things along and helps us flag that for us. We have shown that when people wear these glasses, when the AI helps them think through the statements they listen to, people tend to agree more with statements that are well-reasoned and have evidence to support them, right? So we can show that we can nudge people to pay more attention to the evidence part of the information they encounter. That’s one project. Another project—we borrowed the technique from Socrates, the ancient Greek philosopher.
We showed that if the AI doesn’t give the answer to people right away but rather asks a question back—it’s kind of counterintuitive, like, well, but people need to arrive at that information for themselves— We showed that when the AI asked questions, it improved people’s ability to discern true information from false information better than the AI giving the correct answer. Which some people might ask: why is that the case? And I think it’s because people already have the ability. Many of us already have the ability to discern information. We are just being distracted by other things. So when the AI asks a question, it can help us focus on things that matter—especially if the AI frames the information in a way that makes us think, right? For example, if there is a statement like: “Video games lead to people becoming more violent,” and the evidence is “a gamer slapped another last week.” For example— If the AI starts to frame that into: “If one gamer slaps another, does that mean that every gamer will become violent after playing video games?” And then you start to realize that, oh, now there’s an overgeneralization. You’re using the example of one to overgeneralize to everyone, right? If the AI frames the statement into a question like this, some people will be able to come up with the answer and discern for themselves. And this not only allows them to reach the correct answer but also strengthens their reasoning process as well, right? It’s kind of like AI creating or scaffolding our critical thinking so that our critical thinking muscle can be strengthened, right? So I think this is a really important area of research. And there is much more research coming out that shows how we can design AI systems that enhance critical thinking rather than doing the critical thinking for us.
Ross: So in a number of other domains, there’s been research which has shown that while in some contexts AI can produce superior cognition or better thinking abilities, when the AI is withdrawn, people revert. So one of the things is not only using AI in the enhancement process, but also post-AI—so that even when you don’t have the AI, you’re still able to retain that enhanced critical thinking. So has that been demonstrated, or is that something you would look at? Pat: Yeah, that’s a really important question. We haven’t looked at a study in that sort of domain—what happens when people stop using the AI, or what happens when the AI is removed from people—but that’s something that is part of the research roadmap that we are doing. At MIT right now, there’s a new research effort called AHA. We want to create aha moments, but AHA also stands for Advancing Humans with AI. And the emphasis is on advancing humans, right? AI is the part that’s supposed to help humans advance. So the focus is on the humans. We have looked at different research areas. We’ve already been doing a lot of work in this, but we are creating this roadmap for what future AI researchers need to focus on—and this is part of it. This is the point that you just mentioned: the idea of looking at what happens when the AI is removed from the equation, or when people no longer have access to the technology. What happens to their cognitive process and their skills? That is a really important part of our roadmap. And so, for the audience out there—this April 10 is when we are launching this AHA research program at MIT. We have a symposium that everyone can watch. It’s going to be streamed online on the MIT Media Lab website. You can go to aha.media.mit.edu and see this symposium. The theme of this symposium is: Can we design AI for human flourishing? And we have great speakers from OpenAI, Microsoft.
We have great thinkers like Geraldine, Tristan Harris, Sherry Turkle, Arianna Huffington, and many amazing people who are joining us to really ask this question. And we hope that this kind of conversation will inspire the larger AI research community and people in the industry to ask the important question of AI for human flourishing—not just AI for AI’s sake, or AI for technological advancement’s sake. Ross: Yeah, I’ve just looked at the agenda and the speakers—this is mind-boggling. Looks like an extraordinary conference, and I’m very much looking forward to seeing the impact that that has. So one of the other things I’m very interested in is this intersection of agents—AI agents, multi-agents—and collective intelligence. And as I often say, and you very much manifested in your work, this is not about multi-agent as a stack of different AI agents around. It’s saying, well, there are human agents, there are AI agents—so how can you pull these together to get a collective intelligence that manifests the best of both? A group of people and AI working together. So I’d love to hear about your directions and research in that space. Pat: Yeah, there is a lot of work that we are doing. And in fact, my PhD advisor, Professor Pattie Maes, is credited as one of the pioneers of software agents. And she is actually receiving the Lifetime Achievement Award from ACM SIGCHI, which is the special interest group on human-computer interaction—this is in a couple of months, actually. So it’s awesome and amazing that she’s being recognized as the pioneer of this field. But the question of agents, I think, is really interesting, because right now, the terminology is very broad. AI is a broad term. AGI is an even broader term. And “agent”—I don’t know what the definition is, right? I mean, some people argue that it’s a type of system that can take action on behalf of the user, so the user doesn’t need to supervise. This means doing things autonomously.
But there are different degrees of autonomy—like things that may require human approval, or things that can just do things on their own. And it can be in the physical world, or the digital world, or in between, right? So the definition of agent is pretty broad. But I think, again, going back to the question of what is the human experience of interacting with this agent—are we losing our agency or the sense of ownership? We have many projects that look into and investigate that. For example, in one project, we design new form factors or new interaction paradigms for interacting with agents. This is a project we worked on with KBTG, which is one of the largest banks in Asia, where we’re trying to help people with financial decisions. If you ask a chatbot, you need to pass back and forth a lot of information—like you need a bank statement, or your savings, or all these accounts. A chatbot is not the right modality. You could have an AI agent that interacts with people in the task—like if you’re planning your financial spending, or investment, or whatever. The AI could be another hand or another pointer on screen. You have your pointer, right? But the AI can be another pointer, and then you can talk to that pointer, and you can feel like there are two agents interacting with one another. And we showed that—even just changing, using the same exact model—but changing the way that information is flowing and visualized to the user, and the way the user can interact with the agent, rather than going from one screen, then going to the chatbot, typing something, and then going back… Now, the agent has access to what the user is doing in real time. And because it’s another pointer, it can point and highlight things that are important at the moment to help steer the user toward things that are critical, or things they should pay attention to, right? We showed that this type of interaction reduces cognitive load and makes people actually enjoy the process even more. 
So I think the idea of an agent is not a system by itself. It’s also the interaction between human and agent—and how can we design it so that it feels like a collaborative, positive collaboration, rather than a delegation that feels like people are losing some agency and autonomy, right? So I think this is a really, really important question that we need to investigate. Yeah? Ross: Well, the thing is, it is a trust—a relationship of trust, essentially. So you and it. So there’s the nature of the interface between the human, who is essentially trusting an agent—an agent to act on their behalf—and they’re able to do things well, that they’re able to represent them well, that they check nothing’s missed. And so this requires a rich—essentially, in a way—emotional interface between the two. I think that’s a key part of that when we move into multi-agent systems, where you have multiple agents, each with their defined roles or capabilities, interacting. This comes, of course—MIT also has a Center for Collective Intelligence. I mean, I’d love to sort of wonder what the intersections between your work and the Center for Collective Intelligence might be. Pat: Well, one thing that I think both of our research groups focus on is the idea of intelligence not as things that already happen in technologies, but things that happen collectively—at the societal level, or at the collective level. I think that should be the ultimate goal of whatever we do, right? You should not just make the machine more intelligent, but how do we enhance the overall intelligence? And I think the question also is: how do we diversify human intelligence as well, right? Because you can be intelligent in a narrow area, but in the real world, problems are very complex. You don’t want everyone to think in the same way. I mean, there are studies showing that on the individual level, AI can make people’s essays better. 
But if you look across different essays written by people assisted by AI, they start to look the same—which means that there is an individual gain, but a collective loss, right? And I think that’s a big problem, right? Because now everyone is thinking in the same way. Well, maybe everyone is a little bit better, but if they’re all the same, then we have no diverse solutions to the bigger problems. So one project we looked into is how to use AI that has the opposite values to a person—to help people think more diversely. If you like something, the AI could like the other thing, and then make the idea something in between. Or, if you are so deep into one thing, the AI could represent the broader type of intelligence that gets you out of your depth, basically. Or, if you are very broad, maybe the AI will go in deep in one direction—so complementing your intelligence in a way. And we have shown that this type of AI system can really drive collaboration in a direction that is very diverse—very different from the user. But at the same time, if you have an AI that is similar to the person—like has the same values, same type of intelligence—it can make them go even deeper. In the sense that if you have a bias toward a certain topic, and the AI also has a bias on the same topic as you, it can make that go even further. So again, it’s really about the interaction—and what type of intelligence do we want our people to interact with? And what are the outcomes that we care about, whether it’s individual or collective? I think these are design choices that need to be studied and evaluated empirically. Yeah. Ross: That’s fantastic. I mean, I have a very deep belief in human uniqueness. I think we’re all far more unique than almost anybody realizes. And society basically makes us look and makes us more the same. So AI is perhaps a far stronger force in sort of pulling us together—society already is that, yeah.
But I mean, to that point of saying, well, I may have a unique way of thinking, or just unique perspectives—and so, I mean, you’re talking about things where we can actually draw out and amplify and augment what it is that is most unique and individual about each of us. Pat: Right, totally. And I mean, I think the former CEO of Google has said at one point: why would a person want to talk to another person when you can talk to an AI that is 100,000 million people at the same time, right? But I feel like that’s a boring thing. Because the AI could take on any direction. It doesn’t have an opinion of its own, right? But because a human is limited to their own life experience up until that point, it gives them a unique perspective, right? When things are everything, everywhere, all at once, it’s like generic and has no perspective of its own. I think each individual person—whether it’s the things they’re living through, things that influence their life, things they grew up with—has that sort of story that made them unique. I think that’s more— to me, that is more interesting, and I think it’s what we should preserve, not try to make everything average out. So for me, this is the thing we should amplify. And again, I talk a lot about human-AI interaction, because I feel like the interaction is the key—not just the model capability, but how it interacts with people. What features, what modality it actually uses to communicate with people. And I think this question of interaction is so interdisciplinary. You need to learn a lot about human behavior, psychology, AI engineering, system design, and all of that, right? So I think that’s the most exciting field to be in. Ross: Yeah, it’s fantastic. So in the years to come, what do you find most exciting about what the Advancing Humans with AI group could do? Pat: Well, I mean, many big ideas or aha moments that we want to create—definitely.
We actually have an exciting project being announced tomorrow with one of the largest AI organizations or companies in the world. So please watch out for that. There’s new, exciting research in that direction, happening at scale. So there’s a big project that’s launching tomorrow, which is March 21. So if this is after that, yeah. I think one thing that we are working on is—we’re collaborating with many organizations, trying to focus and make them not just think about AGI, but think about HGI: Human General Intelligence. You know, what would happen to human general intelligence? We want everyone to flourish—not machines to flourish. We want people to flourish, right? To kind of steer many of the organizations, many of the AI companies, into thinking this way. And in order to do that, we first need a new type of benchmark, right? We have a lot of benchmarks on AI capabilities, but we don’t have any benchmarks on what happens to people after using the AI, right? So we need new benchmarks that can really show if the AI makes people depressed, or empowers and enhances these human qualities—these human experiences. We need to design new ways to measure that, especially when they’re using the AI. Second, we need to create an observatory that allows us to observe how people are evolving—or co-evolving—with AI around the world. Because AI affects different groups of people differently, right? We had a study showing that—this is kind of funny—but people talk about AI bias, that it’s biased toward certain genders, ethnicities, and so on. We did a study showing that, if you remove all the other factors, then just by the name of a person, the AI will have a bias based on the name—or just the last name, right? If you have a famous last name, like Trump or Musk, the AI tends to favor those people more than people who have a generic or regular last name.
And this is kind of crazy to me, because you can get rid of all the demographic information that we say causes bias, and just the name of a person already can lead to that bias. So we know that AI affects people differently. We need to design this type of observatory that we will deploy around the world to measure the impact of AI on people over time—and whether that leads to human flourishing or makes things worse. We don’t have empirical evidence for that right now. People are in two camps: the optimistic camp, saying AI is going to bring prosperity, we don’t need to care, we don’t need to regulate. And another group saying AI is going to be the worst thing—existential crisis, human extinction. We need to regulate and kill and stop. But we don’t have real scientific empirical evidence on humans at scale. So that’s another thing that MIT’s Advancing Humans with AI program is going to do. We’re going to try to establish this observatory so that we can inform people with scientific evidence. And finally, what I think is the most exciting thing: right now, we have so many papers published on AI—more than any human can read, maybe more than any AI can be trained on. Because every minute there’s a new paper being published, right? And people don’t know what is going on. Maybe they know a little bit about their area, or maybe some papers become very famous, but we want to design an Atlas of Human-AI Interaction—a new type of AI for science that allows us to piece together different research papers that come out so that we have a comprehensive view of what is being researched. What are we over-researching right now? We had a preliminary version of this Atlas, and we showed that people right now do a lot of research on trust and explanation—but less so on other aspects, like loneliness. For example, that AI chatbots might make people lonely—very little research has gone into that. So we have this engine that’s always running.
When new papers are published, the knowledge is put into this knowledge tree. So we see what areas are growing and what areas are not, every day. And we see this evolve as the research field evolves. Then I think we will be able to have a better comprehension of when AI leads to human flourishing—or when it doesn’t—and see what is being researched and what is being developed, in real time. So these are the three moonshot ideas that we care about right now at MIT Media Lab. Yeah. Ross Dawson: Fantastic. I love your work—both you and all of your colleagues. This is so important. I’m very grateful for what you’re doing, and thanks so much for sharing your work on The Amplifying Cognition Show. Pat Pataranutaporn: Thank you so much. And I’m glad that you are doing this show to help people think more about this idea of amplifying human cognition. I think that’s an important question and an important challenge for this century and the centuries to come. So thank you for having me. Bye. The post Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82) appeared first on Amplifying Cognition.
“We wanted to see what the effect of AI might be on forecasting accuracy… to our surprise, we find that even when the model gives biased or noisy advice, human forecasters still improve—something we didn’t expect.” – Philipp Schoenegger “I kind of call these Gen AI systems a mirror. Pose it a question, play with scenarios, and see what comes out. It’s like an accelerant for thinking—pushing the boundaries of what’s possible.” – Nikolas Badminton “Future thinking is an everyday practice. It’s about becoming more aware of what’s happening around us, sensing signals, and collectively imagining what’s next.” – Sylvia Gallusser “The question of the future isn’t ‘How creative are you?’ but ‘How are you creative?’ Because what we can imagine, we can create—and we have a responsibility to build a better future.” – Jack Uldrich About Philipp Schoenegger, Nikolas Badminton, Sylvia Gallusser, & Jack Uldrich Philipp Schoenegger is a researcher at the London School of Economics working at the intersection of judgement, decision-making, and applied artificial intelligence. He is also a professional forecaster, working as a forecasting consultant for the Swift Centre as well as a ‘Pro Forecaster’ for Metaculus, providing probabilistic forecasts and detailed rationales for a variety of major organizations. Nikolas Badminton is the Chief Futurist of the Futurist Think Tank. He is a world-renowned futurist speaker, award-winning author, and executive advisor, with clients including Disney, Google, J.P. Morgan, Microsoft, NASA, and many other leading companies. He is the author of Facing Our Futures and host of the Exponential Minds podcast. Sylvia Gallusser is Founder and CEO of Silicon Humanism, a futures thinking and strategic foresight consultancy. Her previous roles include strategic positions at Accenture, Head of Technology at Business France North America, General Manager at French Tech Hub, and Co-founder at big bang factory.
She is also a frequent keynote speaker and author of speculative fiction. Jack Uldrich is a leading futurist, author, and speaker who helps organizations gain the critical foresight they need to create a successful future. His work is based on the principles of unlearning as a strategy to survive and thrive in an era of unparalleled change. He is the author of nine books, including Business As Unusual. Websites: Nikolas Badminton Sylvia Gallusser Jack Uldrich University Profile: Philipp Schoenegger LinkedIn Profile: Philipp Schoenegger Nikolas Badminton Sylvia Gallusser Jack Uldrich What you will learn How AI-augmented predictions enhance human forecasting The surprising impact of biased AI advice on accuracy Why generative AI acts as a mirror for future thinking The role of signal scanning in spotting emerging trends How creativity and imagination shape the future The evolving nature of community in an AI-driven world Why unlearning is key to adapting in a changing era Episode Resources People Philip Tetlock Jonas Salk Books & Publications Superforecasting Facing Our Futures Technical Terms & Concepts AI-augmented predictions Large language models (LLMs) The Ten Commandments of Forecasting The Ten Commandments of Superforecasting Forecasting accuracy Signal scanning Scenario planning Foresight strategy Generative AI Base rate Bias in AI Cognitive augmentation Transcript Ross Dawson: Now, it’s wonderful to see the work you’re doing. Speaking of which, recently, you were the lead author of a paper, AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy. So first of all, perhaps just describe the paper at a high level, and then we can dig into some of the specifics. Philipp Schoenegger: Yeah. So the basic idea of this paper is: how can we improve human forecasting?
Human judgmental forecasting is basically the idea that you can query a bunch of very engaged people—sometimes laypeople—about future events and then aggregate their predictions to arrive at surprisingly accurate estimations of future outcomes. This goes back to work on Superforecasting by Philip Tetlock, and there are a lot of different approaches on how one might go about improving human prediction capabilities. There might be some training—it was called The Ten Commandments of Forecasting —on how you can be a better forecaster. Or there might be some conversations where different forecasters talk to each other and exchange their views. And we wanted to look at how we could improve human forecasting with AI. I think one of the main strengths of the current generation of large language models is the interactive nature of the back and forth—having a highly competent model that people can interact with and query whenever they want, really. They might ask the model, “Please help me on this question. What’s the answer?” They might also just say, “Here’s what I think. Please critique it.” And so this opens up for human forecasters a whole host of different interactions, and we wanted to see what the effect of this might be on forecasting accuracy. Ross: So that’s fascinating. I suppose one of the starting points is thinking about these forecasters. So I suppose, just so people can be clear, human forecasting in complex domains has been superior to AI forecasting because AI alone doesn’t have those capabilities. So humans are better than AI alone, but now the results of the paper suggest that humans augmented by AI are superior to either humans alone or AI alone. Philipp: Based on the papers I have published so far, yes—but depending on when this airs, there might be another paper coming out that adds another twist to this.
But yes, in early work, we find that just a simple GPT-4 forecaster underperforms a human crowd, and on top of that, it underperforms simply forecasting 50% on every question. But in this paper, we gave people the opportunity to interact with a large language model—which in this case was GPT-4 Turbo—and we prompted it specifically to provide superforecasting-style advice. Our main treatment had a prompt that explained The Ten Commandments of Superforecasting and instructed the model to provide estimates that take the base rate into account—so you look at how often things like this have typically happened—quantify uncertainty, and identify branch points in reasoning. But then we also looked at what happens if the large language model doesn’t give good advice. What if it gives what we call biased advice? It might be more noisy advice. So what if the model is told not to think about the base rate—not to think about how often things like this happen—to be overconfident, to basically give very high or very low estimates, and be very confident? And to our surprise, we find that these two approaches improve forecasting accuracy to a similar degree, which is not what we expected. Ross: So I think that this is a really interesting point because, essentially, this is about human cognition. It is human cognition taking very complex domains and coming up with a forecast of a probability of an event or a specific outcome in a defined timeframe. So in this case, the interaction with the AI is a way of enhancing human cognition—they are basically making better sense of the world. And I guess one of the things that is more distinctive about your approach is, as you say, you allowed them to use anything, any way of interacting, as opposed to a specific dynamic. So in this case, it was all human-directed. There was no AI direction. It is AI as a tool, with humans, I suppose, seeking to augment their own ways of thinking about this challenge. Philipp: Yes, that’s right.
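The comparisons Philipp describes—a crowd aggregate beating a single model, and a model underperforming a flat 50% baseline—are typically scored with the Brier score (mean squared error between probability forecasts and binary outcomes; lower is better). A minimal sketch with made-up numbers, purely illustrative and not the paper's data:

```python
# Illustrative sketch of judgmental-forecasting scoring (hypothetical numbers).

def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

# Three forecasters' probabilities for four binary questions.
humans = [
    [0.7, 0.2, 0.9, 0.4],
    [0.6, 0.3, 0.8, 0.5],
    [0.8, 0.1, 0.7, 0.3],
]
crowd = [median(q) for q in zip(*humans)]  # per-question median aggregate
model = [0.9, 0.5, 0.5, 0.8]               # a single, poorly calibrated forecast
baseline = [0.5] * 4                        # "just say 50%" baseline
outcomes = [1, 0, 1, 0]                     # what actually happened

print(f"crowd: {brier(crowd, outcomes):.4f}")      # 0.0825
print(f"model: {brier(model, outcomes):.4f}")      # 0.2875
print(f"50/50: {brier(baseline, outcomes):.4f}")   # 0.2500
```

In this toy setup the crowd median scores best and the overconfident single forecast scores worse than the 50% baseline, mirroring the ordering Philipp describes for the early GPT-4 results.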
And, of course, being human, the vast majority—or at least a sizable share—of participants simply asked the model the question, right? They just said, “Well, here’s the question: what would be the closing value for the Dow Jones at the end of December?”—they just copied it in and saw what the model did. But many others did not; they had their own view. They typed in, “Well, I think that’s the answer. What do you think?” or “Please critique this.” And I think these kinds of interactions are especially promising going forward because there’s also this whole literature on the different impact of AI augmentation on differently skilled participants, differently skilled workers. In my understanding, the literature is currently mixed, with studies finding different results. We didn’t find a specific effect here, but other work finds that when the model just gives the answer, low performers typically tend to do better because, you know, they take a lot from the answer, and the model is probably better than them. But if the model is instructed to give guidance only, low performers tend not to be able to pick up on the guidance and follow it. But I think there is still a lot of interesting work to be done before we can pin this down, because there’s so much diversity in which models are being used. Nikolas Badminton: I do a lot of research—with every keynote now, and with a ton of clients. You know, on the client side, I go into the industry. I call people in the industry. I read a ton of academic research behind the industry—stuff on the edge academically, as well as sort of what’s in the mainstream and what’s being done. And also, you know, those sort of edge players. When I start to move forward and start to create some new thoughts, then I can sort of start to play around with scenarios. And this is what’s become really interesting to me. I know that you talk a lot about the augmentation of capability through the use of things like generative AI and the like.
This has been something that I’ve been playing with quite a lot—not only the generation of textual content but also exploration from a visual perspective, as a helping mechanism to take us in whole new directions as well. I mean, in my work, it’s like signals to trends, to scenarios, and to stories. I’ve really been trying to push the boundaries of what scenario exploration is with platforms like ChatGPT, Claude, and Gemini, and starting to see what we can do to look at positive and dystopian scenarios, which was obviously part of the work that I was doing in Facing Our Futures. That book was completed over the last couple of years with zero Gen AI help. And actually, very little Gen AI help is going to be in my next book because, contractually, you’re not allowed to do this. So what we have—what we can do—is start to explore the mirror. I kind of call these Gen AI systems a mirror. Pose it a question. Pose it some scenarios. Try to work out and see what comes out of it. And generally, what I find is maybe I’m talking about energy and ecological ecosystems, and I’ll pose a question: “What if renewable energy is pushed to the side, green initiatives are canceled, and we go full tilt into a maximalist fossil fuel society?” In preparation for this chat, I went into that to delve even deeper into the mechanisms behind it. And it’s sort of interesting—you get this mirror of, like, “Oh yeah, I kind of expected those answers to come from that.” Okay, let’s push that out to 2050. Yeah, it’s kind of an accelerant. It’s kind of interesting when you start to think about the reference points of all these systems and where they’re getting it from. Whereas something like Claude and ChatGPT actually feel like they’ve been drinking from the same fountain, Gemini just seems to be a little bit freaky. So it’s super interesting. As I went into it, it was poetic and dystopic.
For example, I asked this: “Describe a world in 2100 where environmentally friendly, non-carbon fuel solutions are discarded.” And I went on and on in a prompt, very directional. The others would be like, “Here’s a list of things that happen”—very cold. I didn’t ask it to write in a particular style of a publication or anything like that. And then Gemini just came out with this. And this is fabulous: “The year is 2100. The gamble on renewables failed spectacularly. Big Oil, whispering sweet nothings of energy independence and economic growth, won the hearts and minds of a desperate world. The result? A planet drowning in its own fumes.” And I kind of love that poetic nature. Gemini, I think, is sort of the unsung hero a little bit, right? In the scheme of things, suddenly, we’re getting something interesting that starts to talk about the geopolitical chessboard, tech on steroids, violence, and exodus. And it’s like—whoa. Ross: A lot of it, I think, is about sensitizing ourselves to signals so that we are more likely to notice the things that are relevant or important or point to things that might change in the future. And that’s what futurists do. But how can we, I suppose, convey this as a capability or skill that others can learn and develop—that will enable them to see and sense signals that, you know, point to change? Sylvia Gallusser: It’s a very interesting thing with signals. It’s like raw material. It’s something that anybody can apprehend, and that’s what makes future thinking something that really anybody can work with and develop as a personal skill. Because it’s about becoming more aware of what is going on around us. And that’s why I think it really works in tandem with the first step, which is about always knowing more, always understanding more about the long-term landscape, and then being more aware of the variations. And this can go from analyzing behaviors of people around you—like, what changed during the pandemic?
Were people more polite, more civilized? Did we see new behaviors, new words? Studying popular culture is also a very interesting aspect, because if you see what is going on in the media—TV series, movies, books—you also sense a lot of what people are attracted to. New changes may be starting when there’s this kind of enthusiasm for a new book; sometimes, that means something. So how can you get more aware of this? It’s really an everyday practice, and I like to say two things: it’s a personal practice, and it’s a collective practice. It’s something you can really train yourself to do all the time—just reading the news, being aware of what is around you, just having your sensors open to the world around. And once again, it’s all the senses. It’s about listening. It’s about observing people around you. It’s a different taste in the air. It’s really multi-sensory. Why I say it’s also collective is that, you know, the futurist community is very active. It’s not that big; it’s small. But it’s very interconnected. And there are a lot of platforms for exchanging around signals. They sometimes call it signal swarming or signal scanning —you have different names for it—but the idea is that futurists love to exchange around that topic, to meet and say, “Hey, this week, what did you notice?” And once again, this STEEPLE aspect is interesting because when you’re on your own, coming maybe from one industry or one profession, maybe you have a kind of bias toward one area or another. Like, I’m coming from technology, so at first, I would really focus on everything around new technology and so on. But I guess someone who’s a psychologist might have a different opinion. An economist might see things differently. So coming together as a collective, as a community, really helps enhance and amplify the way you connect with those signals around you.
And finally, I would say, on top of it being collective, what’s interesting when you want to bring a group, a population, a company, or a corporation to work around future thinking is to build the capability to do this. It’s very simple. It can start with just an Excel file. It doesn’t need to be anything fancy. Just bring people to come and see what signals are, and get them to understand the texture of them—what do they look like? What do they sound like? And they start to log their own signals. Then you already have a big base of signals of change in a corporation—a great first way to enter the field of foresight. Ross: So one of the other things you were talking about was putting yourself in the scenario. And I suppose part of the practice is to create a useful scenario that then helps you think about new things or envisage things that help shape your current actions. But as individuals, what are ways in which we can, I suppose, conceive of and bring ourselves—or enter into—I think you used the word meditation there. And, you know, I’d love to hear about that. What is that practice? How do we put ourselves, immerse ourselves, in these useful future scenarios? Sylvia: Absolutely. Once again, you know, it can be very personal and intimate, or it can be something more collective. So I try to address both aspects because I think they can work really well together. You can develop your own future-thinking practice as an everyday discipline, let’s say. A few years ago, I wrote an article about mental stretching exercises you can practice to work on that. It can go from dealing with different perspectives, trying to develop empathy, putting yourself in the shoes of someone else, and imagining a story. You know what? Actually, learning new languages and learning about new cultures is also a great way to practice this perspective change and to see things in different ways.
Reading, listening, and learning about fiction, for me, has been an immense way to stretch myself to see futures that are possible and not necessarily dystopian. That’s why I love to talk about science fiction, because we tend to see science fiction as something very dystopian and very scary, and not necessarily a good way to start for people who are scared about the future. But I would say there is more and more interesting science fiction now that creates future worlds that are not necessarily negative. It can be really engaging and develop a plot where the problems sit in the narration, but that doesn’t mean the negativity has to be in the world-building. A story, to be interesting, always needs something of a dilemma, some complexity, a knot to it. But those can be interpersonal stories, not necessarily built into the world around them. So I think science fiction and future fiction really offer us ways to think about the future. As for the way we do it collectively with groups—I was talking about those meditative exercises—a really great way we’ve done it in the past was around the future of the home. Because during the pandemic, the home evolved dramatically, and not just the structure but also the way we reorganized life within it. And I like to talk about the structures and the intangibles that happen in the home. So what we would do, for example, in terms of envisioning meditations with a few groups, was really: you wake up in the future home you live in—maybe 10 years from now, 20 years from now. How do you wake up? What is the first trigger? What happens? Is it a wake-up call? Is it natural lighting? Do you still live in a bedroom? Like, we really start just—what do you smell? What do you think? What do you feel? How does it sound? So five-senses meditation is really effective. Changing perspective, as I was saying, and so on.
So these are different tools we would use to bring people to get into that state of the future and then go throughout a day in the life. Like, okay, what do you do from your bed? Then do you go to breakfast? Do you go to your bathroom? How does the bathroom look? Is it interactive? Do you live alone? Do you live with other people in a community? And just—it starts asking so many questions that people naturally get their minds to wander around the future home. And that was a really great tool to get a sense of that new type of space that could exist—and of how they would like that home to be. Because, once again, it is also about developing what would be our preferable futures, our favorite futures, and building them. Jack Uldrich: And I’ve spent a lot of time as a futurist with the concept of unlearning. It’s not that people in organizations can’t understand that the future is going to change. What we have a really difficult time doing is letting go of the way we’ve always done things. And so I think when we’re talking about the future of work, to me, work does give most humans this intrinsic value, and they feel as though they’re an integral part of a community. And so I think there will always be this innate need to be doing something—not just for yourself but on behalf of something bigger. And when I say bigger, typically I’m thinking of community. You want to do something for, of course, yourself and your immediate family, but then your neighborhood and your community. And so as I think about the long-term future, one of the things I’m really excited about is—first, I’m going to go dark, but I think there’s going to be a bright side to this. One of the things that I think is happening right now that’s not getting enough attention, as a futurist, is that the internet is breaking—in the sense that there’s so much misinformation and disinformation out there that we can no longer trust our eyes and our ears in this world of artificial intelligence.
And I think that’s going to become increasingly murkier, and it’s going to be really destabilizing to a lot of people and organizations. So what’s the one thing we still can trust? It’s the small groups that are right in front of us. And so I think one of the things we’re going to see in a future of AI is an increased importance of small communities. There’s some really compelling science that says the most cohesive units are about 150 people in size. And this is true in the military, educational units, and other things like that. And I think that we might start seeing that, but it’s going to look different than in the past. Like, I’m not suggesting that we’re all going to look like Amish communities here in the U.S., where we’re saying no to technology and doing things the old-fashioned way. But the new communities of the future are—and now I’m just thinking out loud—something I want to spend more time thinking about. Like, what will that look like? What will the roles be, and what skills will be needed, in this new future? And again, I don’t have any answers right now, just more questions and thinking. But it’s one of these scenarios I could see playing out that might catch a lot of people by surprise. Ross: Yeah, very much so. I mean, we are a community-based species, and the nature of community has changed from what it was. And I think, you know, thinking about the future of humanity, a future of community and how that evolves is actually a very useful frame to round out on. Jack, what advice can you share with our listeners on how to think about the future? I suppose you did a little at the beginning. But, I mean, do you have any concluding thoughts on how people can usefully think about the extraordinary change in the world today? Jack: Yeah, the first thing I would say is this—and I was just doing a short video on this. Ever since we’ve been in grade school, most of us have been asked the question, or graded on the question, of “How creative are you?”
And if you ask most people, like on a scale of one to ten, to just answer that question, they’ll do it. But you know what I always tell people? That’s a bad question. The question of the future isn’t “How creative are you?” It is “How are you creative?” Each and every one of us is creative in our own way. And as a futurist, I take that really seriously. We do have the ability to create our own future, but we first have to understand that we are creative, and most people don’t think of themselves that way. So how do you nurture creativity? And this is where I’m trying to spend a lot of my time as a futurist. This is where the ideas of unlearning and humility come in. But I would say it starts with curiosity and questions, and that’s why I like getting out under the night stars and just being reminded of how little I actually know. But then, it’s in that space of curiosity that imagination begins to flow. And there’s this wonderful quote from Einstein—most people would say he was one of the more brilliant minds of the 20th century. He said, “Imagination is more important than knowledge.” Like, why did Einstein, this great scientist, say that? And I think—and I don’t have proof of this—that everything around us today was first imagined into existence. It was imagined into existence by the human mind. The very first tool. The very first farm implement. And then farming as an industry, and then civilizations and cities and commerce and democracy and communism. They were all imagined first into existence. And so, what we can imagine, we can, in fact, create. And that’s why I’m still optimistic as a futurist—this idea that we’re not passive agents, that we can create a future. And I just like to remind people that our future can, in fact, be incredibly fucking bright. The idea that we can have cleaner water and sustainable energy and affordable housing and better education and preventive health care. We can address inequality. We can address these issues.
People just have to be reminded of this. And so, at the end of the day, that’s why I get fired up, and I don’t think I’ll ever sort of lose the title of futurist, because until my last breath, I’m going to be, hopefully, reminding people that we can create—and we have a responsibility to create—a better future. Let me just end on this. I think the best question we can ask ourselves right now comes from Jonas Salk, the inventor of the polio vaccine. He said, “Are we being good ancestors?” And I think the answer right now is, we’re not. But we still have the ability to be better ancestors. And maybe if I could just say one last thing—I also spend a lot of time helping people just embrace ambiguity and paradox. And here’s the truth: the world is getting worse. In terms of climate change, the rise of authoritarianism, inequality—you could say things are going bad. But on the other hand, you could say the world is getting demonstrably better. It has never been a better time to be alive as a human. The likelihood that you’re going to die of starvation or war, or not be able to read, has never been lower. So the world is also getting better. But the operative question becomes: How can we make the world even better? And that’s where we have to spend our time. And that’s why we need creativity, curiosity, and imagination—to create that better future. The post Amplifying Foresight Compilation (AC Ep81) appeared first on Amplifying Cognition.
“AI can make the process of sensing for signals much faster and much more efficient. You can think of it as a supplement to our brain. It can sort through massive amounts of data, track the latest developments, and flash alerts when something important emerges.” – Rita McGrath “What I found surprising in our exercises was how disruptive AI was. At first, I thought they would hate it, but they actually liked it. It made them stop and think because it forced them to break out of their usual patterns and consider ideas they wouldn’t have consciously introduced into the discussion.” – Christian Stadler “AI can accelerate the foresight process. It can help generate diverse perspectives, identify second-degree impacts, and uncover biases we might not notice. Of course, human critical thinking is still essential—we shouldn’t accept AI outputs as absolute truth, but rather use them as a starting point.” – Valentina Contini “One key area where AI excels is handling cognitive complexity. Humans struggle to hold thousands of variables in their heads, but AI can process vast amounts of interconnected data. The challenge is designing interfaces that allow humans to interact with this complexity in an intuitive way.” – Anthea Roberts About Rita McGrath, Christian Stadler, Valentina Contini, & Anthea Roberts Rita McGrath is one of the world’s top experts on strategy and innovation. She is consistently ranked among the top 10 management thinkers globally and has earned the #1 award for strategy by Thinkers 50. She is Professor of Strategy at Columbia Business School, and Founder of the Rita McGrath Group and Valize LLC. Her books include The End of Competitive Advantage and Seeing Around Corners . Christian Stadler is a professor of strategic management at Warwick Business School. He is author of Open Strategy, which was named as a Best Business Book by Financial Times and Strategy + Business and has been translated into 11 languages. 
His work has been featured in Harvard Business Review, New York Times, Wall Street Journal, CNN, BBC, and Al Jazeera, among others. Valentina Contini is an innovation strategist for a global IT services firm, a technofuturist, and speaker. She has a background in engineering, innovation design, AI-powered foresight, and biohacking. Her previous work includes founding the Innovation Lab at Porsche. Anthea Roberts is Professor at the School of Regulation and Global Governance at the Australian National University (ANU) and a Visiting Professor at Harvard Law School. She is also the Founder, Director and CEO of Dragonfly Thinking. Her latest book, Six Faces of Globalization, was selected as one of the Best Books of 2021 by The Financial Times and Fortune Magazine. She has won numerous prestigious awards and has been named “the world’s leading international law scholar” by the League of Scholars. Websites: Rita McGrath Christian Stadler Valentina Contini Anthea Roberts University Profile: Rita McGrath Christian Stadler Anthea Roberts LinkedIn Profile: Rita McGrath Christian Stadler Valentina Contini Anthea Roberts What you will learn Bridging human cognition and AI for better decision-making How AI disrupts traditional boardroom dynamics Enhancing foresight with AI-driven scenario planning The role of AI in sense-making and strategic insights Why AI-generated variety outperforms human creativity Managing cognitive complexity with AI augmentation The evolving partnership between humans and AI in strategy Episode Resources Companies & Organizations Wrigley ChatGPT OpenAI Technical Terms & AI-Related Artificial Intelligence (AI) Large Language Models (LLMs) Generative AI Cognitive Complexity Metacognition Strategic Foresight Decision-Making Frameworks Transcript Ross Dawson: One of the key themes is strategy. How do we do strategy in a world that is accelerating, with all these overlay themes?
There are, as you say, 10x shifts in many dimensions of work. This brings us to human capabilities. Humans have limited, finite cognition, even though we have extraordinary capabilities far transcending anything else. Now, we have AI to augment, support, or complement us. I’d like to dive in deep, but just to start—what is your framing around human capabilities in strategic thinking today, and how are they complemented by AI? Rita McGrath: Sure. Well, as I mentioned, human brains think in linear terms. We think immediately in terms of getting from here to there to avoid a predator. Back in the day when we were evolving, that worked pretty well. But we don’t do very well with exponential systems because they look small, and they look small, and they stay small—until suddenly they don’t. It’s the whole “gradually, then suddenly” idea. What I argue is that you need to supplement what your brain can manage on its own. This is where I think AI comes in. What I’ve set up with companies is a series of what I call “time zero events,” which signal that a future inflection point has arrived. We don’t know exactly when, but we work backward and ask, “Before that happens, what would have to be the preceding situations?” AI can make that process of sensing for signals much faster and much more efficient. You can think of it as a supplement to our brain. It can sort through massive amounts of data, track the latest developments, and flash alerts when something important emerges. This allows us to blend human imagination—something AI is not very good at—with AI’s ability to crunch massive amounts of data. That’s where I think AI will have a lot of power in strategy. Ross: One of the core themes of my work, and I think yours as well, is sense-making. We have vast amounts of information out there. As strategists, we need to take in that information, make sense of it, and make effective decisions as a result.
How can AI support our ability to comprehend how the world is working so that we can make better decisions? Rita: AI is really good at taking large amounts of information and breaking it into digestible chunks. Humanity has limits to how much information it can process. There’s actually a whole line of theory on this, which states that search, in the traditional sense, is not costless. Theoretically, a rational human being would entertain every possible combination of possibilities, create decision criteria, and then select the best option. But humans have cognitive limits, whereas machines have far fewer. Properly instructed, AI can present us with different pictures of the world. Another thing humans aren’t very good at is generating variety. Think of those old creativity exercises where someone asks you to come up with as many uses as possible for a paperclip. People start with obvious answers: “It can hold papers together,” “It can mark your place in a book,” “It can unlock things.” But after 50 or 60 uses, they run out of steam. Many ideas are anchored on the first few. Machines, on the other hand, don’t have those biases. They might generate 300 possible uses—sure, 200 of them might be terrible ideas, but they would be more divergent than what humans come up with. That’s where AI helps in sense-making. It shows us possibilities we wouldn’t have seen otherwise. Ross: Now, let’s dig into how AI can be used in the boardroom. One way that resonates with board directors is “red teaming,” where you have a decision and ask AI to generate counterarguments. AI can surface concerns that might not come up in human discussions. What other applications have you found valuable for AI in the boardroom? Christian Stadler: What I found surprising in our exercises was how disruptive AI was. Imagine a group of people who have worked together for a long time. Their discussions are smooth because they know how each other thinks. Then, I introduce ChatGPT into the meeting. 
I’d tell them, “Read these five pages,” and suddenly, they’re confronted with a long list of new insights. It disrupted their usual flow. At first, I thought they would hate it, but they actually liked it. It made them stop and think. The disruption forced them to break out of their usual patterns and consider ideas they wouldn’t have consciously introduced into the discussion. Ross: What are the ways in which you are seeing or applying tools to augment the foresight process? Valentina Contini: I started looking into this about two years ago, when GPT-3.5 was released. One of the things that frustrated me was that generating scenarios for companies took too long. You needed to involve multiple experts and stakeholders, which meant it only happened every three to five years. But in today’s rapidly changing world, that’s not enough. AI can accelerate the foresight process. It can help generate diverse perspectives, identify second-degree impacts, and uncover biases we might not notice. It’s especially useful in tools like a futures wheel, where many perspectives need to be mapped. AI can bring in unexpected viewpoints based on large-scale data analysis. Of course, human critical thinking is still essential—we shouldn’t accept AI outputs as absolute truth, but rather use them as a starting point. Ross: Human-AI collaboration involves complex problems where humans retain the highest-level context and decision-making ability, while AI complements our cognition. What does that interface look like? Anthea Roberts: This is one of the most fascinating questions of our time. Both humans and AI have different strengths, and the way we interact with AI is evolving. For example, when working with large language models, humans shift from being primary generators of content to being managers and editors. We direct how the AI works and refine its outputs. This requires metacognition—not just thinking about our own thinking, but also understanding how the AI thinks. 
One key area where AI excels is handling cognitive complexity. Humans struggle to hold thousands of variables in their heads, but AI can process vast amounts of interconnected data. The challenge is designing interfaces that allow humans to interact with this complexity in an intuitive way. A simple chat interface isn’t enough—we need tools that allow for narrowing focus, cognitive offloading, and iterative collaboration. Another challenge is balancing AI’s overwhelming amount of information with human discernment. Many people feel deluged by AI-generated content, making it crucial to develop skills for filtering and applying insights effectively. Ross: So AI not only provides information but also changes the way we think and interact with complexity? Anthea: Exactly. Over the last year and a half, I’ve realized that much of my work is metacognitive. I don’t tell people what to think, but I help them understand how they think. The same applies to AI—we need to recognize its biases, workflows, and limitations while leveraging its strengths. One of the biggest challenges will be developing interdisciplinary AI agents that can collaborate across different fields of expertise. AI will evolve into an indispensable partner in decision-making, but we need to ensure that humans remain in control of the broader context and ethical considerations. How we navigate this balance will define the future of AI-human collaboration. The post AI for Strategy Compilation (AC Ep80) appeared first on Amplifying Cognition .…
“Collective intelligence is the ability of a group to solve a wide range of problems, and it’s something that also seems to be a stable collective ability.” – Anita Williams Woolley “When you get a response from a language model, it’s a bit like a response from a crowd of people. It’s shaped by the collective judgments of countless individuals.” – Jason Burton “Rather than just artificial general intelligence (AGI), I prefer the term augmented collective intelligence (ACI), where we design processes that maximize the synergy between humans and AI.” – Gianni Giacomelli “We developed Conversational Swarm Intelligence to scale deliberative processes while maintaining the benefits of small group discussions.” – Louis Rosenberg About Anita Williams Woolley, Jason Burton, Gianni Giacomelli, & Louis Rosenberg Anita Williams Woolley is the Associate Dean of Research and Professor of Organizational Behavior at Carnegie Mellon University’s Tepper School of Business. She received her doctorate from Harvard University, with subsequent research including seminal work on collective intelligence in teams, first published in Science. Her current work focuses on collective intelligence in human-computer collaboration, with projects funded by DARPA and the NSF, focusing on how AI enhances synchronous and asynchronous collaboration in distributed teams. Jason Burton is an assistant professor at Copenhagen Business School and an Alexander von Humboldt Research fellow at the Max Planck Institute for Human Development. His research applies computational methods to studying human behavior in a digital society, including reasoning in online information environments and collective intelligence. Gianni Giacomelli is the Founder of Supermind.Design and Head of Design Innovation at MIT’s Center for Collective Intelligence. He previously held a range of leadership roles in major organizations, most recently as Chief Innovation Officer at global professional services firm Genpact. 
He has written extensively for media and in scientific journals and is a frequent conference speaker. Louis Rosenberg is CEO and Chief Scientist of Unanimous AI, which amplifies the intelligence of networked human groups. He earned his PhD from Stanford and has been awarded over 300 patents for virtual reality, augmented reality, and artificial intelligence technologies. He has founded a number of successful companies including Unanimous AI, Immersion Corporation, Microscribe, and Outland Research. His new book Our Next Reality on the AI-powered Metaverse is out in March 2024. Websites: Gianni Giacomelli Louis Rosenberg University Profile: Anita Williams Woolley Jason Burton LinkedIn Profile: Anita Williams Woolley Jason Burton Gianni Giacomelli Louis Rosenberg What you will learn Understanding the power of collective intelligence How teams think smarter than individuals The role of AI in amplifying human collaboration Memory, attention, and reasoning in group decision-making Why large language models reflect collective intelligence Designing synergy between humans and AI Scaling conversations with conversational swarm intelligence Episode Resources People Thomas Malone Steve Jobs Concepts & Frameworks Transactive Memory Systems Reinforcement Learning from Human Feedback (RLHF) Conversational Swarm Intelligence Augmented Collective Intelligence (ACI) Artificial General Intelligence (AGI) Technology & AI Terms Large Language Models (LLMs) Machine Learning Collective Intelligence Artificial Intelligence (AI) Cognitive Systems Transcript Anita Williams Woolley: Individual intelligence is a concept most people are familiar with. When we’re talking about general human intelligence, it refers to a general underlying ability for people to perform across many domains. Empirically, it has been shown that measures of individual intelligence predict a person’s performance over time. It is a relatively stable attribute.
For a long time, when we thought about intelligence in teams, we considered it in terms of the total intelligence of the individual members combined—the aggregate intelligence. However, in our work, we challenged that notion by conducting studies that showed some attributes of the collective—the way individuals coordinated their inputs, worked together, and amplified each other’s contributions—were not directly predictable from simply knowing the intelligence of the individual members. Collective intelligence is the ability of a group to solve a wide range of problems. It also appears to be a stable collective ability. Of course, in teams and groups, you can change individual members, and other factors may alter collective intelligence more readily than individual intelligence. However, we have observed that it remains fairly stable over time, enabling greater capability. In some cases, collective intelligence can be high or low. When a group has high collective intelligence, it is more capable of solving complex problems. I believe you also asked about artificial intelligence, right? When computer scientists work on ways to endow a machine with intelligence, they essentially provide it with the ability to reason, take in information, perceive things, identify goals and priorities, adapt, and change based on the information it receives. Humans do this quite naturally, so we don’t really think about it. Without artificial intelligence, a machine only does what it is programmed to do and nothing more. It can still perform many tasks that humans cannot, particularly computational ones. However, with artificial intelligence, a computer can make decisions and draw conclusions that even its own programmers may not fully understand the basis of. That is where things get really interesting. Ross Dawson: We’ll probably come back to that. Here at Amplifying Cognition , we focus on understanding the nature of cognition. 
One fascinating area of your work examines memory, attention, and reasoning as fundamental elements of cognition—not just on an individual level, but as collective memory, collective attention, and collective reasoning. I’d love to understand: What does this look like? How do collective memory, collective attention, and collective reasoning play into aggregate cognition? Anita: That’s an important question. Just as we can intervene to improve collective intelligence, we can also intervene to improve collective cognition. Memory, attention, and reasoning are three essential functions that any intelligent system—whether human, computer, or a human-computer collaboration—needs to perform. When we talk about these in collectives, we are often considering a superset of humans and human-computer collaborations. Research on collective cognition has been running parallel to studies on collective intelligence for a couple of decades. The longest-standing area of research in this field is on collective memory. A specific construct within this area is transactive memory systems. Some of my colleagues at Carnegie Mellon, including Linda Argote, have conducted significant research in this space. The idea is that a strong collective memory—through a well-constructed transactive memory system—allows a group to manage and use far more information than they could individually. Over time, individuals within a group may specialize in remembering different information. The group then develops cues to determine who is responsible for retaining which information, reducing redundancy while maximizing collective recall. As the system forms, the total capacity of information the group can manage grows considerably. Similarly, with transactive attention, we consider the total attentional capacity of a group working on a problem. 
Coordination is crucial—knowing where each person’s focus is, when focus should be synchronized, when attention should be divided across tasks, and how to avoid redundancies or gaps. Effective transactive attention allows groups to adapt as situations change. Collective reasoning is another fascinating area with a significant body of research. However, much of this research has been conducted in separate academic pockets. Our work aims to integrate these various threads to deepen our understanding of how collective reasoning functions. At its foundation, collective reasoning involves goal setting. A reasoning system must identify the gap between a desired state and the current state, then conceptualize what needs to be done to close that gap. A major challenge in collective reasoning is establishing a shared understanding of the group’s objectives and priorities. If members are not aligned on goals, they may decide that their time is better spent elsewhere. Thus, goal-setting and alignment are foundational to collective reasoning, ensuring that members remain engaged and motivated over time. Ross: One of the interesting insights from your paper is that large language models (LLMs) themselves are an expression of collective intelligence. I don’t think that’s something everyone fully realizes. How does that work? In what way are LLMs a form of collective intelligence? Jason Burton: Sure. The most obvious way to think about it is that LLMs are machine learning systems trained on massive amounts of text. Companies developing these language models source their text from the internet—scraping the open web, which contains natural language encapsulating the collective knowledge of countless individuals. Training a machine learning system to predict text based on this vast pool of collective knowledge is essentially a distilled form of crowdsourcing. When you query a language model, you aren’t getting a direct answer from a traditional relational database. 
Instead, you receive a response that reflects the most common patterns of answers given by people in the past. Beyond this, language models undergo further refinement through reinforcement learning from human feedback (RLHF). The model presents multiple response options, and humans select the best one. Over time, the system learns human preferences, meaning that every response is shaped by the collective judgments of numerous individuals. In this way, querying a language model is like consulting a crowd of people who have collectively shaped the model’s responses. Gianni Giacomelli: I view this through the lens of augmentation—augmenting collective intelligence by designing organizational structures that combine human and machine capabilities in synergy. Instead of thinking of AI as just a tool or humans as just sources of data, we need to look at how to structure processes that allow large groups of people and machines to collaborate effectively. In 2023, many became engrossed with AI itself, particularly generative AI, which in itself is an exercise in collective intelligence. These systems were trained on human-generated knowledge. But looking at AI in isolation limits our understanding. Rather than just artificial general intelligence (AGI), I prefer the term augmented collective intelligence (ACI), where we design processes that maximize the synergy between humans and AI. Louis Rosenberg: There are two well-known principles of human behavior: one is collective intelligence—the idea that groups can be smarter than individuals if their input is harnessed effectively. The other is conversational deliberation—where groups generate ideas, debate, surface insights, and solve problems through discussion. However, scaling these processes is difficult. If you put 500 people in a chat room, it becomes chaotic. Research shows that the ideal conversation size is five to seven people. 
To address this, we developed Conversational Swarm Intelligence , using AI agents in small human groups to facilitate discussions and relay key insights across overlapping subgroups. This allows us to scale deliberative processes while maintaining the benefits of small group discussions. The post Collective Intelligence Compilation (AC Ep79) appeared first on Amplifying Cognition .…
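The preference-learning step Jason Burton describes, where humans repeatedly pick the better of two model responses and the system gradually learns those collective judgments, can be sketched as a toy Bradley-Terry preference model. This is an illustrative sketch, not anything from the episode: the responses, preference pairs, and learning rate below are invented for demonstration.

```python
import math

# Toy sketch of RLHF-style preference learning: humans pick the better
# of two responses, and a scalar "reward" per response is fit so that
# preferred responses score higher (a Bradley-Terry model, fit by
# gradient ascent on the log-likelihood of the observed choices).

responses = ["A", "B", "C"]
# Each (winner, loser) pair is one human judgment.
preferences = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]

reward = {r: 0.0 for r in responses}
lr = 0.1
for _ in range(500):
    for win, lose in preferences:
        # Bradley-Terry: probability the human prefers `win` over `lose`.
        p = 1.0 / (1.0 + math.exp(reward[lose] - reward[win]))
        # Nudge rewards toward the observed preference.
        reward[win] += lr * (1.0 - p)
        reward[lose] -= lr * (1.0 - p)

ranking = sorted(responses, key=reward.get, reverse=True)
print(ranking)  # → ['A', 'B', 'C']
```

In production RLHF the scalar rewards come from a learned reward network over response text rather than a lookup table, and the fitted reward model then guides policy optimization; the sketch only shows the core idea of turning collective pairwise judgments into a ranking.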
“I’m cautiously optimistic because never before has technology been as accessible as it is now—being able to interact with machines in a way that feels so natural to us, rather than in ones and zeros or more technical ways. AI shouldn’t replace what exists but augment and enhance our creativity, helping us tap into what makes us uniquely human.” – Helen Lee Kupp About Helen Lee Kupp Helen Lee Kupp is co-founder and CEO of Women Defining AI, a community of female leaders applying and driving AI. She was previously leader of strategy and analytics at Slack and co-founder of its Future Forum. She is co-author of the best-selling book “How the Future Works: Leading Flexible Teams to do the Best Work of Their Lives”. Website: Women Defining AI LinkedIn Profile: Helen Lee Kupp What you will learn Redefining collaboration in the AI era Unlocking human potential through technology Why flexible work matters more than ever The power of diverse perspectives in AI Balancing optimism and caution in AI adoption How leaders can foster innovation from the ground up Women defining AI and shaping the future Episode Resources People Gregory Bateson Nichole Sterling (co-founder of Women Defining AI) Companies & Organizations Women Defining AI Technical Terms & Concepts AI (Artificial Intelligence) Generative AI Large Language Model (LLM) Non-deterministic AI policy AI adoption Machine learning (ML) Human-in-the-loop Transcript Ross Dawson: Helen, it is a delight to have you on the show. Helen Lee Kupp: It’s good to be here. I love how we first started talking over an AI research paper. It was very random but awesome. Ross: Well, that’s pushing the edges, trying to find what’s out there and see what comes on the other side. AI is emerging, and we’re sitting alongside each other. How are you feeling about today and how humans and AI are coming together? Helen: I feel cautiously optimistic, and part of that is because I’ve been in tech for so long. 
Prior to getting much deeper into AI, I was working on flexible work and research around how to rethink and redesign how we, as humans, collaborate in a way that is more personalized, more customized, and helps more people bring their best selves to work and do their best work. It was serendipitous that around the same time, there was an increase in AI innovation. Now, we had technology to pair with the equation of redesigning work. COVID forced us to rethink work, not just from a people and process perspective but alongside rapid technological change. I’m cautiously optimistic because never before has technology been as accessible as it is now. We can interact with machines in a way that feels so natural rather than in ones and zeros or technical ways. Ross: I’m very aligned with that. One of the things you said was “bring your best self to work.” I think of it as human potential. If we’re creating a future of work, we have potential futures that are not so great and others that are very positive, where people express more of who they are and their capabilities. How can we create organizations like that? Helen: It starts with recognizing that everyone has different preferences and work styles. Organizations, teams, and leaders need to meet people where they are rather than force them into rigid structures that worked in the past. I often share this story—I’m deeply introverted. Despite jumping onto this podcast with you, I have always been an introvert. Navigating an extroverted world takes extra energy. In traditional office and meeting environments, I had to work harder to show up. However, when I had more diverse formats to interact with my team and leadership, it unlocked something for me. Instead of pretending to be the loudest in the room, I could find my own ways of expressing ideas—through text, written formats, or chat. It made work easier for me. 
When you think about how that manifests across a team, leaders and organizations must avoid putting rigid boxes around collaboration—whether it’s the hours we work or the place where we work. Increasing flexibility enables people to express themselves and bring forward ideas that might otherwise remain hidden. Ross: That’s a compelling vision. How do you bring that to reality? What do you do inside an organization to foster and enable that? Helen: One of the tools that helped in our research on the future of work and redesigning organizations is something simple—creating a team operating manual. The act of explicitly writing down the different ways we interact as a team opens up discussions. It allows for feedback: “Does this work for you? Should we try something different?” When these conversations don’t happen, implied assumptions remain—such as the norm of working in an office from nine to five. Explicitly stating and questioning these assumptions is step one. Then, organizations should give teams and managers the flexibility to define how they work within their sub-teams. Having operating manuals, sharing what works for your team, and bubbling up insights allow for a more bottom-up approach rather than a top-down one. It treats people like adults who understand their preferences and styles. Ross: That’s really nice. PepsiCo had an initiative where teams coordinated among themselves to determine their availability and collaboration methods. I wonder if we can push that further. People are often conditioned to fit into roles and adjust to their environments. Can we help people recognize their self-imposed constraints and flourish beyond them? Helen: This is where I’m cautiously optimistic about AI and how we integrate technology into work. When people start using AI, the initial question is often, “How can I do this more efficiently?” AI is a powerful tool that shortens tasks—like a calculator removing the need for mental math. 
However, once people move beyond efficiency, they begin asking, “What can I do differently?” AI allows us to do things we couldn’t before. It helps break conventional thinking. For example, if you use a large language model to generate 10 variations of an idea, it removes emotional bias. It shifts the conversation from defending one perspective to evaluating multiple ideas. This fosters creative discourse and integrates seamlessly into workflows without feeling like extra work. AI should not replace what exists but augment and enhance our creativity—helping us tap into what makes us uniquely human. Ross: So, AI helps individuals bring different perspectives and expand their thinking? Helen: Exactly. One of my favorite things to do with large language models is to open up the funnel. Whether it’s brainstorming writing styles, problem-solving, or scoping solutions, AI presents multiple potential paths. This reminds us that there is no single correct answer—only possibilities to explore. Ross: Gregory Bateson said wisdom comes from multiple perspectives. We now have multiple perspectives on demand. You work with leaders to redesign organizations. What guidance do you suggest? How can organizations evolve from existing structures? Helen: I don’t have the perfect answer for what the shape of organizations should be. However, we’ve been transitioning from hierarchical structures to teams-of-teams for a while, with varying success. The biggest challenge is breaking out of our mental paradigms of control. Flexible work means allowing managers and teams to design their workdays and collaboration methods rather than enforcing a company-wide approach. AI introduces another paradigm shift—it behaves unpredictably compared to traditional technology. Leaders must accept that they don’t have all the answers. Some of the best AI-driven innovations come from employees who work closely with the technology daily. 
For example, a data scientist evaluating AI’s role in data processing can quickly identify where it adds value and where it falls short. These innovations emerge at the edges, from individuals experimenting in real time. Leaders must create environments where experimentation, sharing, and collaboration thrive. Instead of dictating policies top-down, they should spotlight grassroots innovations and scale them across the organization. Ross: So, you’re describing emergence—where leaders set conditions for innovation rather than dictate precise rules? Helen: Exactly. Constraints breed creativity. If there are no guardrails or structures, people stick to the status quo and don’t innovate. Leaders must provide the right nudges—whether through hackathons, dedicated experimentation time, or open Slack channels to share discoveries. Some organizations set up “experiment hours”—weekly meetings where teams explore AI applications in a low-pressure, fun environment. This fosters creativity and keeps innovation moving. Ross: That’s a great example. Speaking of multiple perspectives, one of your recent ventures is Women Defining AI. What is it about? Helen: Women Defining AI started as an experiment about a year and a half ago. I had been working with generative AI models and noticed a significant gender gap in AI adoption. Data showed men adopting AI at higher rates than women, and anecdotally, I saw the same trend. Initially, it was just a study group where I shared what I was learning with other women. Within days, 50 people joined, and by month two, we had 150 members. It became clear that women wanted a space to ask questions, learn together, and experiment without judgment. Now, Women Defining AI is a virtual community that helps women at different stages of their AI journey. Whether it’s understanding AI’s role in their work, automating tasks, or building solutions, we guide them in gaining technical confidence and shaping the field. 
Some members have landed AI-related jobs or joined AI policy teams at their organizations. Having diverse perspectives in AI is crucial. Women in our community, particularly those from HR and other industries, quickly identify biases and blind spots that might otherwise go unnoticed. We need more voices questioning and shaping AI while we’re still in its early stages. Ross: That’s fantastic. Looking ahead to 2026, what excites you most? Helen: Personally, I’m excited about having our third baby! It’s a reminder of the new perspectives each generation brings. For Women Defining AI, 2025 will be the year we build in public. We’ve been experimenting and learning internally, but now we’re sharing real stories and projects to inspire more builders and technologists. Ross: That’s fantastic. Thank you for your time, insights, energy, and passion. Helen: Thanks for having me. The post Helen Lee Kupp on redesigning work, enabling expression, creative constraints, and women defining AI (AC Ep78) appeared first on Amplifying Cognition .…
“Generative AI is the first technology with an almost natural propensity to build a symbiotic relationship with us. But symbiosis isn’t always mutualistic—it can be parasitic, where AI benefits at the detriment of humans. How we deploy AI will determine which path we take.” – Alexandra Diening “AI provides dual affordances—it can automate our work or augment our abilities. The key challenge is deciding where to draw the line. In low-stakes tasks, automation makes sense. But in high-stakes decision-making, human intuition is irreplaceable.” – Mohammad Hossein Jarrahi “We talk a lot about lifelong learning, but we also need to embrace lifelong forgetting. If we keep piling new knowledge on top of outdated thinking, we won’t evolve. The future isn’t about ‘us vs. them’—it’s about humans and AI co-evolving together.” – Erica Orange “AI isn’t just changing how we work—it’s changing what it means to be human. We are interlacing with technology more deeply than ever, and in the future, AI won’t just be something we use—it will be something we integrate into ourselves.” – Pedro Uria Recio About Alexandra Diening, Mohammad Hossein Jarrahi, Erica Orange, & Pedro Uria Recio Alexandra Diening is Co-founder & Executive Chair of Human-AI Symbiosis Alliance. She has held a range of senior executive roles including as Global Head of Research & Insights at EPAM Systems. Through her career she has helped transform over 150 digital innovation ideas into products, brands, and business models that have attracted $120 million in funding. She holds a PhD in cyberpsychology, and is author of Decoding Empathy: An Executive’s Blueprint for Building Human-Centric AI and A Strategy for Human-AI Symbiosis. Mohammad Hossein Jarrahi is Associate Professor at the School of Information and Library Science at University of North Carolina at Chapel Hill.
He has won numerous awards for teaching and his papers, including for his article “Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making.” His wide-ranging research spans many aspects of the social and organizational implications of information and communication technologies. Erica Orange is a futurist, speaker, and author, and Executive Vice President and Chief Operating Officer of leading futurist consulting firm The Future Hunters. She has spoken at TEDx and keynoted over 250 conferences around the world, and been featured in news outlets including Wired, NPR, Time, Bloomberg, and CBS This Morning. Her book AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth is out in September 2024. Pedro Uria-Recio is a highly experienced analytics and AI executive. He was until recently the Chief Analytics and AI Officer at True Corporation, Thailand’s leading telecom company, and is about to announce his next position. He is also author of the recently launched book Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity . He was previously a consultant at McKinsey and is on the Forbes Tech Council. 
Websites: Alexandra Diening Mohammad Hossein Jarrahi Erica Orange Pedro Uria Recio LinkedIn Profiles: Alexandra Diening Mohammad Hossein Jarrahi Erica Orange Pedro Uria Recio What you will learn Understanding human-AI symbiosis and its impact Why AI can be mutualistic or parasitic The crucial role of human intuition in AI decision-making How automation and augmentation shape the future of work Rethinking AI deployment beyond traditional software models The need for lifelong forgetting to adapt to AI advancements How AI could transform humanity through deep integration Episode Resources Companies & Organizations Human-AI Symbiosis Alliance IBM OpenAI NPR Books & Publications AI and the New Human Frontier (by Erica Orange) Machines of Tomorrow (by Pedro Uria Recio) Technical Terms & Concepts Human-AI symbiosis Generative AI Automation vs. augmentation Algorithmic management Brain-computer interfaces Deep learning Data bias AI literacy AI product lifecycle Holistic decision-making Lifelong learning Transcript Ross Dawson: So, you’ve recently established the Human-AI Symbiosis Alliance, and that sounds very, very interesting. But before we dig into that, I’d like to hear a bit of the backstory. How did you come to be on this journey? Alexandra Diening: It’s a long journey. I’ll try to make it short and interesting. I entered the world of AI almost two decades ago through a very unconventional path—neuroscience. I’m a neuroscientist by training, and my focus was on understanding how the brain works. Naturally, if you want to process all the neuroscience data, you can’t do it alone. You inevitably have to touch upon AI. That was my gateway into the field. As I started working with AI, I gained a basic understanding of how it operates from a technical perspective as a scientific discipline. At that time, there weren’t many people working in this kind of AI, so the industry naturally pulled me in.
I started working in the business application of AI, progressively shifting from neuroscience to AI deployment within a business context. I worked with Fortune 500 companies across life sciences, retail, finance, and many more industries. That was my entry—my “chapter one”—into the world of AI. But as I began deploying AI within real businesses, I started noticing patterns. Sometimes AI projects succeeded, and sometimes they failed. I realized that success was most often achieved when we doubled down on human-centricity. That was an easy concept for me to grasp because cognitive science is my foundation. This human-centric approach became even more important with the emergence of generative AI. AI was no longer just in the background, crunching data and steering our decisions without us realizing it. AI has been around for quite some time, but suddenly, we could interact with it directly, almost like an agent. We could communicate with it using our language. It could capture emotions, build relationships with us, and augment our capabilities. It was no longer just a tool—it was becoming a social-technological actor. This realization led us to our hypothesis: generative AI is the first technology with an almost natural, almost default propensity to form a symbiotic relationship with humans. It’s not just a tool that does something or doesn’t—it’s about mutual interaction. The term “symbiosis” sounds very romantic, particularly because of the way pop culture has shaped our understanding of it. But in nature, symbiosis manifests across a spectrum of outcomes. It can be highly positive and mutualistic, where both parties benefit—humans improve, and AI gets better. However, it can also be parasitic, where one party benefits at the detriment of the other. This pattern became clear to me, especially as generative AI adoption increased. I saw the emergence of what I call “parasitic AI,” and that realization started stealing my sleep. 
I was no longer proud of the AI world we were building. At the time, I was working for a multibillion-dollar tech company, and I doubled down on advocating for responsible AI and human-centric practices. But even with all the support in the world, I quickly realized that corporate agendas and business impediments limited the impact I could make. That’s why we established the Human-AI Symbiosis Alliance. Our goal is twofold: first, to educate people that AI can be parasitic. It’s not just a happy story, and it’s not simply about AI taking over—it’s about how we deploy it. Second, we want to teach and empower companies to steer AI development away from parasitism and toward mutualistic AI. Ross: We are deeply immersed in digital environments, and these systems are becoming increasingly human-like. You mentioned the idea of positive symbiosis. Achieving that requires well-designed systems and an understanding of how humans behave. What do you see as the foundational leverage points that can shift us toward a positive and constructive symbiosis between humans and AI? Alexandra: The most important realization is that AI is not a living entity. It’s just a large dataset. It doesn’t have consciousness, intent, or agency. Instead of seeing AI as something that will inherently harm us, we need to take responsibility for how we deploy it. Of course, we need to ensure AI is properly regulated, that it is trained on unbiased data, and that we establish appropriate guardrails. But there’s another chapter of the conversation that very few people talk about, and it keeps me up at night: the way we deploy AI. Deploying AI in a way that doesn’t harm individuals or companies is critical. No company wants to build parasitic AI within its environment. The main issue in deployment comes from literacy. Many software engineering companies are now venturing into AI without realizing that AI development is fundamentally different from traditional software development. 
You cannot deploy AI the same way you deploy web pages or apps. It has a completely different lifecycle, set of activities, and expertise requirements. Raising awareness about this difference is crucial. Beyond that, we need frameworks—structured processes that guide responsible AI deployment. We also need to recognize that AI is not just a technology we implement; it’s a symbiotic relationship we must architect. That means not only enhancing employee efficiency in the short term but also ensuring that AI doesn’t erode human skills over time. Otherwise, we risk creating a workforce that is highly efficient but, in the long run, less capable. Another crucial element is measurement. The traditional ways we measure technology success—primarily through productivity and efficiency—are outdated for AI. We need to consider additional factors, such as how AI impacts innovation, employee well-being, and a company’s brand relationships. Instead of being shortsighted, we need a long-term focus on AI’s broader impact. Finally, AI brings entirely new risks, many of which are unprecedented. A very personal and tragic example is the case of a teenager who took his own life after interacting with an AI chatbot. When I used to warn clients about the importance of setting the right level of anthropomorphism and properly guarding AI to prevent harm, it often felt abstract. But now, unfortunately, we have a very tangible example of how things can go wrong. The key takeaway is that building a responsible, mutualistic AI requires expertise, proper architectural planning, accurate measurement frameworks, and a heightened awareness of risks. If we get those things right, we can steer AI away from parasitism and toward a future where it genuinely benefits society. Ross: In this section, we hear from Mohammad Hossein Jarrahi, Associate Professor at the University of North Carolina, Chapel Hill, from Episode 62. Ross: So, you have been focusing on human-AI symbiosis. 
I’d love to hear how you came to believe this is where you should be focusing your energy and attention. Mohammad Hossein Jarrahi: It was in 2017, and I was stuck in traffic. To tell you the story: an IBM engineer was being interviewed on NPR. They were asking him a bunch of questions about the future of AI. This was before the rise of ChatGPT and what I would call the consumerization of AI. As I was sitting in traffic with not much to do, something clicked. The engineer was providing examples that fit into three categories: uncertainty, complexity, and equivocality. As soon as I got home, I immediately started sketching out an article and finished writing it within two weeks. The idea was that we, as humans, have very unique capabilities, but we tend to underestimate them. At the same time, the smart technologies we see today—at that time, primarily powered by deep learning—are inherently different from previous information technologies. This means we need a completely different paradigm to understand how humans and AI can work together. AI isn’t going to make us extinct, but we shouldn’t treat it as just another infrastructure technology, like Skype or other traditional communication tools. That’s when I realized that the term human-AI symbiosis—which comes from biology—was a perfect way to describe how two sources of intelligence can work together. Ross: That concept is very much aligned with my work and the people I engage with. The key question is, how do we make it happen? There are quite a few people exploring this path, but we don’t yet have all the answers. What are some of the pathways that could move us toward effective human-AI symbiosis? Mohammad: It really depends on the context. That’s the crux of the issue I’ve been exploring in my articles. The question of how much we can delegate to AI isn’t black and white. It exists on a spectrum between automation and augmentation.
AI provides dual affordances—it can automate tasks or augment human capabilities. Automation means AI performs tasks autonomously with minimal supervision. Augmentation, on the other hand, keeps humans deeply involved, making them more efficient and effective. The balance between automation and augmentation depends on the context: In low-stakes decision-making, we see more automation. Many mundane tasks can be offloaded to algorithms. In high-stakes decision-making, such as in medicine, human experts need to stay in the loop for accountability reasons. These scenarios require more augmentation than automation. Machines are excellent at handling tasks that are repetitive, data-centric, and do not require intuition or emotional intelligence. However, humans excel at exception handling—making nuanced judgment calls. For example, consider loan applications: AI can efficiently process thousands of applications at once, using data to determine approvals. However, if an application is denied, a human might review it and see contextual factors—perhaps the applicant had financial troubles in the past but has shown stability in recent years. This kind of intuitive decision-making is something AI struggles with. That’s why, when it comes to organizational decision-making, AI shouldn’t be the sole authority. Stakeholder interests are often in conflict—what benefits shareholders may harm employees or customers. AI tends to optimize for one metric, but a human leader must strike a balance among competing priorities. Ross: I think a lot about the architecture of AI integration. Keeping humans in the loop is important, but where should humans be involved? That depends on the organization, decision type, and context. Are there structured ways we can design points of human involvement—whether in exceptions, approvals, or shaping judgment? Mohammad: The simplest answer is that humans should be involved whenever intuition is required.
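The loan-application pattern Jarrahi describes, automating clear cases while routing ambiguous ones to human exception handling, can be sketched in a few lines. This is an illustrative sketch only: the thresholds, field names, and function are invented, not from the episode.

```python
# Illustrative sketch of automation-vs-augmentation triage: clear-cut cases
# are decided automatically, while the ambiguous middle band is routed to a
# human for contextual, intuition-based review. All thresholds are invented.

def route_application(credit_score: int, recent_stability_years: int) -> str:
    """Return 'approve', 'deny', or 'human_review' for a loan application."""
    if credit_score >= 720:
        return "approve"  # low-stakes, clear case: safe to automate
    if credit_score < 580 and recent_stability_years == 0:
        return "deny"     # clear case: safe to automate
    # Ambiguous band, e.g. past financial trouble but recent stability:
    # exactly the contextual judgment call reserved for humans.
    return "human_review"

print(route_application(600, 3))  # → human_review
```

The design point is that the automated path handles volume, while the fallback branch encodes the "human in the loop" for exceptions rather than for every decision.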
In my article on human-AI symbiosis, I described two decision-making styles: Analytical decision-making – Data-driven and highly structured. AI has largely mastered this area. Intuition-based decision-making – Often subconscious, difficult to quantify, and essential in complex scenarios. For example, in algorithmic management, AI can assist managers, but the higher you go in an organization, the more important intuition becomes. Research in management and psychology has shown that holistic decision-making—which accounts for multiple stakeholders—relies heavily on intuition. If AI only optimizes decisions based on data, it risks missing broader considerations, such as company culture, long-term brand impact, or ethical concerns. That’s why judgment calls must remain in human hands. Ross: Next, we hear from Erica Orange, futurist and author of AI + The New Human Frontier, from Episode 59. Ross: What will allow us to master AI and ensure it benefits humanity? Erica Orange: That’s such a great question. I often talk about the difference between lifelong learning and lifelong forgetting. It’s common to hear that we should all be lifelong learners—constantly acquiring new knowledge to stay relevant. But if we keep layering new information on top of outdated thinking, we won’t truly evolve. We must also become lifelong forgetters—letting go of outdated assumptions, biases, and ways of working. I often tell my clients and audiences to identify one or two things they’re holding onto that no longer serve them. It could be a belief, a work habit, or an outdated mental model. The faster we embrace forgetting, the more space we free up for new ways of thinking. Another key point is to embrace the “AND” mindset instead of thinking in polarized extremes. We live in a world of hyper-polarization—social media echo chambers and tribalism reinforce “us vs. them” thinking.
But the future isn’t either-or—it’s about “and.” For example, when discussing humans and AI, there’s often fear of an “AI takeover.” But AI isn’t replacing us—it’s collaborating with us. The reality is one of coexistence and co-evolution. The same applies to progress and stagnation, chaos and creativity, imagination and inertia—these forces always exist together. Ross: Finally, we hear from Pedro Uria Recio, author of Machines of Tomorrow, from Episode 50. Pedro Uria Recio: In Machines of Tomorrow, I explore AI through human history. From ancient aspirations of creating human-like machines to today’s generative AI revolution, AI has always been intertwined with our progress. One of the book’s key concepts is interlacing—the idea that humans and AI will become more intimately connected. Right now, we use smartphones for everything. The fact that they exist outside our bodies is merely incidental—in the future, they will be inside us. Brain-computer interfaces, robotics, and AI-driven medicine will interlace humans and AI, potentially transforming humanity into a new species. This shift won’t happen overnight, but AI will be central to our evolution. Ross: That wraps up this episode. Thank you to all our guests for their incredible insights on human-AI symbiosis. The post Human AI Symbiosis Compilation (AC Ep77) appeared first on Amplifying Cognition.
“What I argue is you need to supplement what your brain can manage on its own. And this is where I think AI comes in… blending the human imagination together with AI’s ability to crunch massive amounts of data—that’s where I think we’re going to see a lot of power in the world of strategy.” – Rita McGrath About Rita McGrath Rita McGrath is one of the world’s top experts on strategy and innovation. She is consistently ranked among the top 10 management thinkers globally and has earned the #1 award for strategy by Thinkers50. She is Professor of Strategy at Columbia Business School, and Founder of the Rita McGrath Group and Valize LLC. Her books include The End of Competitive Advantage and Seeing Around Corners. Website: Rita McGrath Valize LinkedIn Profile: Rita McGrath University Profile: Rita McGrath What you will learn Navigating the acceleration of business and strategy Understanding inflection points and their impact on industries How AI enhances human decision-making and sense-making Why competitive advantages are becoming more transient The surprising link between digital habits and declining gum sales The future of consulting and professional services in an AI-driven world How leaders can prepare for the evolving nature of work Episode Resources People Clayton Christensen Ray Kurzweil Brian Chesky David Maister Companies & Organizations Klarna Airbnb Valize Books The End of Competitive Advantage The Living Company The Skill Code Technical Terms & Concepts Transient advantage Disruptive technology Inflection points Digitalization Dematerialization Sense-making Strategic thinking Gig economy Circular economy Automation Competitive advantage Transcript Ross Dawson: Rita, it is fantastic to have you on the show. Rita McGrath: Thank you very much for inviting me. Ross: So my personal experience is that, over time, the world has come towards me, and what I’ve been thinking has become more and more of a reality. That strikes me very much with your work.
I think you’ve been incredibly prescient. A lot of the themes you’ve worked on for years are even more relevant today than they were earlier. Has that been your feeling? Rita: It has. It has. I mean, I was writing about what we would now recognize as the lean startup movement back in the ’90s. Clayton Christensen and I were working together on his idea of disruptive technology. My book The End of Competitive Advantage, which basically argued that competitive advantages last for shorter and shorter periods of time, came out in 2013, and people are still saying, “Wow, that’s so interesting.” So it is that kind of feeling. Ross: In particular, you’ve talked about transient advantage. You were pointing out a very long time ago that advantage was becoming more and more transient, which we can frame as acceleration. And I think that there used to be a bit of debate: is the world accelerating, or is it just a feeling that it’s accelerating? So what’s your perception today in terms of where we might move forward, especially regarding the pace of change in business, strategy, and competitive advantage? Is this acceleration likely to continue? Rita: Yes. To quote Ray Kurzweil, any system that embeds experience-based learning—trial-and-error learning—tends to follow an exponential change pattern. It’s not additive, it’s not linear—it’s exponential. And we, as human beings, experience that as things moving faster and faster. So, day one, it’s two. Day two, it’s four. Day three, it’s eight. Eventually, these exponential curves take off, and I think we’re seeing quite a bit of that with the current developments in AI at the moment. Ross: You’ve pointed to this theme of inflection points. How would you frame some of the current developments in AI or its impact on business around that theme? Are we living through an inflection point or a phase—or a series of them at the moment? Rita: Yeah, I believe we are. And I would say that there are multiple levels of inflection points.
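McGrath’s Kurzweil-style doubling point can be checked with a few lines of arithmetic. The step counts and units below are purely illustrative, not from the episode:

```python
# Linear vs. doubling growth: both look similar for the first couple of
# steps, which is why exponential change "feels" sudden later on.

linear = [2 * day for day in range(1, 11)]        # 2, 4, 6, 8, ...
exponential = [2 ** day for day in range(1, 11)]  # 2, 4, 8, 16, ...

print(linear[:3], exponential[:3])   # [2, 4, 6] [2, 4, 8]
# Days 1 and 2 match exactly; by day 10 the doubling curve is 50x ahead.
print(exponential[-1] / linear[-1])  # 51.2
```

This is the "gradually, then suddenly" effect in miniature: the curves are nearly indistinguishable early, then diverge explosively.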
At a 30,000-foot level, if you think about the financial and social structures of capitalist systems, they go through these 50- to 70-year cycles each time a new technology emerges that dramatically changes our ability to do something. Going all the way back to the 1700s and the original Industrial Revolution, what you see happening is what I define as an inflection point—something that creates a 10x shift in what’s possible. In the Industrial Revolution, labor was automated. Then we had the mass production era—cars, suburbs, petroleum-based economies—which has been coming to the end of its S curve of delivering prosperity and productivity. The next wave is really this era of digitalization, which I would date to the early ’70s, with the microprocessor and the earliest digital technologies. What digitalization does is change what’s possible by a factor of 10. Some of the effects are quite surprising. For example, one of them is dematerialization—taking things that used to require their own physical device or material and digitizing them. Songs are a great example. Back in the day, you had to buy records, have a record player, and all the associated technology. Later, it was CDs, where you had to pay for 18 songs when you just wanted one. Today, we don’t even buy songs anymore—we stream them on demand on a device that doesn’t require physical input. It’s all digital. Ross: One of the key themes is strategy. How do we do strategy in a world where acceleration is happening, where there are all these overlaying themes, and these 10x shifts across different dimensions of work? This brings us to human capabilities. Humans have finite cognition, even though we have extraordinary capabilities far beyond anything else. And now, we have AI to augment, support, or complement us. I’d like to dive deeper into this, but to start, how do you frame human capabilities in strategic thinking today, and how are they complemented by AI? Rita: Sure.
Well, I think, as I mentioned, human brains think in linear terms. We think immediately in terms of, I have to get from here to there to avoid a predator, and back when we were evolving, that worked pretty well. But we don’t handle exponential systems well because they appear small and insignificant—until suddenly they don’t. It’s this whole gradually, then suddenly idea. What I argue is that we need to supplement what our brains can manage on their own, and this is where AI comes in. What I’ve set up with companies is a series of what I call time zero events—signals that indicate a future inflection point is approaching. We don’t know exactly when, but then we work backward and ask, Before that happens, what conditions need to be in place? AI makes the process of sensing signals much faster and more efficient. It supplements our brains by sorting through massive amounts of data, identifying patterns, and alerting us to relevant developments. AI isn’t very good at imagination—it’s better at hallucinations than true creativity right now. But by blending human imagination with AI’s ability to process vast amounts of data, we can create powerful tools for strategy. Ross: That’s fabulous. Rita, it’s fantastic to see the body of your work. And I think not just the open-mindedness, but also the questions you’ve asked, have anticipated the world we’re living in today. Your work is extraordinarily relevant. So where can people go to find out more about your work? Rita: Well, ritamcgrath.com is a good place to start. That’s my personal website, where you can find all kinds of information, downloadable articles, and so forth. I also publish regularly on LinkedIn, Medium, and Substack, so you can find me in those places. And for those who might be interested in more of an advisory touch, I have a sister company called Valize—that’s V-A-L-I-Z-E—that is figuring out what this new model for consulting is going to look like.
I’m not sure we have the answer yet, but we’re certainly happy to go on the journey to figure it out. Ross: Fantastic. Thank you so much for your time and insights, Rita. Rita: Thanks. The post Rita McGrath on inflection points, AI-enhanced strategy, memories of the future, and the future of professional services (AC Ep76) appeared first on Amplifying Cognition.
“AI can be an unusual voice that gives you fresh ideas, makes you think differently, and provides the kind of fuel that sparks innovation. But ultimately, humans provide the context, the judgment, and the ability to bring strategy to life.” – Christian Stadler About Christian Stadler Christian Stadler is a professor of strategic management at Warwick Business School. He is author of Open Strategy, which was named a Best Business Book by the Financial Times and Strategy + Business and has been translated into 11 languages. His work has been featured in Harvard Business Review, New York Times, Wall Street Journal, CNN, BBC, and Al Jazeera, among others. Website: Christian Stadler LinkedIn Profile: Christian Stadler University Profile: Christian Stadler What you will learn How AI is changing strategic decision-making The role of AI as a co-strategist, not a replacement Why AI disrupts but enhances boardroom discussions How open strategy leads to better execution Leveraging collective intelligence for stronger strategies The rising importance of political awareness in leadership Engaging employees to drive innovation and strategy Episode Resources Companies & Organizations Amazon IBM Books Open Strategy Technical Terms & Concepts Scenario planning Red-teaming Large language models (LLMs) ChatGPT Collective intelligence Strategic decision-making Strategy execution H-1B visas Employee engagement in strategy Transcript Ross Dawson: Christian, it’s a delight to have you on the show. Christian Stadler: Thanks for having me, Ross. It is a delight for me as well. Ross: So, you have been delving deep into a lot of your background in open strategy. You’ve also been looking at the role of AI in strategy and strategic decision-making. At a high level, how do you see the role of AI in strategy making today? Christian: I’m an optimist.
I think generally, by nature, and also when it comes to how AI can actually be useful for strategists, more and more people are coming to see AI as a partner in many different areas of what we do. I think that’s true for strategy as well. We have some form of co-decision-making, co-intelligence, or an additional voice that we can use in the strategy-making process. For that, it’s really cool. Ross: These are human-first processes, I suppose. The more complex the decisions are, the more multifaceted they become, and the more the human element needs to be at the forefront. Strategy seems to fall into that category. What are the places where AI might provide support, complementary perspectives, or analysis that are particularly valuable? Christian: Strategy, obviously, consists of different “boxes” or activities. Some involve coming up with new ideas—something new you want to do in your strategy. Other parts involve fine-tuning and formulating the strategy. Then there’s the execution and implementation side. Probably in each of these aspects, it makes sense to use AI in slightly different ways. When it comes to ideation, I can ask a tool for ideas, such as setting up a new product line. I played around with this early on when ChatGPT started gaining traction. Even then, it was phenomenally good if you guided the conversation as a strategist. If you just ask ChatGPT, you get generic suggestions, and sometimes they don’t make sense. For example, I once asked for a suggestion for a streaming service. One idea was to create some form of entertainment platform and partner with universities. Being a professor, I know that universities don’t work like that. Professors aren’t told to participate by some central directive. You need to find ways to motivate individual professors. As I pushed the platform further, better ideas came up. As long as you drive the conversation and are smart about it, AI can provide good ideas.
When it comes to fine-tuning and formulating, the tool can be quick. I’ve been experimenting with a company in Austria for over a year. They make sneakers—Gieswein. We tried seeing what happens in board meetings when we bring ChatGPT into the mix. For instance, when we needed a press release, the tool quickly drafted something. In this case, we didn’t need an agency, which saved time and resources. However, when it comes to execution, that’s more of a human game. You need to convince people to buy into ideas and feel comfortable with new directions. AI has limitations here, but other tools can help involve more people. Greater involvement aids implementation. Ross: There’s a lot there I’d like to dig into. We might do a bit of hopping around. Christian: It’s a bit long-winded, isn’t it? I just keep talking on and on. My bad. Ross: It’s all good. One interesting point is that part of Amazon’s internal processes involves starting with a press release for a potential product. Then they work backward to figure out how to achieve it. That’s something ChatGPT can facilitate in board meetings. You can draft a press release and discuss if this is something you want to pursue. Digging into the boardroom specifically, how do you see AI being valuable when working with a group of directors? For instance, red-teaming—having the AI critique decisions—seems promising. What are other potential applications? Christian: One surprising and insightful aspect of using ChatGPT in boardrooms was its disruptive nature. Imagine a group that has worked together for a long time. The process is smooth because they know how each other thinks. Then ChatGPT comes in and disrupts the flow. For instance, I might ask someone to read a page of suggestions from ChatGPT mid-meeting. It forces the group to stop and think. Initially, I thought they would hate it, but they actually liked it. The disruption brought up ideas that wouldn’t have otherwise come up in the discussion.
It changed the process in a positive way, rather than simply adding information. Ross: So you were distilling conversations, summarizing, and presenting them to the board? Christian: Essentially, I was a disturbance. For example, when discussing market entry into the U.S., I’d interrupt and say, “Here’s what ChatGPT suggests.” Having to read and discuss AI-generated content mid-meeting isn’t smooth, but that lack of smoothness was beneficial. Ross: That reinforces the idea that the facilitator plays a critical role. You acted as an AI-enabled facilitator. Your choice of interventions determined the success of the process. Christian: Absolutely. We tried different approaches: preparing content beforehand, engaging during the meeting, and doing post-meeting analysis. When the tool worked independently, the output was too superficial. It needed human direction. I’m not an industry expert, but with a feel for strategy, you can create significant benefits. Ross: One of the specific applications of AI is strategic decision-making, where you already know what decision needs to be made. The decision-making process typically involves defining the decision, generating options, assessing those options, and ultimately making a choice. AI can assist with ideation and evaluating options. How do you see AI’s role evolving in formal strategic decision-making processes, both today and in the future? Christian: You mentioned options, and I’ve always been a big fan of scenario planning—drawing pictures of what the future could look like in various versions. Some companies use two scenarios; others prefer four, making it more complex. Whatever strategy you pursue, it needs to be tested against these different futures. AI tools like ChatGPT are excellent at generating plausible future scenarios. Of course, human direction is necessary, but AI can support the process.
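One hedged sketch of how a strategist might operationalize the LLM-assisted scenario planning Stadler describes is a reusable prompt template. The function name, axes, and wording below are invented for illustration and are not from the episode:

```python
# Illustrative only: build a 2x2 scenario-planning prompt for an LLM.
# The human strategist still chooses the industry, horizon, and axes of
# uncertainty, keeping direction in human hands as discussed above.

def scenario_prompt(industry: str, horizon_years: int, axes: list[str]) -> str:
    """Build a prompt asking an LLM for four 2x2 scenario narratives."""
    axis_text = " and ".join(axes)
    return (
        f"You are a strategy facilitator. For the {industry} industry, "
        f"over a {horizon_years}-year horizon, write four plausible future "
        f"scenarios formed by the high/low combinations of {axis_text}. "
        "For each scenario give a name, a three-sentence narrative, and "
        "one early warning signal to monitor."
    )

prompt = scenario_prompt("footwear retail", 5,
                         ["AI adoption", "supply-chain regionalization"])
```

The point of the template is the division of labor: humans supply the framing and the axes of uncertainty; the model supplies narrative volume that is then tested and judged by people.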
Writing compelling, coherent stories about the future is a skill not everyone possesses, and AI can facilitate this. For now, I see AI primarily as a facilitator. In the medium term, it helps strategists think through different possibilities. Whether AI will ever be capable of independent strategic thinking, I can’t say—I’m no magician. But for now, its power lies in augmenting human intelligence rather than replacing it. Ross: In some of your work, you’ve referenced ethical concerns. Humans have the ability to grasp broader context, values, and the human experience in ways AI cannot. Personally, I believe AI will remain a strong supporting tool, but I doubt it will take over higher-order strategic decision-making. Christian: I agree. As you know, I’ve long advocated for involving more people in strategy-making. Opening up the process brings in fresh, unconventional voices, leading to better strategies. AI can serve as one of those voices—offering unexpected insights that force us to think differently. However, human context is essential. AI-generated suggestions must be assessed within the company’s reality. AI becomes even more valuable when integrated with internal company data, allowing for more tailored insights. That said, I’m cautious about assuming all relevant data is neatly captured. In big tech, this might be the case, but for many medium-sized businesses, it’s not. For example, I spoke with a CEO who runs a company that makes high-end ski gloves. His strategic decisions—what products to produce and in what quantities—aren’t based on hard data. Instead, he relies on conversations with retailers and industry experts. This highlights a limitation of AI: in many cases, businesses don’t have the vast datasets AI needs to be truly effective. Ross: Let’s dig into open strategy. Could you provide a simple framing of what open strategy is? And how does it connect to AI?
Christian: The easiest way to understand open strategy is to contrast it with traditional strategy-making. Historically, strategy was developed behind closed doors by a small group—perhaps with the help of a consulting firm. Open strategy, on the other hand, involves bringing in more voices. This approach not only generates fresher, better ideas but also makes execution smoother. The majority of failed strategies don’t fail because they were bad ideas—they fail due to poor execution. Various surveys suggest that up to 90% of strategic failures stem from execution issues. When people are involved in strategy-making, they develop buy-in. They also begin to see how strategic goals connect to their work. In our book Open Strategy, we surveyed executives who had implemented open strategy. About 69% said it led to better ideas, and 70% noted that execution was significantly improved. Ross: We can think of open strategy in different layers. One layer involves opening strategy within the organization, allowing all employees to participate. Another layer involves engaging external stakeholders—partners, suppliers, customers, or even the public. What are your thoughts on these different levels of openness? Christian: Absolutely. There are different degrees of openness. You don’t necessarily have to involve all employees—you might just expand participation beyond the usual small group. This is the most common approach and brings significant benefits, even in hierarchical organizations. For example, I worked with a company in the Middle East, where hierarchy is deeply ingrained. Initially, there was hesitation about involving middle management in strategy-making. Eventually, they agreed, and the results were fantastic. It helped align the organization behind the new strategy. In this case, we first collected middle management’s input separately because they might have hesitated to speak openly in front of top executives. Later, we shared their insights with leadership.
This process built enough trust that in a subsequent round, both groups could participate together. As for external engagement, it depends on the phase of strategy-making. During the ideation phase, involving external voices can be valuable. You don’t need to share company secrets—just frame the challenge broadly and let external contributors provide fresh ideas. Even the U.S. military has done this. The Pentagon has held open exercises where the public contributes strategic insights, but they don’t necessarily disclose how those insights are used. For execution, however, you want broader internal involvement. Everyone in the company needs to understand what’s happening. IBM ran one of the largest open strategy initiatives, involving 160,000 participants. Managing a discussion at that scale requires AI-powered tools to structure and synthesize input. Ross : Open strategy can be seen as a form of collective intelligence. Whether it’s eight board members, 100 managers, or an entire organization, the challenge is structuring participation effectively. What’s the state of the art in integrating diverse perspectives into a coherent strategy? How can we improve? Christian : I have to admit, I’m still a bit old-school when it comes to strategy. I prefer in-person workshops because they allow for deeper discussions. That said, large-scale engagements require digital tools. A structured approach is key. One method is to start with a broad survey to identify major trends. This helps leadership gauge the organization’s sentiment. Understanding what people think is happening is just as important as knowing what’s actually happening. If leadership’s actions contradict employees’ perceptions, it can create resistance. Next, bring people into structured workshops. Different teams can develop and pitch ideas. A “Dragon’s Den” format works well—teams compete to refine the best ideas. Facilitators play a crucial role in guiding discussions and ensuring productive outcomes. 
Ultimately, open strategy isn’t about turning companies into democracies where everyone votes on decisions. Instead, leadership uses the insights generated through participation to make informed choices. The key is communicating back to employees—explaining what decisions were made and why. People don’t expect to be the final decision-makers, but they value having a voice. Ross : In a world of accelerating change—particularly with AI—leaders need to develop new capabilities. What skills do senior executives, board members, and strategy-makers need to be effective in today’s landscape? Christian : First, they need to engage with AI. It’s as simple as replacing Google with an AI tool when searching for information. Play around with it, get familiar. These models are user-friendly, and you don’t need programming skills to experiment. Second, leaders must navigate the increasing entanglement between business and politics. In past decades, it was easy to overlook politics, but that illusion is gone. Leaders must understand how to operate in politically charged environments. It’s not about whether a company is conservative or liberal—successful brands exist at both extremes. Nike is known for progressive values, while Chick-fil-A is conservative, yet both maintain broad customer bases. The problem arises when companies appear inconsistent or opportunistic. For example, many big tech firms once positioned themselves as liberal but later courted the Trump administration. That shift alienated both sides. Finally, attracting and retaining talent is critical. Many CEOs cite talent shortages as their top concern. Engaging employees, making work meaningful, and fostering motivation will be key leadership skills. Ross : That ties into the aging workforce issue. Countries with declining populations are often the most receptive to AI and robotics. This shifts the workforce balance, making talent attraction even more crucial. Christian : Absolutely.
Immigration could help, but politically, it’s difficult. Interestingly, even Elon Musk—despite his conservative shift—supports H-1B visas because he recognizes the need for skilled immigrants. Ross : As we wrap up, what excites you most about your upcoming research, particularly regarding AI and strategy? Christian : I’m exploring the intersection of corporate strategy and individual decision-making. I’m also using AI to analyze emotions in strategic discussions. For example, I’m working with IBM data to study how emotions impact idea adoption in large-scale strategy sessions. Using AI for research is something I really enjoy. Ross : That sounds fascinating. Thank you for sharing your insights! Christian : Thank you! It was a pleasure. The post Christian Stadler on AI in strategy, open strategy, AI in the boardroom, and capabilities for strategy (AC Ep75) appeared first on Amplifying Cognition.
“We don’t just give creative thinking to the AI, but we actually use the AI to make space for our own creative thinking.” – Valentina Contini About Valentina Contini Valentina Contini is an innovation strategist for a global IT services firm, a technofuturist, and speaker. She has a background in engineering, innovation design, AI-powered foresight, and biohacking. Her previous work includes founding the Innovation Lab at Porsche. Website: Valentina Contini LinkedIn Profile: Valentina Contini What you will learn Exploring the power of being a professional black sheep Using AI as a creative sparring partner Bridging the gap between ideas and visuals with AI tools Accelerating foresight processes through generative AI Unlocking human potential with AI-augmented creativity Envisioning immersive future scenarios with digital personas Embracing technology to make space for critical thinking Episode Resources People Leonardo da Vinci Refik Anadol Companies NTT Technical Terms AI (Artificial Intelligence) Generative AI Brain-computer interfaces Digital twin Futures wheel Speculative design Large language models (LLM) Quantum computing Decentralization Transcript Ross Dawson : Valentina, it’s awesome to have you on the show. Valentina Contini : Oh, thank you. Thank you for inviting me here. Ross : So, you call yourself a professional black sheep. That sounds like a good job to me. So what does that mean? Valentina : On LinkedIn, a lot of people have very nice, amazing titles or super inspirational quotes. And for me, it was always like, what am I actually? After a bit of thinking, I realized that wherever I am, I am actually always the one that is different. In the past, as a mechanical engineer, I was building cars for 15 years. That’s kind of weird if you are a woman, and also not really looking like the standard engineer. Then I changed jobs, and I always ended up being the different one. 
I was in strategy consulting for a bit, and again, being an engineer in a strategy consulting role was the weird thing—it was not normal. So I’m always the weird one. I think that “professional black sheep” pretty much describes that. Ross : Well, I think the future is in being weird. I mean, if you’re not weird, then you’re probably not gonna have a job. If you are weird, then you probably will. Valentina : Yeah, definitely, definitely. I think that’s the main selling point right now. Ross : So, innovation strategy, I think, is probably a reasonable description of a lot of what you do at the moment. Starting from that, you augment yourself in many ways—you augment your work and so on. How can we augment the process of innovating, making the new faster and better? What are the elements of that? What does that look like? Valentina : I think a big part of it comes now thanks to AI, for a very specific reason. Since the pandemic, we are not really spending time in working environments together with other people in the same place. There is less of this exchange that creates innovation and creativity or sparks something out of a random discussion. Generative AI, with the leap it made in the last year, is like your sparring partner that you always have without needing to be among other people. What is interesting is that generative AI is not just one person—it’s collective knowledge from many people. It has many downsides as well, but focusing on this, I can access many people at the same time when I use a tool like generative AI. Ross : So that’s, in a way, an individual tool. It’s a creative sparring partner or can augment our creativity. I think we can maybe come back to some of that in various ways, but thinking about an organizational level—going from individual creativity to an innovation process where the organization innovates—what are some of the other pieces of that puzzle? Valentina : You can use it in many different steps of the way. 
I think another very important piece is using AI for automating easy, repetitive, and boring tasks so that employees have more time available for their creative thinking. We don’t just give creative thinking to the AI, but we actually use the AI to make space for our own creative thinking. I also believe that what is very interesting is I have a very visual brain. In my mind, there are always images of what I envision for the future—whether as a product or an idea. Tools like AI image generators can bridge this gap between the images in my brain and showing other people those images. I think that’s a very powerful way to actually augment or enhance our capabilities. Ross : Just on that, though—you are an illustrator as well, correct? Valentina : Not really. What I’m now working on is a project where we create future scenarios. The narrative is very important, but at the same time, it’s difficult to understand what the future is if you cannot see it. I use these tools to generate images of the future—products, advertisements, or speculative design. That’s something I would have never been able to do without generative AI tools. It would have taken me years of learning a new skill to make these designs myself. With this, I just spent two hours chatting with the tool, and the images I wanted came out pretty much on their own. It’s really an incredible paradigm shift because you can acquire new skills without acquiring them. Ross : Yes, let’s dig into that AI-augmented foresight. Foresight is a discipline with many facets to how it’s done in a thorough way. Obviously, one element is being able to show people what those futures look like. But where are you seeing or applying tools to augment the foresight process? Valentina : It’s a topic that I started looking into about two years ago, when GPT-3.5 was out. I was always a bit annoyed that a process like generating scenarios for a company would take so much time. 
You needed to involve many different people, experts, and stakeholders. It was a bit frustrating because it’s also the reason why it gets done only once every 3, 4, or 5 years—not more often. In a world where everything changes so fast, doing it just every five years is not enough. I started experimenting with AI, and there are many methodologies where AI can play to its strengths. For example, a futures wheel, where you would normally need many people to come up with different perspectives on impacts and second-degree impacts. AI is good at looking at large amounts of data and finding connections. Humans are always filtered by their own bias—in a positive sense. We have our own baggage, education, and culture. AI, on the other hand, brings in a collective bias. It brings many perspectives, though it still depends on where the AI was developed and which data was used to train it. For example, the bias might lean more Western, Eastern, privileged, or otherwise. But that’s a specific part of the process where AI is extremely helpful. Of course, you cannot take out critical thinking from the human. AI is just a tool. The human in the loop evaluates the results with critical thinking, deciding if what AI produces is usable or complete nonsense. Ross : So, thinking about scenarios, one of the outcomes of a scenario process is broadly twofold. First, you have a set of scenarios that you can use to identify strategic options, test strategies, and explore other possibilities. Second, an important outcome is the changed thinking of those people who participated in the process. If you delegate too much of that to the AI, you just get the scenarios without the benefit of the changed thinking. Are there ways to use AI-augmented foresight so humans start to think more diversely through the process? Valentina : It’s always a matter of who you are involving and why you are creating the scenarios. 
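To make the futures-wheel idea concrete: the method expands a trend into first-order impacts, then expands each impact into second-order impacts, and so on. Here is a minimal Python sketch of that recursion. `generate_impacts` is a hypothetical stand-in for an LLM call (in a real setup it would prompt a model for direct consequences of a change); here it returns placeholder strings so the structure is visible.

```python
# Minimal futures-wheel sketch. `generate_impacts` is a hypothetical
# stand-in for an LLM call; a real implementation would prompt a model
# for the direct consequences of the given change.
def generate_impacts(event: str, n: int = 3) -> list[str]:
    # Placeholder impacts; replace with a model query in practice.
    return [f"{event} -> impact {i}" for i in range(1, n + 1)]

def futures_wheel(trend: str, depth: int = 2, n: int = 2) -> dict:
    """Recursively expand first-, second-, ... order impacts of a trend."""
    if depth == 0:
        return {}
    return {impact: futures_wheel(impact, depth - 1, n)
            for impact in generate_impacts(trend, n)}

# Two first-order impacts, each with two second-order impacts.
wheel = futures_wheel("widespread generative AI", depth=2, n=2)
```

The human stays in the loop exactly where Valentina says: reviewing each layer of generated impacts and discarding the nonsense before expanding further.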
What I find very interesting is when I create scenarios with someone who has never used AI for this kind of exercise. The realization is often, “Oh wow, I could have never come to this point by myself in a thousand years.” It’s true that the attention goes to the tool, but at the same time, people pay so much attention to the results because they’re a bit scared of the tool. This shift in thinking already starts happening. For example, I was working with a colleague who is extremely smart. She has a PhD in supercomputing and is an expert in technological innovation in the banking sector. She was amazed by the results that came out of our process. We generated future states based on technology, societal aspects, value creation, sustainability—all these topics. I did an experiment where I input this research into the AI and asked it to generate four diverse scenarios with opposing uncertainties, along with narrative personas and other details. The AI used templates I designed and took into account the time horizon we provided. My colleague was incredibly surprised by how specific and credible the AI’s choices were. So there are two aspects. First, you’re introducing people to a very powerful technology. Second, this amazement opens them up to the results and shifts their thinking. Ross : So the process changes their thinking because they see possibilities they hadn’t considered before. Valentina : Yes, exactly. What we also do is use another set of tools. We generate personas from the future, and in workshops, we let people interview these personas. The personas are AI agents—trained LLMs with specific knowledge about the future scenario and their own background. They know who they are, what their values and challenges are, and so on. Participants can actually have a conversation with these personas, just like they would interview a customer for a product. It’s immersive and less boring. 
Instead of just listening to me telling a story, they can ask their own questions and get answers. It’s very powerful because it engages people directly. They might ask, “What’s the weather like in Japan?” or, “Why are you still working at 11 PM?” The point is not that this persona is actually in the future, but the exercise itself makes the process feel real and relatable. Ross : Yes, that’s the experiential future—creating an image is one thing, but being able to have a conversation with someone living in the future is far more engaging. It can very easily shift thinking about what’s possible and how to respond to it. Valentina : Absolutely. Ross : More generally, you also refer to yourself as an AI-augmented human. I think we’ve already covered quite a few ways in which you do that. How else do you augment yourself? Valentina : Just looking at my daily life—I’m Italian, I work in Germany in an international company, and I married a French guy. So I use four different languages every day. Switching between languages takes a bit, so I use AI tools for translation when I need to sound proper, especially for work. I also use AI to kickstart new activities. For example, when I need to organize workshops, I have my boundary conditions: four hours, a specific topic, a goal, and a number of participants. I let AI draft an outline, and then I refine it iteratively. These are very basic things, but the amount of time you save with this jumpstart is impressive. On a more advanced level, I use wearables—a smartwatch, a smart ring, and a continuous glucose monitor. The data gets fed into AI tools to analyze my health, wellness, and next steps for longevity. Another example is art. I never thought I could be an artist, but now I’m learning to create AI-powered digital art. Refik Anadol’s work inspires me. I never thought this would be possible for me, but with AI tools, I’m learning and creating. I always saw myself as a multipotentialite—someone who can do many things. 
But I never had the time to develop all those skills. AI removes that barrier. It allows me to move from multipotentialite to polymath because it either does part of the learning for me or accelerates the process. That, for me, is a paradigm shift. Ross : I love the word multipotentialite. I believe in human potential—that we’re all capable of so much. We make choices and live one life out of the many we could have led. As you say, AI now allows us to express more of our potential in ways we never could before. Valentina : Totally. I also find the idea of creating a digital twin of yourself very interesting—an AI trained on your work, your thoughts, and your way of thinking. It’s a bit Black Mirror -esque, but imagine having a digital twin of Leonardo da Vinci. We don’t have that, unfortunately. I’m not the most interesting person to replicate, but I would love to see digital twins of the greatest minds. Ross : So you describe yourself as a techno-futurist. What are the wildest and most exciting possibilities you see when it comes to amplifying human cognition and potential? Valentina : My wildest dreams always include brain-computer interfaces—being able to connect your brain to a machine, not just in one direction, where your brain controls something, but also the other way around. It’s a bit like what The Matrix showed—plugging something into your brain and learning everything you need for a task in seconds. That would be my ultimate dream. A few weeks ago, I was in Japan at the R&D forum of NTT, the company I work for. There were projects that, when put together, showed amazing potential. One project demonstrated how brain waves could control a computer to execute tasks. Another project involved using generative AI to make an avatar dance in specific styles. If you combine these two projects, you could give a new kind of life to paraplegic people. Imagine a DJ who cannot move anymore—not even a mouse—but who can think. 
With these technologies, they could “dance” again, not physically but virtually. It’s an empowering way of using technology. What might look like a small or unimportant project, like making an avatar dance, becomes a life-changing tool when applied in this way. I’m also a big fan of decentralization—making technology accessible to as many people as possible. It’s complicated and utopian in many ways. Decentralization isn’t always positive; it has its risks. But I believe many things could improve faster if decentralization were real, not just used for speculative purposes like NFTs. The thing about emerging technologies is that they seem far away until they suddenly arrive. Quantum computing, for example, has been “30 years away” for a long time, but AI was the same. Then suddenly, we had ChatGPT, DALL-E, MidJourney, and Stable Diffusion—all these tools that completely shifted what we thought was possible. What excites me is how quickly things are going to happen. It will come when we least expect it, and we’ll look back and say, “How did we not see this coming?” Ross : And what should we be doing to nudge these developments toward a positive future? You’ve mentioned dystopian futures—utopian might be hoping for too much—but what can we do to steer things in the right direction? Valentina : I think the most important thing is for people to be open to understanding technology before it’s imposed on them. We need to learn from the past. Could we have avoided some of the negative effects of social media if people had been more aware and open to understanding it earlier? I think so. A lot of people resist new technologies until they’re mainstream. By then, it’s often too late to influence how those technologies are implemented. If people started earlier—learning, experimenting, and being open to these tools—they could make more informed decisions. They could choose whether or not to use the technology before it becomes unavoidable. 
For me, education and open-mindedness are the most powerful tools we have right now. Ross : Yep, I agree. We need to be open to technology and engage with it. The more we use it, the more open-minded we become. It’s a virtuous cycle, but we need to start. Where can people go to find out more about you and your work? Valentina : Good question. Ross : Do you have a website? Valentina : Yeah, I have a website, but it’s not really up to date. I write sometimes on LinkedIn. The point is, sometimes I’m so busy learning new stuff and figuring out how to use it that I don’t take the time to talk about what I’m doing. So my website gets updated probably once a year, and my LinkedIn a bit more often than that. I’m also writing a chapter for an AI and ethics book that will be coming out next year. It’s definitely being published in Europe, but I’m not sure if it will be sold outside of Europe. I’m not so informed about the marketing plan for that. Otherwise, just contact me on LinkedIn. I’m doing this cool stuff not only for fun in my free time but also as part of my main job. Ross : Indeed. All right, excellent. Thank you so much for your time, your insights, and your fascinating work, Valentina. Valentina : Oh, thank you for taking the time. The post Valentina Contini on AI in innovation, multi-potentiality, AI-augmented foresight, and personas from the future (AC Ep74) appeared first on Amplifying Cognition.
“Not everyone can see with dragonfly eyes, but can we create tools that help enable people to see with dragonfly eyes?” – Anthea Roberts About Anthea Roberts Anthea Roberts is Professor at the School of Regulation and Global Governance at the Australian National University (ANU) and a Visiting Professor at Harvard Law School. She is also the Founder, Director and CEO of Dragonfly Thinking. Her latest book, Six Faces of Globalization , was selected as one of the Best Books of 2021 by The Financial Times and Fortune Magazine. She has won numerous prestigious awards and has been named “The World’s Leading International Law Scholar” by the League of Scholars. Website: Dragonfly Thinking Anthea Roberts LinkedIn Profile: Anthea Roberts University Profile: Anthea Roberts What you will learn Exploring the concept of dragonfly thinking Creating tools to see complex problems through many lenses Shifting roles from generator to director and editor with AI Understanding metacognition in human-AI collaboration Addressing cultural biases in large language models Applying structured analytic techniques to real-world decisions Navigating the cognitive industrial revolution with AI Episode Resources People Sam Bide Philip Tetlock Harrison Chase Companies/Organizations Dragonfly Thinking Australian National University Books Is International Law International? by Anthea Roberts Six Faces of Globalization by Anthea Roberts Technical Terms Structured analytic techniques Risk, reward, and resilience framework Large language models (LLMs) Agentic workflows Cognitive architecture Metacognition Reinforcement learning Super forecasting Wisdom of the silicon crowd Transcript Ross Dawson : Anthea, it is a delight to have you on the show. Anthea Roberts : Thank you very much for having me. Ross : So you have a very interesting company called Dragonfly Thinking, and I’d like to delve into that and dive deep.
But first of all, I’d like to hear the backstory of how you came to see the idea and create the company. Anthea : Well, it’s probably an unusual route to creating a startup. I come with no technology background initially, and two years ago, if you told me I would start a tech startup, I would never have thought that was very likely—and no one around me would have, either. My other hat that I wear when I’m not doing the company is as a professor of global governance at the Australian National University and a repeat visiting professor at Harvard. I’ve traditionally worked on international law, global governance, and, more recently, economics, security, and pushback against globalization. I moved into a very interdisciplinary role, where I ended up doing a lot of work with different policymakers. Part of what I realized I was doing as I moved around these fields was creating something that the intelligence agencies call structured analytic techniques —techniques for understanding complex, ambiguous, evolving situations. For instance, in my last book, I used one technique to understand the pushback against economic globalization through six narratives—looking at a complex problem from multiple sides. Another was a risk, reward, and resilience framework to integrate perspectives and make decisions. All of this, though, I had done completely analog. Then the large language models came out. I was working with Sam Bide, a younger colleague who was more technically competent than I was. One day, he decided to teach one of my frameworks to ChatGPT. On a Saturday morning, he excitedly sent me a message saying, “That framework is really transferable!” I replied, “I made it to be really transferable.” He said, “No, no, it’s really transferable.” We started going back and forth on this. At the time, Sam was moving into policy, and he created a persona called “Robo Anthea.” He and other policymakers would ask Robo Anthea questions. 
It had my published academic scholarship, but also my unpublished work. At a very early stage, I had this confronting experience of having a digital twin. Some people asked, “Weren’t you horrified or worried about copyright infringement?” But I didn’t have that reaction. I thought it was amazingly interesting. What could happen if you took structured techniques and worked with this extraordinary form of cognition? It allowed us to apply these techniques to areas I knew nothing about. It also let me hand this skill off to other people. I leaned into it completely—on one condition: we changed the name from Robo Anthea to Dragonfly Thinking . It was both less creepy for me and a better metaphor. This way of seeing complex problems from many different sides is a dragonfly’s ability. I think I’m a dragonfly, but I believe there are many dragonflies out there. I wanted to create a platform for this kind of thinking—where dragonflies could “swarm” around and develop ideas together. Ross : Just explain the dragonfly concept. Anthea : We took the concept from some work done by Philip Tetlock. When the CIA wanted to determine who was best at understanding complex problems, they found that traditional experts performed poorly. These experts tended to have one lens of analysis, which they overemphasized. This caused them to overlook some things and get blindsided by others. In contrast, Tetlock found a group of individuals who were much better forecasters. They were incredibly diverse and 70% better than traditional experts—35% better than the CIA itself, even without access to classified material. The one thing they had in common was that they saw the world through dragonfly eyes . Dragonfly eyes have thousands of lenses instead of one, allowing them to create an almost 360-degree view of reality. This predictive ability makes dragonflies some of the best predators in the world.
These qualities—seeing through multiple lenses, integrating perspectives, and stress-testing—are exactly what we need for complex problems. We need to see problems from many lenses: different perspectives, disciplines, and cognitive approaches. We must integrate this into a cohesive understanding to make decisions. We need to stress-test it by thinking about complex systems, dynamics, and future scenarios, so we can act with foresight despite uncertainty. The AI part of this is critical because not everyone can see with dragonfly eyes. The question becomes: can we create tools to enable people to do so? Ross : There are so many things I’d like to dive into, but just to get the big picture: this is obviously human-AI collaboration. These are complex problems where humans have the fullest context and decision-making ability, complemented by AI. What does that interface look like? How do humans develop the skills to use AI effectively? Anthea : I think this is one of the most interesting and evolving questions. In the kind of complex cognition we deal with, we aim to co-create with the LLMs as partners. What I’ve noticed is that you shift roles. Instead of being the primary generator , you become the director or manager , deciding how you want the LLM to operate. You also take on a role as an editor or co-editor, moving back and forth. This means humans stay in the loop but in a different way. Another important aspect is recognizing where humans and AI excel. Not everyone is good at identifying when they’re better at a task versus when the AI is. For instance, AI can hold a level of cognitive complexity that humans often cannot. In our risk, reward, and resilience framework, humans may overfocus on risk or reward. Some can hold the drivers of risk, reward, and resilience but can’t manage the interconnections. AI can offload some of this cognitive load. 
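The multi-lens step Anthea describes—posing the same question to several analytical perspectives before integrating them—can be sketched in a few lines of Python. `ask_model` is a hypothetical stand-in for an LLM call made with a lens-specific persona prompt; the lens names are illustrative, not the actual Dragonfly Thinking configuration.

```python
# Dragonfly-style multi-lens sketch: pose one question to several
# "lens" personas and collect their answers for later synthesis.
# `ask_model` is a hypothetical stand-in for an LLM call primed with
# a persona-specific system prompt.
LENSES = ["economist", "political scientist", "engineer", "ethicist"]

def ask_model(lens: str, question: str) -> str:
    # Placeholder answer; a real implementation would query a model.
    return f"[{lens}] view on: {question}"

def multi_lens(question: str) -> dict[str, str]:
    """Return one answer per analytical lens, keyed by lens name."""
    return {lens: ask_model(lens, question) for lens in LENSES}

views = multi_lens("pushback against globalization")
```

The integration and stress-testing stages would then operate on `views`, with the human acting as director and editor of the synthesis rather than generator of each perspective.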
The key is creating an interface that lets you focus on specific elements, cognitively “offload” them, and continue building. That’s not easy to do with a basic chat interface, for example. This is why I think the way we interact with LLMs—and the UI/UX—will evolve significantly. It’s about figuring out when the AI leads, when you lead, and how you co-create. Something like GPT’s Canvas mode is a great example. It allows real-time editing and co-creation of individual sentences, which feels like a glimpse into where this technology is heading. Ross : Yes, I completely agree on the metacognition aspect. That’s becoming central to my work—seeing your own cognition and recognizing the AI’s cognition as well. You need to pull back, observe the systemic cognition between humans and AI, and figure out how to allocate tasks effectively. Anthea : I completely agree. Over the last year and a half, I’ve realized that almost all of my work is metacognitive. I rarely tell people what to think, but I have an ability to analyze how people think—how groups think, what paradigms they operate in, and where disagreements occur at higher levels of abstraction. It turns out those second- and third-order abstractions about how to think are exactly what we can teach into these models and apply across many areas. Initially, I thought I was just applying my own metacognitive approaches on top of the AI. Now I realize I also need a deep understanding of what’s happening inside the models themselves. For instance, agentic workflows can introduce biases or particular ways of operating. You need cognitive awareness not just of your relationship with the AI but also of how the model itself operates. Another challenge is managing the sheer volume of output from the AI.
There’s often a deluge of information, and you have to practice discernment to avoid being overwhelmed. Now, I’m also starting to think about how to simplify these tools so that people with different levels of cognitive complexity can easily access and use them. That’s where a product manager would come in—to streamline what I do and make it less intimidating for others. If you combine this with interdisciplinary agents—looking at problems from different perspectives and working with experts—it’s metacognition layered on metacognition. I think this will be one of the defining challenges of our time: how we process this complexity without becoming overwhelmed or outsourcing too much of our thinking. Ross : Yes, absolutely. As a startup, you do have to choose your audiences carefully. Focusing on highly complex problems makes sense because the value is so high, and it’s an underserved market. On that note, I’m curious about the interfaces. Are you incorporating visual elements? Or is it primarily text-based, step-by-step interactions? Anthea : I tend to be a highly visual and metaphorical thinker, so I’m drawn to visuals to help with this. Visual representations can often capture complex concepts more intuitively and succinctly than words. We’re currently experimenting with ways to visually represent concepts like complex systems diagrams, interventions, causes, consequences, and effects. I also think the idea of artifacts is crucial. You see this with tools like Claude, Canvas, and others. It’s about moving beyond a chat interface and creating something that can store, build upon, and expand ideas over time. Another idea I’m exploring is “daemons” or personas—AI agents that act like specialists sitting on your shoulder. You could invoke an economics expert, a political science expert, or even a writing coach to give you critiques or perspectives. 
This leads to new challenges, like saving and version control when collaborating not just with an AI but with other humans and their AIs. These are open questions, but I expect significant progress in the next few years as we move beyond the dominance of chat interfaces. Ross : Harrison Chase, CEO of LangChain, talks about cognitive architectures, which I think aligns perfectly with what you’re doing. You’re creating systems where human and AI agents work together to enhance cognition. Anthea : Exactly. I read a paper recently on metacognition—on knowing when humans make better decisions versus when the AI does. It showed that humans often made poor decisions about when to intervene, while the AI did better when deciding whether to involve humans. That’s fascinating and shows how much work we need to do on understanding these architectures. Ross : Are there any specific cognitive architecture archetypes you’re exploring or see potential in? Anthea : I haven’t made as much progress on that yet, beyond observing the shift from humans being primary generators to directors and editors. One thing I’ve been thinking about is how our culture celebrates certain roles—like the athlete on the field, the actor on stage, or the writer—while undervaluing the coach, director, or editor. With AI, we’re moving into a world where the AI takes on those celebrated roles, and we become the coach, director, or editor. For instance, if you were creating an AI agent to represent a famous athlete, you wouldn’t ask the athlete to articulate their skills—they often can’t. You’d ask the coach. Yet, culturally, we valorize the athlete, not the coach. This redistribution of roles will be fascinating to watch. Similarly, we’ve historically overvalued STEM knowledge compared to the humanities and social sciences. Now we’re seeing a shift where those disciplines—like philosophy and argumentation—become crucial in the AI age. Ross : Yes, absolutely.
The framing and broader context are where humans shine, especially when AI has inherent limitations despite its generative capabilities. Anthea : Exactly. AI models are generative, but they’re ultimately limited and contained. Humans bring the broader perspective, but we also get tired and cranky in ways the models don’t. Ross : Earlier, you mentioned intelligence agencies as a core audience. How do their needs differ in terms of delivering these interfaces? Anthea : We’re still in the early stages, with pilots launching early next year. I’ve worked with government agencies for a long time, so I know there are differences. AI adoption in institutions is much slower than the technology itself. Governments and big enterprises are risk-averse, concerned about safety, transparency, and bias. For intelligence agencies, I expect they’ll want models that are fully disconnected from the internet, with heightened security requirements. I’m also fascinated by the Western and English-language biases in current frontier models. Down the track, I’d like to explore Chinese, Arabic, and French models to understand how different training data and reinforcement learning influence outcomes. This could enhance cross-cultural diplomacy, intelligence, and understanding. We’re already seeing ideas like wisdom of the silicon crowd, where multiple models are combined for better predictions. But I think it’s not just about combining models—it’s about embracing their diverse cultural perspectives. Ross : Yes, and I’ve seen papers on the biases in LLMs based on language and cultural training data. That’s such a fascinating and underexplored area. Anthea : Absolutely. The first book I wrote, Is International Law International?, explored how international law isn’t uniform. Lawyers in China, Russia, and the US operate with different languages, universities, and assumptions. We’re going to see the same thing with LLMs.
Western and Chinese models may each have their own bell curves, but they’ll be very different. It’s a dynamic we haven’t fully grappled with yet. Ross : And that interplay between polarization and convergence will be key. Anthea : Exactly. Social media polarizes, creating barbells—hollowing out the middle and amplifying extremes. In contrast, LLMs tend to squash toward a bell curve, centering on the median. Within a language area, LLMs can be anti-polarizing. But between language-based models, we’ll see significant polarization—different bell curves reinforcing different realities. Understanding this interplay will be critical as we move forward. Ross : This has been an incredible conversation, Anthea. What excites you most about the future—whether in your company, your work, or the world at large? Anthea : I’ve fallen completely down the AI rabbit hole. As someone without a tech background, I now find myself reading AI papers constantly—it’s like a new enlightenment or cognitive industrial revolution. The speed, scale, and cognitive extension AI enables are extraordinary. I feel like I’m living through a transformative moment that will redefine education, research, and so many other fields. It’s exciting, turbulent, and challenging—but I just can’t look away. Ross : I couldn’t agree more. It’s a privilege to be alive at this moment, experiencing what it means to think and be human in an age of such transformation. Thank you for everything you’re doing, Anthea. Anthea : Thank you for having me. The post Anthea Roberts on dragonfly thinking, integrating multiple perspectives, human-AI metacognition, and cognitive renaissance (AC Ep73) appeared first on Amplifying Cognition.
“To be a flexible leader is to make sense of the world in a way that allows you to intentionally ask, ‘How do I need to lead in this moment to get the best results for my team and the outcomes we need?’” – Kevin Eikenberry

About Kevin Eikenberry
Kevin Eikenberry is Chief Potential Officer of leadership and learning consulting company The Kevin Eikenberry Group. He is the bestselling author or co-author of 7 books, including the forthcoming Flexible Leadership. He has been named to many lists of top leaders, including twice to Inc. magazine’s Top 100 Leadership and Management Experts in the World. His podcast, The Remarkable Leadership Podcast, has listeners in over 90 countries.

Website: The Kevin Eikenberry Group
LinkedIn Profiles: Kevin Eikenberry; The Kevin Eikenberry Group
Book: Flexible Leadership: Navigate Uncertainty and Lead with Confidence

What you will learn
Understanding the essence of flexible leadership
Balancing consistency and adaptability in decision-making
Embracing “both/and thinking” to navigate complexity
Exploring the power of context in leadership strategies
Mastering the art of asking vs. telling
Building habits of reflection and intentionality
Developing mental fitness for effective leadership

Episode Resources
People: Carl Jung; F. Scott Fitzgerald; David Snowden
Book: Flexible Leadership: Navigate Uncertainty and Lead with Confidence
Frameworks/Concepts: Myers-Briggs; Cynefin framework; Confidence-competence loop
Organizations/Companies: The Kevin Eikenberry Group
Technical Terms: Leadership style; “Both/and thinking”; Compliance vs. commitment; Ask vs. tell; Command and control; Sense-making; Plausible cause analysis

Transcript
Ross Dawson: Kevin, it is wonderful to have you on the show. Kevin Eikenberry: Ross, it’s a pleasure to be with you. I’ve had conversations about this book for podcasts. This is the first one that’s going to go live to the world, so I’m excited about that. Ross: Fantastic.
So the book is Flexible Leadership: Navigate Uncertainty and Lead with Confidence. What does flexible leadership mean? Kevin: Well, that’s a pretty good starting question. Here’s the big idea, Ross: so many people have come up in leadership and taken assessments of one sort or another. They’ve done Strengths Finder or a leadership style assessment, and it’s determined that they are a certain style or type. That’s useful to a point, but it becomes problematic beyond that. Humans are pattern recognizers, so once we label ourselves as a certain type of leader, we tend to stick to that label. We start thinking, “This is how I’m supposed to lead.” To be a flexible leader means we need to start by understanding the context of the situation. Context determines how we ought to lead in a given moment rather than relying solely on what comes naturally to us. Being a flexible leader involves making sense of the world intentionally and asking, “How do I need to lead in this moment to get the best results for my team and the outcomes we’re working towards?” Ross: I was once told that Carl Jung, who wrote the typology of personalities that forms the foundation of Myers-Briggs, said something similar. I’ve never found the original source, but apparently, he believed the goal was not to fix ourselves at one point on a spectrum but to be as flexible as possible across it. So, we’re all extroverts and introverts, sensors and intuitors, thinkers and feelers. Kevin: Exactly. None of us are entirely one or the other on these spectrums. They’re more like continuums. Take introvert vs. extrovert. Some people are at one extreme or the other, but no one is a zero on either side. The problem arises when we label ourselves and think, “This is who I am.” That may reflect your natural tendency, but it doesn’t mean that’s the only way you can or should lead. Ross: One of the themes in your book is “both/and thinking,” which echoes what I wrote in Thriving on Overload. 
You can be both extroverted and introverted. I see that in myself. Kevin: Me too. Our world is so focused on “either/or” thinking, but to navigate complexity and uncertainty as leaders, we must embrace “both/and” thinking. F. Scott Fitzgerald once said something along the lines of, “The test of a first-rate intelligence is the ability to hold two opposing ideas in your mind at the same time and still function.” I’d say the same applies to leadership. To be highly effective, leaders must consider seemingly opposite approaches and determine what works best given the context. Ross: That makes sense. Most people would agree that flexible leadership is a sound idea. But how do we actually get there? How does someone become a more flexible leader? Kevin: The first step is recognizing the value of flexibility. Many leaders get stuck on the idea of consistency. They think, “To be effective, I need to be consistent so people know what to expect from me.” But flexibility isn’t the opposite of consistency. We can be consistent in our foundational principles—our values, mission, and core beliefs—while being adaptable in how we approach different situations. Becoming a flexible leader requires three things:
1. Intention – Recognizing the value of flexibility.
2. Sense-making – Understanding the context and what it requires of us.
3. Flexors – Knowing the options available to us and deciding how to adapt in a given situation.
Ross: This aligns with my work on real-time strategy. A fixed strategy might have worked in the past, but in today’s world, we need to adapt. At the same time, being completely flexible can lead to chaos. Kevin: Exactly. Leaders need to balance consistency and flexibility, knowing when to lean toward one or the other. Leadership is about achieving valuable outcomes with and through others. This creates an inherent tension—outcomes vs. people. The answer isn’t one or the other; it’s both.
For every “flexor” in the book, the goal isn’t to be at one extreme of the spectrum but to find the balance that best serves the team and the context. Ross: You’ve mentioned the word “flexor” a few times now. I think this is one of the real strengths of the book. It’s a really useful concept. So, what is a flexor? Kevin: A flexor is a continuum between two ends of something that matters. Let’s use an example. On one end, we have achieving valuable outcomes. On the other end, we have taking care of people. Some leaders lean toward focusing on outcomes—getting the work done no matter what. Others lean toward prioritizing their people—ensuring their well-being and development so outcomes follow. The reality is that leadership requires balancing both. Sometimes the context calls for one approach more than the other. For instance, in moments of chaos, compliance might be necessary to maintain safety or order. In other situations, you’ll need to inspire commitment for long-term success. A leader must constantly assess the context and decide where to lean on the spectrum. Ross: That’s a great example. Another one might be between “ask” and “tell.” Kevin: Yes, exactly! Leaders often believe they need to have all the answers, so they default to telling—giving directives and expecting people to follow. But sometimes, asking is far more effective. Your team members often have perspectives and information you don’t. By asking rather than telling, you gain insights, foster collaboration, and build trust. Of course, it’s not about always asking or always telling. It’s about understanding when to lean toward one and when the other might be more effective. Ross: That makes sense. In today’s world, consultative leadership is highly valued, especially in certain industries. Many great leaders lean heavily on asking rather than telling. Kevin: Absolutely, but even consultative leaders need to recognize when the situation calls for decisiveness.
If there’s urgency or a crisis, sometimes the team just needs clear instructions: “Here’s what we need to do.” Being a flexible leader means being intentional—understanding the context and adjusting your approach, even if it doesn’t align with your natural tendencies. Ross: That brings us to the concept of sense-making. Leaders need to make sense of their context to decide where they stand on a particular flexor. How can leaders improve their sense-making capabilities? Kevin: The first step is recognizing that context matters and that it changes. Many leaders rely on best practices, but those only work in clear, predictable situations. Our world is increasingly complex and uncertain. In such situations, we need to adopt “good enough” practices or experiment to find what works. To improve sense-making, leaders must build a mental map of their world. Is the situation clear, complicated, complex, or chaotic? This aligns with David Snowden’s Cynefin framework, which I reference in the book. By identifying the nature of the situation, leaders can adjust their approach accordingly. Ross: The Cynefin framework is a fantastic tool, often used in group settings. You’re applying it here to individual leadership. Kevin: Exactly. It’s not just about guiding group processes. It’s about helping leaders see the situation clearly so they can flex their approach. Ross: That’s insightful. Leaders don’t operate in isolation—they’re part of an organizational context. How does a leader navigate their role while considering the expectations of their peers, colleagues, and supervisors? Kevin: Relationships play a critical role. The better your relationships with peers and supervisors, the more you understand their styles and perspectives. This helps you navigate the context effectively. Sometimes, though, you may need to challenge others’ perspectives—respectfully, of course. 
If someone is treating a situation as chaotic when it’s actually complex, your role as a leader may be to ask questions or provide a different perspective. Being intentional is key. Leadership often involves breaking habitual responses, pausing to assess the context, and deciding if a different approach is needed. Ross: That’s a journey. Leadership habits are deeply ingrained. How do leaders move from their current state to becoming more flexible and adaptive? Kevin: That’s the focus of the third part of the book—how to change your habits. First, leaders need to recognize that their natural tendencies might not always serve them best. Without this realization, no progress is possible. Next, they must build new habits, starting with regularly asking questions like: What’s the context here? What does this situation require of me? How did that approach work? Reflection is crucial. Leaders should consistently ask, “What worked, what didn’t, and what can I learn from this?” Another valuable practice is what I call “plausible cause analysis.” Instead of jumping to conclusions about why something happened, consider multiple plausible explanations. For example, if a team doesn’t respond to a question, don’t assume they’re disengaged. There could be several reasons—perhaps they need more time to think or the question wasn’t clear. By exploring plausible causes, leaders can choose responses that address most potential issues. Ross: That’s a great framework for reflection and improvement. It also ties into mental fitness, which is so important for leaders. Kevin: Exactly. During the pandemic, we worked extensively with clients on mental fitness—not just mental health. Mental fitness involves proactively building resilience, much like physical fitness. Reflection, gratitude, and self-awareness are all part of maintaining mental fitness. Leaders who invest in their mental fitness are better equipped to handle challenges and make sound decisions. 
Ross: Let’s circle back to the book. What would you say is its ultimate goal? Kevin: The goal of Flexible Leadership is to help leaders navigate uncertainty and complexity with confidence. For 70 years, leadership models have tried to simplify the real world. While those models are helpful, they’re inherently oversimplified. The ideas in the book aim to help leaders embrace the complexity of the real world, equipping them with tools to become more effective and, ultimately, wiser. Ross: Fantastic. Where can people find your book? Kevin: The book launches in March, but you can pre-order it now at kevineikenberry.com/flexible. That link will take you directly to Amazon. You can also learn more about our work at kevineikenberry.com. Ross, it’s been an absolute pleasure. Thanks for having me. Ross: Thank you so much, Kevin! The post Kevin Eikenberry on flexible leadership, both/and thinking, flexor spectrums, and skills for flexibility (AC Ep72) appeared first on Amplifying Cognition.
“It’s not just about the AI itself; it’s about the way we deploy it. We need to focus on human-centric practices to ensure AI enhances human potential rather than harming it.” – Alexandra Diening

About Alexandra Diening
Alexandra Diening is Co-founder & Executive Chair of the Human-AI Symbiosis Alliance. She has held a range of senior executive roles, including Global Head of Research & Insights at EPAM Systems. Through her career she has helped transform over 150 digital innovation ideas into products, brands, and business models that have attracted $120 million in funding. She holds a PhD in cyberpsychology, and is author of Decoding Empathy: An Executive’s Blueprint for Building Human-Centric AI and A Strategy for Human-AI Symbiosis.

Website: Human-AI Symbiosis
LinkedIn Profiles: Alexandra Diening; Human-AI Symbiosis Alliance
Book: A Strategy for Human-AI Symbiosis

What you will learn
Exploring the concept of human-AI symbiosis
Recognizing the risks of parasitic AI
Bridging neuroscience and artificial intelligence
Designing ethical frameworks for AI deployment
Balancing excitement and caution in AI adoption
Understanding AI’s impact on individuals and organizations
Leveraging practical strategies for mutualistic AI development

Episode Resources
Organizations and Alliances: Human AI Symbiosis Alliance; Fortune 500 companies
Books: A Strategy for Human AI Symbiosis
Technical Terms: Human-AI symbiosis; Generative AI; Cognitive sciences; Cyberpsychology; Neuroscience; AI avatars; Algorithmic bias; Responsible AI; Symbiotic AI

Transcript
Ross Dawson: Alexandra, it’s a delight to have you on the show. Alexandra Diening: Thank you for having me, Ross. Very happy to be here. Ross: So you’ve recently established the Human AI Symbiosis Alliance, and that sounds very, very interesting. But before we dig into that, I’d like to hear a bit of the backstory. How did you come to be on this journey? Alexandra: It’s a long journey, but I’ll try to make it short and quite interesting.
I entered the world of AI almost two decades ago, and it was through a very unconventional path—neuroscience. I’m a neuroscientist by training, and my focus was on understanding how the brain works. Of course, if you want to process all the neuroscience data, you can’t do it alone. Inevitably, you need to incorporate AI. This was my gateway to AI through neuroscience. At the time, there weren’t many people working on this type of AI, so the industry naturally pulled me in. I transitioned to working on business applications of AI, progressively moving from neuroscience to AI deployment within business contexts. I worked with Fortune 500 companies across life sciences, retail, finance, and more. That was the first chapter of my entry into the world of AI. When deploying AI in real business scenarios, patterns start to emerge. Sometimes you succeed; sometimes you fail. What I noticed was that when we succeeded and delivered long-term tangible business value, it was often due to a strong emphasis on human-centricity. This focus came naturally to me, given my background in cognitive sciences. This emphasis became even more critical with the emergence of generative AI. Suddenly, AI was no longer just a background technology crunching data and influencing decisions behind the scenes. It became something we could interact with using natural language. AI started capturing emotions, building relationships, and augmenting our capabilities, emerging as a kind of social, technological actor. This led to our hypothesis that generative AI is the first technology with a natural propensity to build symbiotic relationships with humans. Unlike traditional technologies, there is mutual interaction. While “symbiosis” may sound romantic, it can manifest across a spectrum of outcomes, from positive (mutualistic) to negative (parasitic). In business, I started to see the emergence of parasitic AI—AI that benefits at the detriment of humans or organizations. 
This realization began to trouble me deeply. While I was working for multi-billion-dollar tech companies, I advocated for Responsible AI and human-centric practices. However, I realized the impact I could have was limited if this remained a secondary concern in corporate agendas. This led to the establishment of the Human AI Symbiosis Alliance. Its mission is to educate people about the risks of parasitic AI and to guide organizations in steering AI development toward mutualistic outcomes. Ross: That’s… well, there’s a lot to dig into there. I look forward to delving into it. You referred to being human-centric, and I think you seem to be a very human-centric person. One point that stood out was the idea of generative AI’s propensity for symbiosis. Hopefully, we can return to that. But first, you did your Ph.D. in cyber psychology, I believe. What is cyber psychology, and what did you learn? Alexandra: Cyber psychology, when I started, was quite unconventional and still is to some degree. It combines psychology, medical neuroscience, human-computer interaction, marketing science, and technology. The focus is on how human interaction and behavior change within digital environments. In my case, it was AI-powered digital environments, like social media and AI avatars. Part of my research examined how long-term exposure to these environments impacts behavior, emotions, and even biology. For example, interacting with AI-powered technologies over time can alter brain connectivity and structure. The goal was to identify patterns and, most importantly, help tech companies design technologies that uplift human potential rather than harm it. Ross: Today, we are deeply immersed in digital environments and interacting with human-like systems. You mentioned the importance of fostering positive symbiosis. This involves designing both the systems and human behavior. What are the leverage points to achieve a constructive symbiosis between humans and AI? 
Alexandra: The most important realization is that AI itself isn’t a living entity. It lacks consciousness, intent, and agency. The focus should be on our actions—how we design and deploy AI. While it’s vital to address biases in AI data and ensure proper guardrails, the real danger lies in how AI is deployed. Deployment literacy is key. Many tech companies treat AI like traditional software, but AI requires a completely different lifecycle, expertise, and processes. Awareness and education about this distinction are essential. Beyond education, we need frameworks to guide deployment. Companies must not only enhance employee efficiency but also ensure that skills aren’t eroded over time, turning employees into efficient yet unskilled workers. Measurement is another critical aspect. Traditional success metrics like productivity and efficiency are insufficient for AI. Companies must consider innovation indices, employee well-being, and brand relationships. AI’s impact needs to be evaluated with a long-term perspective. Finally, there are unprecedented risks with AI. For example, recent events, like a teenager tragically taking their life after interacting with an AI chatbot, highlight the dangers. Companies must be aware of these risks and prioritize expertise, architecture, and metrics that steer AI deployment away from parasitism. Ross: One of the things I understand you’re launching is the Human AI Symbiosis Bible. What is it, what does it look like, and how can people use it to put these ideas into practice? Alexandra: The “Human AI Symbiosis Bible” is officially titled A Strategy for Human AI Symbiosis. It’s already available on Amazon, and we’re actively promoting it. The book acts as a guide for stakeholders in the AI space, transitioning them from traditional software development practices to AI-specific strategies. The content is practical and hands-on, tailored to leaders, designers, engineers, and regulators.
It starts with foundational concepts about human-AI symbiosis and its importance. Then it provides frameworks and processes for avoiding common pitfalls. What sets it apart is its practicality. It’s not a theoretical book that simply outlines risks and concepts. We include over 70 case studies from Fortune 500 companies, showcasing real-world examples of AI failures and successes. These case studies highlight lessons learned so readers can avoid repeating the same mistakes. We also had 150 contributors, including 120 industry practitioners directly involved in building and deploying AI. The book synthesizes their insights and experiences, offering actionable guidance rather than prescribing a single “correct” way to develop and deploy AI. It’s a resource to help leaders ask the right questions, make informed decisions, and prepare for what we call the AI game. Ross: Of course, everything you’re describing is around a corporate or organizational context—how AI is applied in organizations. You suggest that every aspect of AI adoption should align with the human-AI symbiosis framework. Alexandra: Absolutely. The message is clear: organizations must go beyond viewing AI as merely a technological or data exercise. They need to understand its profound effects on the human factor—both employees and customers. As we’ve discussed, generative AI inherently influences human behavior. Organizations must decide how they want this symbiosis to manifest. Do they want AI to augment human potential and drive mutual benefits, or allow parasitic patterns to emerge, harming individuals and the organization in the long term? Ross: You and I might immediately grasp the concept of human-AI symbiosis, but when you present this in a corporate boardroom, some people might be puzzled or even resistant. How do you communicate these ideas effectively to business leaders? Alexandra: It’s essential to avoid letting the conversation become too fluffy or esoteric. 
When introducing human-AI symbiosis, we frame the discussion around a tangible enemy: parasitic AI. No company wants to invest time, money, and resources into deploying AI only to have it harm their organization. We start by defining parasitic AI and sharing quantified use cases, including financial costs and operational impacts. This approach grounds the conversation in real-world stakes. From there, we guide leaders through identifying parasitic patterns in their organization and preventing them. By addressing the risks, we create space for mutualistic AI to thrive. This framing—focusing on preventing harm—proves very effective in getting leaders engaged and invested. Ross: What you’re describing seems to extend beyond individual human-AI interactions to an organizational level—symbiosis between AI and the entire organization. Is it one or the other, or both? Alexandra: It’s both. On the individual level, if you enhance an employee’s productivity but they become disengaged or leave the organization, it ultimately harms the company. Similarly, if employees become more efficient but lose critical skills over time, the company’s ability to innovate is compromised. The connection between individual outcomes and organizational success is inseparable. Organizations must consider how AI impacts employees on a personal level and translate those effects into broader business objectives like resilience, innovation, and long-term sustainability. Ross: It’s been almost two years since the “ChatGPT moment” that changed how many view AI. As AI capabilities continue to evolve rapidly, what are the most critical leverage points to drive the shift toward human-AI symbiosis? Alexandra: It starts with literacy and awareness. Leaders, innovators, and engineers must understand that AI is fundamentally different from traditional software. The old ways of working don’t apply anymore, and clinging to them will lead to mistakes. 
Education is the first pillar, but it must be followed by practical tools and frameworks. People need guidance on what to do and how to do it. Case studies are crucial here—they provide real-world examples of both successes and failures, demonstrating what works and what doesn’t. Lastly, we need regulatory guardrails. I often use the analogy of a driving license. You wouldn’t let someone drive a car without proper training and certification, yet we have people deploying AI systems without sufficient expertise. Regulation must define minimum requirements for AI deployment to prevent harm. Ross: That ties into people’s attitudes toward AI. Surveys often show mixed feelings—excitement and nervousness. In an organizational context, how do you navigate this spectrum of emotions to foster transformation? Alexandra: The key is to meet people where they are, whether they’re excited or scared. Listen to their concerns and validate their perspectives. Neuroscience tells us that most decisions are driven by emotion, so understanding emotional responses is critical. The goal is to balance excitement and caution. Pure excitement can lead to reckless adoption of AI for its own sake, while excessive fear can result in resistance or harmful practices, like shadow AI usage by employees. Encouraging a middle ground—both excited and cautious—creates a productive mindset for decision-making. Ross: That’s a great way to frame it—balancing excitement with due caution. So, as a final thought, what advice would you give to leaders implementing AI? Alexandra: First, educate your teams. Don’t pursue AI just because it’s trendy or looks good. Many AI proofs of concept never reach production, and some shouldn’t even get that far. Understand what you’re getting into and why. Second, ensure you have the right expertise. There are many self-proclaimed AI experts, but true expertise comes from long-term experience. Verify credentials and include at least one seasoned expert in your team. 
Third, go beyond technology and data. Focus on human factors, ethics, and responsible AI. Consider how AI will impact employees, customers, and society at large. Fourth, establish meaningful metrics. Productivity and efficiency are important, but so are innovation, employee well-being, and long-term brand value. Measure what truly matters for your organization. Finally, get a third-party review. Independent assessments can spot parasitic patterns early and help course-correct. It’s a small investment for significant protection. Ross: That’s excellent advice. Identifying parasitic AI requires awareness and understanding, and your framing is incredibly valuable. How can people learn more about your work? Alexandra: Visit our website at h-aisa.com . We publish resources, case studies, expert interviews, and event details. You can also find our book, A Strategy for Human AI Symbiosis , on Amazon or through our site. We’re actively engaging with universities, conferences, NGOs, and media to spread awareness. We’ll also host an event in Q1 2025. For updates, follow us on LinkedIn and join the Human AI Symbiosis Alliance group. Ross: Fantastic. We’ll include links to your resources in the show notes. Thank you for sharing your insights and for your work in advancing human-AI symbiosis. It’s an essential and positive framework for organizations to adopt. Alexandra: Thank you, Ross. It was a pleasure. The post Alexandra Diening on Human-AI Symbiosis, cyberpsychology, human-centricity, and organizational leadership in AI (AC Ep71) appeared first on Amplifying Cognition .…
“What these tools allow you to do is very, very quickly go from an idea to sort of an 80% manifestation of it. It’s not just about the technology—it’s about understanding how, when, and why to use it to unlock collective intelligence.” – Kyle Shannon “We’ve discovered you can externalize the voice in your head into something you can have a dialogue with, creating reflective moments that result in documentation, not fleeting thoughts. That’s transformative.” – Kevin Clark About Kevin Clark & Kyle Shannon Kevin Clark is the President and Federation Leader of Content Evolution, a global consulting ecosystem working in brand, customer experience, business strategy and transformation. He previously worked for IBM as Program Director, Brand & Values Experience. He is on the board of numerous companies and is the author of numerous articles, book chapters, and books including Brandscendence . Kyle Shannon is Founder & CEO of video production company Storyvine, Founder of collaborative community the AI Salon, and Chief Generative Officer of Content Evolution. Previous roles include as EVP Creative Strategy at The Distillery and Co-Founder of Agency.com . 
Websites: www.contentevolution.net www.thesalon.ai LinkedIn Profiles Kevin Clark Kyle Shannon Book Collective Intelligence in the Age of AI What you will learn Exploring the power of digital twins in collaboration Overcoming creative blocks with generative AI tools Asking better questions to unlock AI’s potential Designing structured interviews for personalized AI Understanding collective intelligence in the digital age Rapid prototyping to test and refine ideas quickly Reshaping industries with untapped organizational data Episode Resources Emily Shaw Aristotle Steve Jobs Content Evolution CoLab Storyvine AI Salon Fortune 500 Gartner Digital twins Generative AI Large Language Models (LLMs) GPT Notebook LM Transformer architecture Data collaboratives Books, Shows, and Titles Collective Intelligence and AI Candy Ears The Hitchhiker’s Guide to the Galaxy Transcript Ross Dawson : Ross, Kevin, and Kyle, wonderful to have you on the show. Kevin Clark : Pleasure to be here. Kyle Shannon : Ross, great to be here. Ross : So, you created a book recently called Collective Intelligence and AI . I’d like to pull back to the big picture of where this fits into what you’re doing. This organization is called Content Evolution. How did you get to this place of creating this book and the other things you are doing using AI to assist in your work? Kevin : Well, Content Evolution itself is a federation of companies that are aligned. We’re all thoughtful leaders and innovators and have been at it for 23 years now. This technology is helping us pull the thread forward a lot faster. As Kyle will describe in a moment, we have almost 30 digital agents—or what we call digital advisors—of ourselves. As a result, we have a collective of those, and we can all write together. We’ve published articles and done all kinds of things. This book is a particular expression between the two of us because we’ve been talking to each other for over a decade. 
It’s the residue of a decade’s worth of weekly conversations. There’s more to it—Kyle, say more. Kyle : When we started, we put together a group within Content Evolution called CoLab. The initial idea was, “Hey, this AI stuff is happening.” We started this probably a year and a half ago, almost two years ago. Generative AI was clearly evolving rapidly, so it felt important to explore. Like with all new technologies, you start with the tools, but very quickly, you ask, “Why? What are we trying to accomplish?” Content Evolution is an organization that’s a couple of decades old. One challenge was figuring out who’s in it and what talents exist within it. Initially, we asked, “Could we create a tool using generative AI to help someone discover the right person for a business problem?” That’s how it started. Over time, we realized we could create digital representations of ourselves—digital twins or digital advisors—that people could interact with 24/7. Even if Kevin wasn’t available, you could get his point of view. We’ve built 30 of these digital twins. They’re all in a single entity, a single GPT, where we can query them for the Content Evolution perspective on a topic. Individuals within that group can also comment on outputs. A big part of what we’re exploring now is understanding how, when, and why to use these tools. That’s far more fascinating than just the technology itself. Kevin : By the way, Kyle is the world’s first Chief Generative Officer. We didn’t put AI in the title because being generative is more important than the specific technologies you use. It’s about the practices, methodologies, and discernment of when to apply them—and sometimes, when to set them aside. We’ve discovered you can overcome writer’s block quickly by having a prompted start for something you’re thinking about. We’re also learning to externalize the voice in our heads into something we can have a dialogue with. 
This creates reflective moments and produces documentation rather than fleeting thoughts. Fascinating, isn’t it? Ross : Absolutely. The title Chief Generative Officer feels more appropriate, given the context. AI is just a set of tools. Kyle : Exactly. You can generate content with the tools or on your own. It could even be a hybrid. You can also generate revenue or other outcomes. The generative aspect goes beyond just the tools. Ross : The questions you raised are exactly the kinds of questions I wanted to ask. Starting with the basics, how are these digital twins set up? Are they based on system prompts or custom instructions for commercial LLMs? Kyle : Right now, they’re custom GPTs, but we’ve experimented with other platforms like Poe and Claude. Initially, we wanted to scrape LinkedIn profiles to discover expertise within Content Evolution. But we realized a LinkedIn profile is a very thin, historical slice of who someone is. It doesn’t reflect how they talk, think, or solve problems. We designed a structured interview with 27 questions across various categories. This interview digs into who someone is today, their inspirations, problem-solving approaches, worldview, and more. The answers to these questions form the foundational data for a custom GPT with a tailored prompt. Ross : So, for someone in your network, do you conduct a voice or text interview for these questions? Kevin : That’s a great question because there’s a difference. Kyle : We learned that when people wrote their responses in text, their digital twins turned out horrible—just bad. People don’t write the same way they talk. We now conduct video interviews where we go through the structured questions interactively. As the interviewer, if I notice someone hasn’t gone deep enough or gets excited about something but cuts themselves off, I’ll ask them to expand. Once we made this interactive, the digital twins came to life. Kevin : It takes about 45 minutes to complete the interview. 
The questions are designed to be unusual, going beyond superficial answers. People are often surprised by the depth of the questions. Kyle : One of my favorites, which was developed by Joke Gamble, is: “Describe your career in three acts.” It frames the career as a journey or drama, putting you in a different mental space. The quality of the questions is everything. Kevin : Exactly. The quality of the question determines the quality of the answer from a large language model. At Content Evolution, our original tagline was “Be Intentional.” For 20 years, we’ve challenged our clients to ask better questions. That’s what we’ve been practicing all along. Kyle : Asking better questions is the core of being a good prompt engineer. It’s about having expertise but also being able to communicate across disciplines. Our team members have this cross-disciplinary ability, which makes us well-suited to leverage this technology. Ross : That’s a key point. Even though the answers from LLMs are improving, the most important thing remains the question. It reminds me of The Hitchhiker’s Guide to the Galaxy —you may know the answer, but asking the right question is crucial. Kyle : Exactly. Inside the Heart of Gold with the improbability engine, you never know what’ll come up. Kevin : Right. I’d also argue that this technology is redeeming the liberal arts degree. It enables specialization across disciplines, encouraging lifelong learners to embrace a generalist perspective. It’s about knowing how to organize and synthesize human knowledge. Ross : Absolutely. Humans excel at synthesis, and now we have access to diverse ideas that nurture that capability. From the structured interviews, how do you translate the data into a GPT? Kyle : We made strategic decisions for our official Content Evolution digital advisors. All of them share the same structural data: the interview forms the core, and every twin has the same system prompt. If we update the core prompt, it applies to all of them. 
The collection of 30 twins also has its own prompt. Some members have created duplicates of their twins and added their writings, articles, books, and papers. These are different types of GPTs—one captures the person’s essence, and the other their body of work. It’s fascinating because the core data makes the twins inherit the personalities of the people behind them. Kevin : Here’s a fun example. Kyle met a podcaster, Emily Shaw, who has a show called Candy Ears . She experimented with our digital twins, taking voice samples to mimic how we sound. Then she asked our twins questions and recorded their answers. Kyle : We first answered the questions ourselves. Then she played the twins’ responses, and we rated them. Kevin : I rated my twin a 7.5 out of 10. My wife, Heidi, said it sounded just like me and thought it deserved a 9 or 10. She’s lived with me for almost 50 years, so I’ll take her word for it! The question was something broad, like, “What is the meaning of life?” The alignment between my response and my twin’s was striking. Kyle : For me, the text responses were spot on. However, the voice delivery didn’t match my dynamic range—I talk loudly, softly, quickly, and slowly. For someone with a monotone style, the twins are nearly identical. Ross : Voice rendition is a challenge, but we’re on the verge of improving it. Kevin, you mentioned earlier that you use this group of 30 digital twins collectively. How does that work? Kevin : All the individual twins are in a common folder labeled “CE GPT Profile Complete.” When I write an article for LinkedIn, I can query the folder: “Who in the community would have something to say about this?” It pulls relevant quotes and drafts an article, complete with an executive summary and attributions like, “Kyle says this,” or “Cindy Coon says that.” Before publishing, I share the draft for feedback to ensure accuracy. Even if people don’t actively use this technology, engaging with it leaves a residue that makes them better. 
For instance, I couldn’t spell well growing up, but using spell check gave me immediate feedback and improved my skills. Similarly, interacting with this tech enhances capabilities over time. Ross : So these are custom GPTs fine-tuned with your methodology? Kyle : Yes, that’s correct. They’re private but also available in the GPT store for public interaction as part of our marketing. People can experience what Kyle Shannon or the collective might say on various topics. Kevin : We also host a weekly program called Content Evolution: New World , where people can call in. Sometimes, we feed the transcripts into the GPT profile to generate LinkedIn posts summarizing the discussion. It does a decent job turning an hour-long conversation into a seven-paragraph post. Ross : Kyle, you mentioned the book Collective Intelligence and AI . What’s the process from idea to a finished, shippable product? Kyle : Kevin often says the book reflects a decade of our conversations. We meet weekly, and I’m the CEO of Storyvine, where Kevin is our senior advisor. This collaboration has been ongoing for years. Personally, when I get excited about new technology, I dive in. Large language models initially felt counterintuitive—simple probability calculators, yet producing outputs that felt human. One day, I saw a tweet: “Artificial intelligence is the collective intelligence of humanity.” That hit me. The magic isn’t in how the tool works; it’s in what it’s trained on. I realized it allows us to collaborate with everyone who’s contributed to the internet. I shared this insight with Kevin, and it sparked deeper discussions about collective intelligence—not just in machines but also in our CoLab. The idea evolved, and tools helped us quickly go from concept to an 80% draft. Kevin : After that conversation, I went into the tools, wrote some prompts, and told Kyle, “I just outlined this as a book. 
What do you think?” He mentioned a tool that could write the whole thing, but I wasn’t interested in going that route. I’m more of a policy person, while Kyle dives into current trends. He also has his community, the AI Salon, which is very popular with lots of opt-ins. We fed our manuscript into Notebook LM. It provided an interesting summary, but it also generated profound insights we hadn’t written. One example was: “The authors are saying it’s like being given access only to the children’s section of the library, without reading the adult books.” That was exactly the point. Much of human knowledge—especially advanced knowledge—is inaccessible because it’s behind firewalls, paywalls, or hasn’t been digitized. We’re only at the beginning of this journey. Ross : That’s such a compelling metaphor—children’s versus adult sections. There’s so much knowledge that remains untapped because it hasn’t been captured or digitized. It’s an important insight. Kyle : Agreed. One of the things we’ve written about is data collaboratives. Creating shared data lakes is crucial for organizations to think about and act on. Ross : What are some examples of data collaboratives you’ve seen or worked with? Kyle : The concept isn’t new—trade associations are a simple example. They bring together organizations with common interests, enabling them to share best practices without crossing legal boundaries. Large consulting firms also facilitate sharing across industries while respecting confidentiality. AI accelerates this process because it doesn’t care about your industry—it can recognize parallels, analogize, and bring insights to bear faster than ever before. It just needs a prompt to get started. Kevin : What amazes me about AI, particularly transformer architecture, is how it can hoover up enormous amounts of data and derive value with enough compute power. My organization has been around for over a decade. 
If I think about all the knowledge trapped in PowerPoint presentations, sales documents, and more, it’s substantial. We could plop all of it into an AI model and instantly gain insights. Now imagine a Fortune 500 company or a trade association pooling their data. The value trapped in unstructured formats is immense. With just a little organization, they could unlock incredible potential. Kyle : Often, this data sits on individual hard drives, disconnected from the cloud. Gartner predicts that in the next five to seven years, employment agreements will include clauses allowing companies to replicate your work processes and contributions. This will become part of the terms and conditions for employment. Ross : That’s a fascinating point. To wrap up, what’s the generative roadmap for Content Evolution? What’s next for Kevin and Kyle? Kyle : One thing I’m excited about is using the collection of digital twins to explore ideas in unique ways. For instance, if we have a new piece of legislation or an article, we can query the twins for 10 different perspectives—some close to my thinking, others wildly different. We’re now working on a system that allows us to collaborate with people based on how they think and solve problems, rather than just their professional expertise. I can have a brainstorming session with people similar to me or choose those who think completely differently to challenge my ideas. This could even extend to historical figures—where would Aristotle or Steve Jobs sit on that spectrum? That’s what excites me. Kevin : Let me add to that. On Tuesday, Kyle and I had a conversation that ended at 10:55 AM. By noon, Kyle had already prototyped and demoed the idea we discussed. That’s the power of rapid prototyping—there are no bad ideas because you can quickly test them. Another key aspect is transcending limitations like time zones or language barriers. Right now, you can’t always get on someone’s calendar. 
But with digital twins, people can access our knowledge anytime, in their preferred language, and then decide if they need to speak to us directly. This approach transforms business and how we engage with the world. Our challenge is often being so far ahead of the curve that people initially don’t understand what we’re talking about. That’s part of the innovator’s dilemma. But we’re excited to keep pushing forward. Ross : That’s fantastic. We’ll include links to everything in the show notes. Where can people learn more about what you’re doing? Kyle : Visit contentevolution.net . One of the first tools we built there is the Challenge Engine. You input a business challenge, and instead of giving answers, it generates questions to guide your thinking. You can also find us on the GPT Store by searching for “CE Profiles.” Kevin : For those interested in staying updated on this space, I highly recommend Kyle’s AI Salon. It’s a vibrant community discussing AI and its implications. Kyle, where can people find it? Kyle : The URL is thesalon.ai . We host bi-monthly meetings featuring speakers and discussions. The focus is on exploring what we can do with AI now that it’s accessible to everyone—not just engineers and mathematicians. Ross : Great. Thank you so much for your time and insights, Kevin and Kyle. It’s been wonderful hearing about your work. Kyle : Thank you, Ross. It’s been great to be here. Kevin : Absolutely. Thanks, Ross. The post Kevin Clark & Kyle Shannon on collective intelligence, digital twin elicitation, data collaboratives, and the evolution of content (AC Ep70) appeared first on Amplifying Cognition .…
“To me, envisioning a future should involve elements anchored in nature, modern materials, and sustainable practices, challenging Western-centric constructs of ‘futuristic.’ Artisanal intelligence is about understanding material culture, combining traditional craft with modern techniques, and redefining what feels ‘modern.’” – Samar Younes About Samar Younes Samar Younes is a pluridisciplinary hybrid artist and futurist working across art, design, fashion, technology, experiential futures, culture, sustainability and education. She is founder of SAMARITUAL which produces the “Future Ancestors” series, proposing alternative visions for our planet’s next custodians. She has previously worked in senior roles for brands like Coach and Anthropologie and has won numerous awards for her work. LinkedIn: Samar Younes Website: www.samaritual.com University Profile: Samar Younes What you will learn Exploring the intersection of art, AI, and cultural identity Reimagining future aesthetics through artisanal intelligence Blending traditional craftsmanship with digital innovation Challenging Western-centric ideas of “modern” and “futuristic” Using AI to amplify narratives from the Global South Building a sustainable, nature-anchored digital future Embracing imperfection and creativity in the age of AI Episode Resources Silk Road Web3 Metaverse Orientalist AI (Artificial Intelligence) Artisanal Intelligence Dubai Future Forum Neuroaesthetics ChatGPT Runway ML Midjourney Archives of the Future Luma Large Language Model (LLM) Gun Model Transcript Ross Dawson: Samar, it’s awesome to have you on the show. Samar Younes: Thank you so much. Thanks for having me. Ross: So you describe yourself as a pluridisciplinary hybrid artist, futurist, and creative catalyst. That sounds wonderful. What does that mean? What do you do? Samar: What does that mean? It means that I am many layers of the life that I’ve had.
I started my training as an architect and worked as a scenographer and set designer. I’ve always been interested in bringing public art to the masses and fostering social discourse around public art and art in general. I’ve also always been interested in communicating across cultures. Growing up as a child of war in Beirut, among various factions—religious and cultural—it was a diverse city, but it was also a place where knowledge and deep, meaningful discussions were vital to society. Having a mother who was an artist and a father who was a neurologist, I became interested in how the brain and art converge, using art and aesthetics to communicate culture and social change. In my career, I began in brand retail because, at the time, public art narratives and opportunities to create what I wanted were limited. So I used brand experiences—store design, window displays, art installations, and sensory storytelling—as channels to engage people. As the world shifted more towards digital, I led brands visually, aiming to bridge digital and physical sensory frameworks. But as Web3, the metaverse, and other digital realms emerged, I found that while exciting, they lacked the artisanal textures and layers that were important to me. Working across mediums—architecture, fashion, design, food—I saw artificial intelligence as akin to working with one’s hands, very similar to what artisans do. That’s how I got into AI, as a challenge to amplify narratives from the Global South, reclaiming aesthetics from my roots. Ross: Fascinating. I’d love to dig into something specific you mentioned: AI as artisanal. What does that mean in practice if you’re using AI as a tool for creativity? Samar: Often, when people use AI, specifically generative AI with prompts or images, they don’t realize the role of craftsmanship or the knowledge of craft required to create something that resonates. 
Much digital imagery has a clinical, dystopian aesthetic, often cold and disconnected from nature or biomorphic elements, which are part of the world crafted by hand. To me, envisioning a future should involve elements anchored in nature, modern materials, and sustainable practices, challenging Western-centric constructs of “futuristic.” Ancient civilizations, like Egypt’s with the pyramids, exemplify timeless modernity. Similarly, the Global South has always been avant-garde in subversion and disruption, but this gets re-appropriated in Western narratives. Artisanal intelligence is about understanding material culture, combining traditional craft with modern techniques, and redefining what feels “modern.” Ross: Right. AI offers a broad palette, not just in styles from history but also potentially in areas like material science and philosophy. It supports a pluridisciplinary approach, assisted by the diversity of AI training data. Samar: Exactly. When I think of AI, I see data sets as materials, not just images. If data is a medium, I’m not interested in recreating a Picasso. I see each data set as a material, like paint on a palette—acrylic, oil, charcoal—with the AI system as my brush. Creating something unique requires understanding composition, culture, and global practices, then weaving them together into a new, personal perspective. Ross: One key theme in your work is merging multiple cultural and generational frames using technology. How does technology enable this? Samar: Many AI tools are biased and problematic. When I tried an exercise creating a “Hello Kitty” version in different cultural stereotypes, I found disturbing, inaccurate, or even racist results, especially for Global South or Middle Eastern cultures. To me, cultures are fluid and connected, shaped by historical nomadism rather than nationalistic borders. My concept of the “future ancestor” explores sustainability and intergenerational, transcultural constructs.
Cultures have always been fluid and adaptable, but modern consumerism and digital borders often force rigid identity constructs. In prompting AI, I describe culture fluidly, resisting prescribed stereotypes to create atypical, nuanced representations. Ross: Agreed. We’re digital nomads today, traveling and exploring in new ways. But AI training data is often Western-biased, so artists can’t rely on defaults without reinforcing these biases. Samar: The artist’s role is to subvert and hack the system. If you don’t have resources to train your own model, I believe there’s power in collectively hacking existing models by feeding them new, corrective data. The more people create diverse data, the more it influences these systems. Understanding how to manipulate AI systems to your needs helps shape their evolution. Ross: Technology is advancing so quickly, transforming art, expression, and identity. What do you see as the implications of this acceleration? Samar: I see two scenarios: one dystopian, one more constructive. Ideally, technology fosters nurturing, empathetic futures, which requires slower, thoughtful development. The current speed, however, is driven by profit and the extractive aims of industrialization—manipulating human needs for profit or even exploiting people without compensation. This dystopia is evident in algorithmic manipulation and censorship. I wish the acceleration focused on health and well-being rather than extractive technologies. We should prioritize technologies that support work-life balance, health, and sustainable futures over those driven by profit. Ross: Shifting gears, can you share more specifics on tools you use or projects you’re working on? Samar: Sure. I use several tools like Claude, ChatGPT, Runway ML for animations, and Midjourney for visuals. I have an archive of 50,000+ images I’ve created, nurturing them over time, blending them across tools.
Building a unique perspective is key—everyone has a distinct point of view rooted in their cultural and personal experiences. Recent projects include my “Future Ancestor” project and a piece called “Future Custodian,” which I co-wrote with futurist Geraldine Warri. It’s a speculative narrative about a tribe called the “KALEI Tribe,” where fashion serves as a tool of healing and self-expression. Ross: What’s the process behind creating these? Samar: The “KALEI Tribe” is a speculative piece set in 2034, where nomadic survival uses fashion as self-expression and well-being. Fashion is reframed as healing and sustainable, rather than for fast consumption. We explore a future where we co-exist with sentient beings beyond humans. This concept emerged from my archive and AI-created imagery, blending perspectives with Geraldine Warri for Spur Magazine in Japan. I also recently did a food experience project that didn’t directly use AI but engaged with artisanal intelligence. It imagined ancestral foods, blending speculative thinking with our senses, rewilding how we think of food. Ross: That’s brilliant—rewilding ourselves and pushing against domestication. Samar: Exactly. The industrial era pushed repetition and perfection, taming our humanity’s wild, playful side. I hope to use AI to rewild our imaginations, embracing imperfections, chaos, and organic unpredictability. The system’s flaws inspire me, adding a serendipitous quality, much like working with hands-on materials like clay or fabric, where outcomes aren’t perfectly predictable. Ross: Wonderful insights. Where can people find out more about your work? Samar: They can visit my website at samaritual.com, where I share workshops and sessions. I’m also active on Instagram (@samaritual) and LinkedIn. Ross: All links are in the show notes. Thanks for such inspiring, insightful work. Samar: Thank you so much for having me. Hopefully, we’ll meet soon.
The post Samar Younes on pluridisciplinary art, AI as artisanal intelligence, future ancestors, and nomadic culture (AC Ep69) appeared first on Amplifying Cognition.