Dave Snowden
Duration: 49 min
Published: November 19, 2020

Transcript

[00:00:15] So, if I had to give a title to this talk, it would come from the front cover of the new Cynefin book, which has just come out and is available on Amazon. It's a quote from John Seely Brown, rather unfortunate given the Musk quote in the last presentation: we need to shift from enlightenment to entanglement. If you look at the general attitude to things like global warming, or the political crises in the UK and the US formed around populism (Brazil as well; there are three countries having issues with that), the assumption of Enlightenment rationality, that it's just a matter of being reasonable and sitting around a table, has been shattered over the last few decades. The reality is that everything is now entangled. So one of the questions is: how do we manage entanglement? That's one of the things I want to talk about today.

The basis of all of this comes from a general approach called naturalizing sense-making, now recognized as one of five distinct schools of sense-making. The others come from information theory and from cybernetics, people like Ashby and so on. Then we get Karl Weick, who comes from an American sociological tradition; Brenda Dervin, from library science; Gary Klein, from the human cognitive sciences; and myself within the naturalizing group. All of these have overlaps and differences, but there are two key things to understand about naturalizing sense-making. One is the definition of sense-making: how do we make sense of the world so that we can act in it? With that definition comes an issue about sufficiency. It's not that I can never know everything I need to know; it's how do I know I know enough to make a decision? That's quite important, because we have to make decisions under conditions of increasing uncertainty, those decisions can be highly risky, and we can't know everything we need to know. So one key question is: how do we know that we know enough? I'll come on to that with adaptive decision making later. It's a very action-orientated approach. "Naturalizing" comes from philosophy, and it means to root philosophy in the natural sciences, not the social sciences.

Now, this is a different approach. The vast majority of management methods you see have taken an inductive or case-based approach, so-called empirical. Somebody will go and study 10 or 20 companies, selected because those companies do something: they may have rapid delivery, they may have long-term profitability; there are all sorts of criteria by which they're chosen. They're then investigated with a mixture of methods, often interviews, questionnaires, whatever. From those, factors are identified which appear to be in common, and then you get this wonderful conclusion which says: all these 10 companies are successful on this criterion, they all do X, therefore if you do X, you too will be successful. In real science, that's called confusing correlation with causation. The fact that (I think it's 85% of) American international organizations have chief executive officers who play golf doesn't mean you should replace management education with golf classes. Mind you, with many business schools it would be just as good. But the principle is fundamentally flawed.
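To make the correlation trap concrete, here is a minimal Python sketch with invented numbers: two series that merely share a time trend (stand-ins for chocolate consumption and Nobel counts) correlate strongly despite having no causal link at all.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2020)

# Two quantities that both drift upward for unrelated reasons
# (invented stand-ins for chocolate consumption and Nobel prizes per head).
chocolate = 2.0 + 0.05 * (years - 1970) + rng.normal(0, 0.2, years.size)
nobels = 1.0 + 0.03 * (years - 1970) + rng.normal(0, 0.2, years.size)

# A shared trend alone yields a striking correlation coefficient,
# with no causal relationship anywhere in sight.
r = np.corrcoef(chocolate, nobels)[0, 1]
print(f"correlation: {r:.2f}")  # typically 0.9 or higher
```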
So you've got a correlation there, but not causation. The one I really like is that if any country wants to increase the number of Nobel prizes it wins, all it needs to do is increase dark chocolate consumption, because the consumption of dark chocolate and Nobel prizes per head of population directly correlate over five decades. And there's actually more data behind that than behind most management methods. So this danger of confusing correlation and causation is quite high.

That's the first of three dangers I'm going to go through. The second is something called inattentional blindness, and I'll give you the best example of it. You give radiologists a batch of X-rays and ask them to look for anomalies. This is something they do all the time: radiologists on average have 15 to 20 years of experience, they're very well trained, and X-rays are a limited data set. On the final X-ray, you put a picture of a gorilla in plain sight, 48 times the size of the average cancer nodule. On average, 83% of radiologists will not see the gorilla, even though it's physically there. And the 17% who do see the gorilla tend to believe they were wrong when they talk with the others. This is called inattentional blindness, and it's not something you can avoid; it's part of what it is to be human. The reason for it is that when you make a decision, you basically scan about 4 to 5% of the available data you could take into account. That's the maximum; most of the time it's less. Interestingly, the scan varies with your childhood language; it's an object-versus-context difference which we think comes from language. But basically, for most people listening to this today, you're going to scan about 3 to 4, maybe 5% of what you see. That will trigger a series of memories, both in your brain and in your body, the physical aspect of consciousness. You blend those memories together, and they may be memories of things you did, things you've read, things you've been trained on, things other people told you; it's messy. You blend them very quickly (it's called conceptual blending), and that gives you an action. And you do this on the basis of the first pattern you find which fits the partial data scan; that's what you apply. So it's a first-fit pattern match, not a best-fit pattern match; there's a sketch of the difference below. In evolutionary terms, you can see why this happens. Think about the early hominids on the savannahs of Africa: something large with very sharp teeth runs towards them at high speed. Do you want to holistically scan all available data, consult the catalog of the flora and fauna of the African veldt, identify "lion", then look at best-practice case studies on how to avoid being eaten? By that time, the only material of any use to you will be the Book of Jonah from the Old Testament, which is the only document I've found that claims to be the survival record of somebody previously consumed by a large carnivore. So our evolutionary history encourages us to make a very quick decision based on a partial data scan and our most recent experiences. That's innate to the way we are; we can't stop it. If you think about it, that invalidates systems analysis, and it also invalidates most of the interview-based approaches to understanding what's going on in an organization. And it's made worse by two factors.
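Here is a minimal sketch of that first-fit versus best-fit distinction, using string similarity as a crude stand-in for human pattern matching; the memories, cue, and threshold are all invented for illustration.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

memories = [
    "large cat, sharp teeth, savannah",
    "dog chasing ball",
    "large rock, sharp edges",
]

def first_fit(cue: str, patterns: list[str], threshold: float = 0.5) -> str:
    # Take the first stored pattern that fits well enough: fast and frugal,
    # which is what a partial data scan under time pressure demands.
    for p in patterns:
        if similarity(cue, p) >= threshold:
            return p
    return "no match"

def best_fit(cue: str, patterns: list[str]) -> str:
    # Score every pattern and take the maximum: slower, and it requires
    # scanning all the data, which evolution did not give us time for.
    return max(patterns, key=lambda p: similarity(cue, p))

cue = "large animal, sharp teeth, running fast"
print(first_fit(cue, memories))
print(best_fit(cue, memories))
```

The two can return the same answer, as here; the point is that first-fit stops at the first adequate match rather than paying the cost of an exhaustive comparison.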
The first is that the way people remember things changes after the event. I ran a whole series of experiments on this in IBM, where we got people to do a lessons-learned process the day before they knew whether they'd won a major sale or not. Then we ran the identical lessons-learned process, no variation, the day after they knew whether they'd won or lost. That very simple switch between "we succeeded" and "we failed" completely changed what people recalled from their own history. So when we do lessons learning, which is something we do a lot of work on, we try to capture people's lessons as they happen; we don't do it retrospectively. But of course, the case-based approach to management involves retrospective coherence. It may also involve the wrong people. I remember, again in my IBM knowledge management days, we had all sorts of claims from companies which were very successful, and those claims generally came from people in the management group. When we did field ethnography with the people doing the work in the field, we very rarely found a correspondence between what was happening and what middle and senior management said was happening. It was an unreliable record. So you've got this complex mixture (that's my second reason) of how people recall things, what they recall, what they pay attention to, which means you're very likely to miss things unless you're in a stable position. And that's the third reason: during a period of extreme stability, things are likely to repeat; during a period of extreme instability, repetition is very unlikely, and basing what you do on what worked before is a fundamental mistake. By taking a natural science approach, looking at what's happening in ecosystems, at what we know of human consciousness, at embodied cognition, we can create methods and tools consistent with science which has been replicated. I'm going to talk through some of those, and lead into the work we're currently doing, to be published shortly with the European Union, on how to manage under conditions of chaos and complexity. I'll also talk about the work we're doing on large-scale human terrain mapping, to try and understand unstated attitudes and beliefs. So I'm going to talk about some real-world activity at this point.

But to complete the argument: one thing to understand about change is that the opportunity for change comes cyclically. Take a couple of cases from the IT industry. When I first started to work in computing, it was in the days when IBM was totally dominant. There was a saying then: nobody gets fired for buying IBM. I remember one of the first times I came to Paris, excited about some Bull technology, and my executive said: forget it, we're not going to sell that stuff; it's quite unusual, we know it performs better than IBM's, but nobody gets fired for buying IBM, it's the safe way to go. The reason IBM was in that position is that they were the first mover into the early field of computing. And one of the main reasons they were the most successful first mover is that they did something called radical repurposing: they were at that time the world's experts in punch-card control mechanisms for machines.
So they took that expertise and repurposed it for the early computer languages. I'm of the generation that still remembers walking into a university computer room with a stack of cards, getting a compile error on card three, and reloading the whole deck; you learned to get things right first time. So that sort of control was there. What then happened is that hardware, on which IBM had based its supremacy, started to become a commodity: everybody's quality was more or less the same, so people started to compete on cost. Now, IBM, as the apex predator in the ecosystem, was insulated for a long period of time, so they didn't get the signals early enough, and by the time they got them it was too late. The world had turned: hardware had gone from giving competitive advantage to being a commodity, and competitive advantage was now in software. That's when we get the period of Microsoft dominance; and remember that IBM had actually funded the start of Microsoft and handed over the IP, because they didn't think software was significant. This is what Clayton Christensen called competence-induced failure. IBM didn't fail because they were incompetent; they failed because they were too competent in a paradigm which had shifted into commoditization. And of course the same thing happens in turn to Microsoft. They don't realize that software has become a commodity: you get Linux, you get open source, you get other entrants. Microsoft is again protected for a very long period of time, then all of a sudden you get a catastrophic shift and Microsoft effectively gets displaced, in certain markets, personal computing and that sort of area, by Apple. Everything changes. And interestingly, Microsoft had repurposed software they developed for IBM, and Apple succeeded by repurposing NeXT software, which came with Jobs. So we get these inflection points at which commoditization leads to complacency. That means the energy cost of something new entering the field goes down. Something small and new then enters quickly, and it does so by repurposing an existing capability rather than by creating a novel one; then it in turn becomes complacent, and the whole cycle continues. You can see the same thing in politics, by the way. If you look across the whole of Europe and North America, what neoliberalism has done over the past 40 or 50 years is homogenize politics, so nobody sees any real difference between the main political parties. That means the energy cost of extremism goes down. You can see that in the history of ideas: I can go back to the Weimar Republic, or to how Julius Caesar used populism and the masses to capture political control in Rome. This is a pattern which has existed throughout history. So one consequence is that there's a time when things can change and a time when they can't. Effectively, you're either an apex predator or you're a hyena; there are different strategies you can adopt. And we're currently in a time of extreme change. We don't know what's going to come out of it.
We're actually looking at the moment at a program (I'm happy to talk about this later if anybody's interested) designed to increase network connectivity, to try to reduce the impact of what we think is going to be a mental health epidemic. We're already seeing the early signs now, but it's going to magnify in January and February, when people realize that COVID isn't going to be over for about another year, and that's optimistic. They may also start to realize that this won't be the last disease we see in my lifetime (I'm 66); there are worse things coming. And then we've got the devastating consequences of climate change, where I used to worry about my grandchildren, then about my children; now I worry about myself. So we're entering a period of turbulence at different levels at different times. That is both threat and opportunity, and if we're going to handle it, we've got to do it with radically different tools and methods.

So that leads me on to some of those, and to some of the work we're doing. Some time ago I created the Cynefin framework, which is now in its 21st year. We've published the book, or at least they published it; I didn't know it was being published, I got it by surprise one day. The framework works by identifying different types of system: ordered systems, complex systems, and chaotic systems. One of the key things Cynefin argues is that methods and tools that work in one domain don't work in another, and that some methods are really good at managing the transition between domains. Take Scrum, for example, within the IT industry, which also celebrated, I think today or yesterday, its 25th anniversary. Scrum is brilliant at making complex things complicated so that they scale. It's what we call in Cynefin a liminal technique: it's not good at the truly complex, the truly ambiguous, but it's a brilliant transition technique. So the framework allows us to say: we're in this type of system, we can use these types of tools; we need to move into that type of system, we can use those. It's a way of assembling different things that work in different ways, and of recognizing that one size doesn't fit all. You need to be what's called ontologically aware, aware of the type of system you're in, before you select methods. My objection to a lot of what you see in the field (business process re-engineering, Six Sigma, Blue Ocean Strategy, the learning organization, agile; I've been involved in all of these over the years) is not that any of them are wrong. They're right within boundaries, but they attempt to claim universality, and it's that claim to universality which causes the real problem, because it means they can't be used within context, and they start to fail when used outside the context for which they were created. Process methods, for example, work very well in manufacturing, which is a closed system, but very badly in service, which is an open system. In fact, I'd put a general challenge to those of you involved in software development: we need to break the manufacturing metaphor we use for software. We're not producing product.
We're actually using technology to co-evolve with humans to produce systems which support decision making; that's really what it's about. But that's a sidebar, and I can come back to it later if you want. So that concept is key, and it gave rise to a series of different approaches. The essence of the European Union work on crisis management comes from this: the design centre in Europe adopted Cynefin some time ago and created a whole set of tools and methods around it. Somewhat ironically, given Brexit: blame the English for everything, but this time it's justified. When the COVID crisis hit, we got together very quickly, and we're currently producing a joint publication between my Cynefin Centre, which is the public-good side of Cognitive Edge, and the design centre in Europe, which will be out shortly. What it does is look at how you manage a crisis, and I'm going to go through probably five moves, then talk about something you don't do.

The first and most fundamental move is that you have to stabilize the situation quickly. You're in chaos. Things are random, what the hell's going on, you're in a completely novel situation. Even if you thought it could happen, you didn't really believe it would. It's not that we didn't know: a pandemic had been extensively gamed and prepared for. But nobody really expected it to happen on their watch, so when it came it was a surprise. We then see very different reactions. Take New Zealand, which is a success in this respect: they imposed a radically draconian lockdown very early on. When they did it, we didn't know that was necessarily the right thing to do. But the point is that a radical imposition of lockdown kept more options open than doing what Britain and most European countries did, which was wait to see if they really needed to do it or not. By that time they had far more limited options, and the recovery has been far slower. So your first move is to get yourself breathing space, and that's normally about creating some type of constraint or some type of intervention. Once you've done that, you're in a position to look at the problem from different aspects. The first question you ask is: is there a bunch of experts around here who've been telling us about this situation for years while we ignored them? In COVID, that was fairly evidently epidemiology. Epidemiologists have been warning about a plague for a long time. Now they're relevant.
Now, I want to emphasize this from a politician's point of view: I don't think it was necessarily wrong not to have paid more attention to it up front. If you work at decision levels within large companies or governments, the number of things which might happen far exceeds the capacity of the system to anticipate all of them. You need some general resilience instead, and I'll talk a bit about that in a minute or two. But once it happens, you have to pay attention. So you call the epidemiologists in and say: sorry guys, we know you were right; what do you need, what should we do? And you authorize them to act. That's a switch into what we call the complicated domain: you've got high structure, these people have models, they know what to do, so you let them do it. Your next shift is to identify areas where it's not clear what expertise you should use. For example, in Britain there was a massive conflict between epidemiologists on the one hand and behavioral scientists on the other, and, given the people in that unit in Downing Street, there was an emphasis on the behavioral side, on herd immunity, which any able epidemiologist will tell you was potentially disastrous, particularly with something like COVID, which seems to attack genetic weaknesses; it's not going to work. But there was that conflict. So if I'm a politician, I've got two groups of highly credible people who disagree with each other. That's where we have to use separate methods, and this is where I've done a lot of work over the last 30 or 40 years: how do you ritualize the conflict so it's not personal, and so a decision maker can make a decision? One technique we use a lot there, which we're going to publish shortly, I think in about four or five weeks' time, is called a Triopticon. You get three experts together within a confined space, and they each present their theory; the other two respond; there's no dialogue at that stage. Surrounding those three experts, you then have 20 or 30 people who also have expertise in the area, and you formulate them into groups of three, who represent the three main actors. They go away and have a discussion, and then one person from each group sits in a circle and they discuss what they heard. You repeat that three times, and each time you rotate the spokesperson and change the response. At the end of that, which can run over half a day or over two or three days depending on the urgency, you've looked at the problem from completely different perspectives. Then you mix the groups: if you've got seven groups of three, they become three groups of seven, to produce a synthesis and a series of recommendations. There are other techniques as well, but that's one. The key point is that it's highly ritualized. You're not bringing people into a room to have a discussion, because then the most powerful voice will win out. You're forcing people to listen, and only giving them limited time slots to respond. We have other techniques like Ritual Dissent and trios; I just give the Triopticon as one example. It's been used for five or six years now, in peace and conflict work, and also as a discovery mechanism. So there's a class of situations where you have to ritualize the experts, and that's move number three.
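For readers who want the mechanics, here is a hedged sketch of the group arithmetic just described: rotating the spokesperson each round, then remixing seven groups of three into three groups of seven. The names and structure are illustrative, not the published Triopticon specification.

```python
participants = [f"p{i:02d}" for i in range(21)]
groups = [participants[i:i + 3] for i in range(0, 21, 3)]  # 7 groups of 3

# Each of the three rounds, a different member speaks for the group
# in the central circle, so everyone is forced to listen and to speak.
for round_no in range(3):
    spokespeople = [g[round_no] for g in groups]
    print(f"round {round_no + 1} circle: {spokespeople}")

# Remix for synthesis: member k of every group forms new group k,
# turning 7 groups of 3 into 3 groups of 7.
remixed = [[g[k] for g in groups] for k in range(3)]
for k, g in enumerate(remixed):
    print(f"synthesis group {k + 1}: {g}")
```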
Move number one: act in a way that creates some sort of stability. Move number two: give to the experts what the experts can do. Move number three: identify conflict, ritualize it, and decide what you can do, again on the principle of keeping your options open.
[00:26:00] You then move into the complex area. One of the ways we define complexity is that you have multiple competing hypotheses about what you should do, and you can't resolve which of them is right on an evidential basis within the time frame for decision making. Any executive will tell you they meet that all the time. So instead of trying to resolve it, you identify which of the hypotheses about what you should do are coherent. Coherence is an interesting concept, and I'll give you my favorite illustration. I know that most evolutionary theory is wrong, but it's coherent with the facts; we keep discovering new things. Whereas I know that young Earth creationism is completely incoherent with the facts, so it's not worth pursuing. The test for coherence is really important for moving in uncertainty, because coherent pathways should be explored and incoherent pathways will be a waste of energy. So for every coherent hypothesis or potential pathway, instead of trying to resolve it, I give a small amount of resource to run a rapid-cycle safe-to-fail experiment. But critically, I run those experiments in parallel, not in sequence, because in a complex system everything you do changes what's happening anyway; it's like Heisenberg's uncertainty principle. By running multiple small parallel safe-to-fail experiments, the system itself makes visible what I should actually do. That's move number four.
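A minimal sketch of move number four, with invented payoffs: each coherent hypothesis gets a small, survivable budget, the probes run in parallel rather than in sequence, and the observed effects tell you what to amplify and what to dampen.

```python
import random

random.seed(1)

# Each coherent hypothesis becomes a probe with a budget small enough
# that failure is safe (all names and numbers here are invented).
hypotheses = ["probe-A", "probe-B", "probe-C", "probe-D"]
budget_per_probe = 10_000

# Conceptually parallel: no probe waits on the outcome of another,
# so the system's response emerges from all of them at once.
results = {probe: random.gauss(0, 1) for probe in hypotheses}

for probe, effect in results.items():
    action = "amplify" if effect > 0.5 else "dampen"
    print(f"{probe}: observed effect {effect:+.2f} -> {action}")
```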
[00:27:39] Move number five deals with high levels of uncertainty, where you're not sure you've covered all the hypotheses. Coming back to my radiologists: you've got a funny feeling that there's something out there you're not paying attention to which is significant. This is where we do a thing called MassSense, and the way it works is based on the idea properly known as the wisdom of crowds. I have two favorite examples from that field. The famous one, which everybody knows, is that if you get farmers to guess the weight of a cow (or a steer, if you're American) at a county fair, then the average of all the farmers' guesses is better than the guess of the best individual. Now, there are three things that make that work, and it's important to pay attention to them. The first is that they must be farmers: they need deep tacit knowledge. Any of the big consultancy firms will sell you a team to guess the weight of any mammal you want, and they'll charge you a fortune, but it won't be as good as a bunch of farmers. Secondly, none of the farmers must know what the other farmers have guessed; everybody's guess must be independent of every other. Technically it's a chaotic system, because you've got complete randomness from lack of connectivity. And the third condition is that there mustn't be anything major at stake for them. It needs to be like a game, something they approach in neutral, because then they'll make decisions in a different way. If anybody's read Ender's Game (the book; ignore the film, the film is terrible), you'll know that the reason the kids were able to defeat the aliens is that they thought they were playing a game. They didn't think it was for real, so they would make sacrifices that no human being would normally make. That's the very clever thing in the book, which is completely missed in the film.
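A minimal simulation of the county-fair effect, with made-up numbers: independent, knowledge-grounded guesses average out their individual errors, so the crowd mean is reliably closer than the typical individual. A single lucky guess can still beat it, but you cannot know in advance whose.

```python
import numpy as np

rng = np.random.default_rng(42)
true_weight = 550.0  # kg; the steer's actual weight (invented)

# Each farmer's guess: unbiased and noisy, and crucially independent.
guesses = true_weight + rng.normal(0, 40, size=200)

errors = np.abs(guesses - true_weight)
print(f"crowd average error:   {abs(guesses.mean() - true_weight):.1f} kg")
print(f"typical farmer error:  {np.median(errors):.1f} kg")
print(f"luckiest farmer error: {errors.min():.1f} kg")  # unidentifiable beforehand
```

If the guesses were correlated (each farmer hearing the previous guess), the averaging would no longer cancel the noise, which is why the independence condition matters.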
[00:29:37] But the same principle applies: you want that independence. The other example I like is a US submarine that was lost off the coast of Portugal in the 1960s. Nobody knew where it was. So what they did was give partial data (that's important) to different groups of experts around the world, including Nova Scotia fishermen, who know the Atlantic currents, and they got all of them to estimate where the submarine was. None of them was remotely near it, but the probability distribution across all the guesses put it 600 meters from the actual submarine. There are sound statistical and cognitive-science reasons why this works, and we've done a lot of work on it over the years, including distributed decision making between intelligence agencies in the context of counter-terrorism and that sort of area; that's the sort of problem this work started in. What we do is present a situation to maybe a thousand respondents, and we get them all to interpret it: they add text, or a picture, or voice, or some combination, whatever they feel comfortable with, describing what they think is going on. That's called situational assessment. Then they create a micro-scenario about what they think will happen next. All of that is done independently of everybody else, in isolation. Then they interpret what they've done into what's called high-abstraction metadata.

Now, I need to describe that, and I'll do it in the context of a staff satisfaction survey; this is material we developed over the years and patented. If anybody's done an employee satisfaction survey, this is the easiest way of explaining it. You get a question which says: does your manager consult you on a regular basis, on a scale of zero (not at all) to ten (all the time)? And you know exactly what answer they want. This, by the way, is also the problem with things like opinion polls: people answer in role, they don't answer what they really think. They say what they think the interviewer wants to hear, or what confirms their opinion of themselves; they don't report what they're actually going to do, certainly not when they're going to vote for somebody they know other people don't approve of. So, back to the employee satisfaction survey: if you're in a good mood you give it ten, if you're in a bad mood you give it zero, or, to be statistically clever, eight or two, because you know they're going to eliminate outliers. Either way, you game the system. And if you look at the evidence on this, you'll find heavy skews towards the middle and the extremes, because people game the survey. So we take a different approach. First of all, we don't start with a hypothesis. The hypothesis here is that managers should consult you. I remember when I was in IBM, I asked the head of HR: how am I meant to answer this? I've got managers; sometimes they consult, sometimes they don't, sometimes they shouldn't. And she said: average your experience over the year and stop causing trouble. That, I have to say, did not encourage me. So we don't do that, because you've got a hypothesis and you've got a gameable response. We take a different approach: we go to maybe 10% of the workforce every month and ask, what story would you tell your best friend if they were offered a job in your workplace?
Then we ask people to interpret that story. We normally give them six triangles, and one of the triangles might say: in this story, the manager's behavior was... with the labels altruistic, assertive, analytical: three positive qualities. There's a lot of science behind this. What we're actually doing is creating a cognitive load, because the respondent doesn't know what answer is expected of them. That switches them from what's called autonomic to novelty-receptive processing, or, if you want the popular phrase, from thinking fast to thinking slow, so we go deeper. With six triangles I've got 18 data points, so what I've got is a quantitative data set. And with a thousand respondents, I can take that quantitative data and create some quite interesting maps. So, here's one; I hope it's worked, it worked on test.
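A hedged sketch of the data structure this implies: each triangle is a point with three weights summing to one, so six triangles per respondent flatten into the 18 data points mentioned above. The triad labels here are illustrative, not an actual SenseMaker instrument.

```python
from dataclasses import dataclass

# Hypothetical triad labels for illustration only.
TRIADS = {
    "manager_behaviour": ("altruistic", "assertive", "analytical"),
    # ...five more triads in a real instrument, giving 6 x 3 = 18 points
}

@dataclass
class TriadResponse:
    """One dot in one triangle: barycentric weights over three labels."""
    weights: tuple[float, float, float]

    def __post_init__(self):
        assert abs(sum(self.weights) - 1.0) < 1e-9, "weights must sum to 1"

story = "Text of the micro-narrative the respondent told..."
signification = {
    "manager_behaviour": TriadResponse((0.6, 0.1, 0.3)),
}

# Flatten to a quantitative feature vector: with six triads this becomes
# the 18-dimensional data point per respondent that the talk describes.
features = [w for r in signification.values() for w in r.weights]
print(features)
```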
[00:34:12] Okay, so this is an example of one of the outputs: a map called a fitness landscape, which is derived from Stuart Kauffman's work on evolutionary biology. If I've had a thousand people respond, those different contour lines, those different patterns, represent different groups. For example, I was talking earlier with a Canadian province where we're running the program: green might represent anglophone speakers, purple francophone, blue indigenous communities, and orange government officials. And it's actually quite common that, given the same data and the same interpretative structure, people see things very differently. We use these maps for a variety of purposes. One is to identify outlier groups. Remember the 17% who saw the gorilla? That might be the group labelled Alpha: they're separated from the rest, not in any way homogeneous with it, so they may have seen something other people aren't paying attention to. From a senior executive's point of view, the ability to pulse your whole workforce, find the people who are seeing the situation differently, and then call them in and talk with them is a key aspect of making your organization more resilient, because otherwise that stuff gets filtered out. It's a basic fact that if your attention is brought to an anomaly, you'll pay attention to it; if you just stare at the data, the anomaly won't pick itself out for you. The other thing these maps are used for is change, and we use them for cultural mapping a lot. Say this is a map of culture, and my desired culture is towards the top right-hand side of the frame. I've got those blue guys in Alpha, a long way from where I want them to be in Omega. So instead of launching an idealistic, future-centered program to try to shift them into that space, I identify what's called an adjacent possible: that's the Beta group.
[00:36:17] I then click on that Beta area, see the stories my employees told, and ask: what can we do tomorrow to create more stories like these? And I can look at another area, look at its stories, and say: fewer stories like those. As people move towards Beta, I can move to Gamma and then to Omega; effectively I go through a series of stepping stones. That's key to this sort of work, because what we're doing is mapping, for people, a complex landscape, and the landscape metaphor works for individuals. Interestingly, the density of the contours indicates the strength of feeling: broadly spread clusters will be easy to change; tightly packed ones won't be. So we use the maps for identifying outliers and for driving change mechanisms. That's the key approach, and it's all based on the fundamental principle of complexity, or more specifically of complex adaptive systems, the study of complexity in human beings: what matters is where you are and your direction of travel, not where you want to go. The key difference between a complex system and an ordered system is that an ordered system has linear relationships between cause and effect, so you can predict the future and take an engineering approach. In a complex system there are no linear relationships between cause and effect: the system is dispositional, not causal. What I've just shown you is a dispositional map.
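A rough sketch of how such a map could be built from the quantitative data, assuming the 18-dimensional responses have already been projected to two dimensions; the projection method, positions, and group sizes are all invented. Density peaks stand in for the contour clusters, and a separated peak flags an outlier group like Alpha.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented 2-D positions for 1,000 respondents.
main = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(830, 2))
alpha = rng.normal(loc=[2.5, 2.0], scale=0.2, size=(170, 2))  # the "17%"
points = np.vstack([main, alpha])

# A coarse density grid stands in for the contour map: clusters show up
# as peaks, and a peak separated from the main mass marks an outlier
# group worth calling in and talking to.
hist, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1], bins=12)
peak_cells = np.argwhere(hist > hist.max() * 0.5)
print(f"density peaks at grid cells: {peak_cells.tolist()}")
```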
[00:37:57] So in an ordered system, you can set goals, chart a pathway, and have outcome-based KPIs. In a complex system, you have to measure where you are and start a journey with a sense of direction, but be prepared to change it. And you have what are called vector-based KPIs rather than outcome-based KPIs: a vector KPI is direction and speed of travel per intensity of effort. The other thing those dispositional landscapes do is show you the energy cost of change, and evolution generally tends towards the lowest energy level. So if the change you want requires a very high energy gradient, it ain't going to happen; you have to lower the energy gradient first. This is all about changing the ecosystem so that the things you want are more likely. I'm basically arguing for a shift. For the last 30 or 40 years, starting with cybernetics and systems dynamics, we've gone heavily into an engineering metaphor of the firm, a mechanical metaphor. We now need to switch out of that into an ecological metaphor, which is far more complex and entangled.
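A minimal sketch of a vector-based KPI under those definitions, with invented coordinates: instead of checking whether a target state was reached, you measure speed of travel per unit of effort and alignment with the desired direction.

```python
import numpy as np

# Invented positions on a dispositional landscape, one month apart.
position_last_month = np.array([0.20, 0.35])
position_this_month = np.array([0.28, 0.42])
desired_direction = np.array([1.0, 1.0]) / np.sqrt(2)  # unit vector
effort = 3.0  # invented measure of effort expended this period

movement = position_this_month - position_last_month
speed = np.linalg.norm(movement)
alignment = movement @ desired_direction / speed  # cosine vs desired heading

print(f"speed of travel per effort: {speed / effort:.3f}")
print(f"alignment with direction:   {alignment:.2f}")  # 1.0 = dead on course
```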
[00:39:08] So I've taken you through some principles there. Those maps I talked about we're currently using in a big initiative on climate change, which is on our website at the moment. One of the things we want to do on climate change is measure disposition, because until you get a dispositional change in the wider population, big initiatives like Paris are unlikely to be successful. So we started a project some time ago which captures what we call small notices, the little things people are aware of which are making a difference, because if we get enough of those little things together, we shift the energy gradient, so that people are prepared to make what Eagleton calls radical sacrifices. In order to arrest climate change, we are going to have to make some very tough sacrifices in our collective and personal lifestyles, and at the moment the electoral system doesn't support that, because it's short-term, whereas these decisions are long-term. So what we're looking to do is measure the dispositional state and identify when it's ready to change. Come back to that concept of predator cycles, IBM to Microsoft: the system has to be ready to change in order for change to happen, and we think that's starting to come together. The other thing we're starting, and we'll launch it now, is what's called MassSense: putting propositions up and getting people to generate scenarios. That will be a worldwide program; anybody listening to this is more than welcome to join, and anybody who joins will get access to the results. We're also going to look at other issues, like Black Lives Matter and COVID. We've already done very successful projects in Scandinavia and Eastern Europe on understanding the impact of COVID on people's lives; if anybody wants to see those, or wants to replicate them, we'll happily make them available. Because we can get these micro-narratives, this low-level data, we can look at where there are overlaps and where things are in common.
[00:41:18] So we can capture data through journaling of people's day-to-day lives. We've done pioneering work getting children to act as ethnographers to their own communities, and young employees to act as ethnographers to more experienced employees, so that we can map those states. And instead of asking "what do you think about subject X?", we put those subjects into the triangles with which people signify their stories. My best illustration of that is work we've done on teenage suicide, which is a major problem worldwide at the moment, particularly in impoverished or indigenous communities. There isn't a single causal factor in why people take their own lives; there are no root causes in a complex adaptive system. So you can't say "if I do X, it will produce Y"; it won't. Reducing bullying is a good idea, but it won't necessarily get rid of suicide. Reducing childhood obesity is a good idea, but it won't necessarily get rid of suicide. Reducing poverty is a good idea, but it won't necessarily get rid of suicide, and so on. So what we do is get the kids to journal their experiences, and we put the assumptions about obesity, bullying, and impoverishment into the triangles, the signifiers, and we see where they occur naturally in people's stories, rather than auto-suggesting them to the respondents.
[00:42:37] So that's the approach: we map these underlying patterns. We've currently got a major piece on reconciliation pending, which I'm waiting to hear about, which would involve 500 workshops and all the schools and sports clubs within a nation. Then we'll look at the day-to-day stories, which we can capture en masse at very low cost because of this self-signification process on apps. We'll look at fitness landscapes like the one I just showed you, but for different demographic groups, different age groups, different political groups, and we'll identify where there are overlaps and where there are differences. Where there are overlaps, we'll start to increase connectivity, so that we get more empathy at that point and can reduce the energy cost of people working together rather than being in conflict. Now, that's a very big subject; I'd normally lecture on it for a day, so I'm just giving you a hint of the approach. You go down to a much lower level of granularity, from stated opinions to underlying patterns of belief and underlying patterns of human interaction. You identify what's in common, then you start to take action, oriented to results, to actually create a difference. That's as important for merger and acquisition work in companies as it is for cultural change and all those other things. It all comes together.
[00:44:00] So, one final thing I want to talk about, and then we'll open it up for questions. (I hate speaking where I can't see people; I've no idea whether it's working or not.) It's this: how do you create a resilient organization? I want to make something clear. I define robustness as surviving unchanged, and resilience as surviving, with continuity of identity over time, by changing. The metaphor I use is that a robust system is like a sea wall. It's nice and big and rigid, it's fixed; I can drain the soil on the landward side, plant it, make it productive, all good news. I can walk along the top on a winter's day and watch the storms. Until the day its design conditions are exceeded, at which point it breaks catastrophically, and we'd have been better off if the wall hadn't been there in the first place. Robust systems are very powerful, but when they break, the consequences are catastrophic. An alternative is a salt marsh, an area of land that I let flood, with no walls or dikes. Mangrove swamps are the same thing. These things are constantly mutating and changing; the Camargue in France is a great example. They're not necessarily fertile in the agricultural sense, but they have rich ecosystems. They absorb a huge amount of flood water, and critically, when they release that capacity, they release it gradually; there's no cataclysmic shift. Now, I should say by way of declaration that I'm one of the many people who've been blocked by Taleb for daring to disagree with him. What Taleb calls antifragile is, to me, one type of resilient system, systems that survive through failure. They're one type, but not the only one, and they're one of the most difficult to put into place. There are other ways to do it.
[00:45:56] So if I really want to create resilience in an organization, one of the things I need to do is manage the informal network as much as the formal system. The metaphor I want to use here, the ecosystem metaphor again, is this: in a sophisticated ecosystem you can see plants, and plants have root systems which gather nutrients and water. But if you look through a microscope, you'll see that all of the roots are connected by a type of fungus, and that fungus is in a symbiotic relationship with the plants. It actually connects the plants, and we're not quite sure what the full effect of that is yet. But the fungus spreads: it has 200 times the reach of the plant roots. The fungus feeds water and nutrients to the plant and gets sucrose in return; it's a symbiosis. Now, the fungus is highly entangled; it's difficult to manage, you just can't control it, but it's essential to the health of the plant.
[00:46:57] The equivalent in a modern organization is the informal networks. In the work I originally did on this in IBM, I identified that the ratio between formal and informal was one to 60: for every formal network, there were 60 informal ones. So we developed a whole array of techniques, like social network stimulation and trios, which would generate informal networks so that everybody was within two degrees of separation of everybody else. Because if, after two phone calls, I can find somebody who is trusted, who can spread knowledge and decisions, the organization as a whole is resilient. And if you have a dense informal network which goes across functions, you don't need to worry about silos, because the informal network will manage them for you. But it's like the fungus: invisible, not explicit, yet critical to health. So building informal networks both within and without the organization, and managing them so that people want to work together and have something to reference back to, is a key aspect of creating organizational resilience. The other is team-based control. If you look at a military environment, you'll find that although you have generals and colonels, the fundamental unit is called a crew, in which people's roles are defined and the interaction is between roles. The critical thing about crews is that authority can be delegated: a weapons sergeant in the army actually has authority over a general in respect of that weapon, and there's no problem with that, because it's the role in play. And although you've always got a pilot, who the pilot is can change. So another way you create resilience is to create crew-based leadership functions, in which different roles interact but the people occupying the roles become less important.
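A minimal sketch of the two-degrees-of-separation test just described, on an invented five-person network: a breadth-first search from each person checks whether everyone else is reachable within two "phone calls".

```python
from collections import deque

# Invented informal network: who trusts whom enough to call directly.
network = {
    "ana": {"ben", "caz"},
    "ben": {"ana", "dev"},
    "caz": {"ana", "dev"},
    "dev": {"ben", "caz", "eli"},
    "eli": {"dev"},
}

def within_two_degrees(graph: dict[str, set[str]]) -> bool:
    for start in graph:
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, dist = frontier.popleft()
            if dist == 2:
                continue  # stop expanding past two hops
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, dist + 1))
        if len(seen) < len(graph):  # somebody needs more than two calls
            return False
    return True

print(within_two_degrees(network))  # False here: ana to eli takes 3 hops
```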
[00:48:58] So, what I've tried to do is give you some of the background theory, and talk through the fundamental six-step approach. The sixth step, which I didn't mention: never, ever apply best practice; that's a negative one in crisis management. I've talked about the work we're doing on climate change, the concept of mapping underlying attitudes to see what's similar and what's different, and a vector theory of change. And I've talked about resilience through generating informal networks.
[00:49:27] So, hopefully that was different from things you've heard before, and hopefully it was useful. I hope I didn't speak too quickly; I'm afraid in Wales we tend to speak English faster than the English do. More than open to questions, arguments, or challenges.