Reflecting on Intersections with Knowledge Management, Dave Snowden, and Singapore’s Risk Assessment and Horizon Scanning System

Warning: This post starts out a bit far afield from clinical work. My ideas about how it ultimately connects back are still forming, so this is definitely a "put on your seatbelt" kind of post.

For some time, I have been following the work and blog of Dave Snowden, founder of Cognitive Edge. Snowden is a scientist, theorist, and organizational consultant at the cutting edge of the Knowledge Management (KM) field. Or perhaps it would be more accurate to say that Snowden is a pioneer and visionary who is trying to push KM into an entirely different dimension (call it KM 2.0). I must admit that I am still trying to get a handle on Snowden's thinking (it's broader and more complex than I can yet grasp), but one of the most interesting things to me about his work is his emphasis on a narrative (versus purely numerical) approach to "sensemaking." Snowden and others of his ilk argue that you can learn more useful information, detect more weak signals, and capture trends earlier by gathering stories than by gathering numbers. Stories show emerging trends; numbers tell you what has already happened. (For a popular version of this argument, see Lori Silverman's provocatively titled book "Wake Me Up When the Data Is Over: How Organizations Use Stories to Drive Results.")

Snowden and another KM guru, Gary Klein, were recently videotaped discussing the methodology (and software) that the Government of Singapore has developed to help it detect terrorist risk: the Risk Assessment and Horizon Scanning (RAHS) system. I found their videotaped discussion fascinating, especially Snowden's critique of the failures of knowledge management (the second clip on the page). I don't know enough to understand the differences between the perspectives Klein and Snowden offer (and, in fact, can't follow everything either one says), but I listened with great interest to their perspectives on how one approaches information-gathering, sensemaking, and decision-making in an uncertain, unpredictable, and unstable environment.

Obviously, clinical sensemaking and decision-making are quite different from government counter-terrorism operations. But I could not help thinking of parallels, especially for the assessment of suicide risk. Here are a few developing (and somewhat random) reflections:

  1. We know about statistical risk factors, but how do we make sense of a particular person's set of stories? Clinicians have access to rich narratives, but we generally lack methodologies and technologies for sensemaking that retain complexity and guide decision-making.

  2. Traditional documentation (the principal knowledge management system for clinical care), including diagnostic evaluation reports, usually flattens the richness of stories (by design) into language that is more technical, linear, and sterile than real life. We usually don't capture stories on their own terms or track raw data; instead, we move quickly to interpretation and synthesis.

  3. I noted in a previous post that I use mindmapping to teach about suicide risk. In that post, I suggested one benefit might be that "it helps to be able to visualize connections between concepts on a map because it makes complex material more accessible." In light of what I'm learning from Snowden and KM, I wonder whether mindmapping also facilitates sensemaking from narratives because it is nonlinear and attempts to replicate the connections in human thought patterns (see the rough sketch after this list).

  4. Apropos of my previous post, "Where's the family?": family therapy offers an opportunity to gather anecdotes from multiple perspectives. Snowden has a KM exercise called "Anecdote Circles," which he uses to help organizations gather information through story. It would be interesting to apply his techniques to a family, gathering information from family members about suicide risk. This kind of raw data is not available without family members.

  5. Our models and language around risk assessment need to better reflect how fluid and unstable the phenomena of risk and suicidality really are. The act of suicide is a momentary coalescing of a multitude of snippets, anecdotes, and narratives. Reading retrospective case studies of people who died by suicide makes that clear: everything we categorize as "risk" comes together in a certain way at a certain point in time. As one of my mentors pointed out to me last week, we can "predict" suicide retrospectively, but it is almost impossible to detect prospectively. As clinicians, we want to be sensitive to the snippets so that we can scan the horizon (a la RAHS) and sense emerging trends, far before the data ever catches up.
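To make point 3 slightly more concrete, here is a minimal sketch in Python of what it might look like to keep story fragments as nodes in a nonlinear, mindmap-like graph instead of flattening them into a linear report. To be clear: this is a hypothetical illustration of the data structure, not a clinical tool, and not anything Snowden or RAHS actually uses; all snippet text and labels are invented.

```python
# A hypothetical sketch: story fragments as nodes in a nonlinear graph,
# the way a mindmap keeps them, rather than a single linear narrative.
# All snippets and labels below are invented for illustration.
from collections import defaultdict

# Raw fragments, kept in the tellers' own words instead of being
# immediately interpreted and synthesized.
snippets = {
    "s1": "Patient: 'I stopped answering my sister's calls last month.'",
    "s2": "Sister: 'He gave away his guitar, which he loved.'",
    "s3": "Patient: 'I haven't slept well since the layoff.'",
}

# Labeled connections between fragments. Unlike a linear note, any
# fragment can link to any other, and a fragment can sit in several
# clusters at once.
edges = [
    ("s1", "s2", "withdrawal / giving away possessions"),
    ("s1", "s3", "onset around the same period"),
]

# Build an adjacency view so we can walk outward from any fragment.
neighbors = defaultdict(list)
for a, b, label in edges:
    neighbors[a].append((b, label))
    neighbors[b].append((a, label))

# Surface everything connected to one fragment: a crude analogue of
# noticing weak signals that cluster together.
for other, label in neighbors["s1"]:
    print(f"{snippets['s1']}\n  <-> {snippets[other]}  [{label}]")
```

The only point of the sketch is that the raw stories and the connections between them are both kept as first-class data; nothing is forced into a single storyline the way a written evaluation is.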


As I warned at the beginning, these thoughts are pretty raw, but I'm interested in exploring this intersection further.