Conversation with Paul Quinnett, Founder/CEO of QPR

I talked yesterday with Paul Quinnett, Ph.D., founder and CEO of the QPR Institute. He has been working in the field of suicide prevention for decades and has developed an excellent set of tools for clinicians. I enjoyed the conversation because Dr. Quinnett is bright, experienced, and passionate about his work, and also because of the conceptual overlaps I noticed as we talked. Here are a few from my notes:

Technology Transfer. Dr. Quinnett’s interest is technology transfer, i.e. taking what is known from the literature and clinical experience and giving it legs for the working clinician and healthcare system. This is the primary thrust of my evolving work as well. I also have an interest in finding the most efficient and effective pedagogical methods for transferring information. This is where my interest in mapping and other forms of visual representation comes in (see my previous mapping posts). This topic was also part of what interested me when I heard Wendi Cross speak (see my post reflecting on Organizational factors that support care of suicidal person).


Family involvement. I’ve posted several times (see Where’s the Family?, and At the crossroads of family therapy and suicide prevention) about the conundrum that family involvement presents for suicide risk assessment: we don’t have good models for talking about suicide with family members present, we don’t have clear ideas about how to incorporate families in the assessment process, AND in many cases it is impossible to imagine performing a worthwhile assessment and management plan without family input. Dr. Quinnett has been working on this very issue from two interesting perspectives. The first is what he called “the cost of data collection”; that is, he is curious about how clinicians perceive the cost of collecting information from third parties. The second is that he is developing a protocol of the key questions and information one should ask of and gather from family members, to guide clinicians in their interviews. Dr. Quinnett has been working on this with Sergio Perez Barrero, MD, a psychiatrist in Cuba who founded the Suicidology Section of the World Psychiatric Association and also the World Suicidology Net. Dr. Perez Barrero is a QPR trainer who has translated the materials into Spanish.


Drawing on experience in other fields that do risk assessment. In a previous post (Reflecting on Intersections with Knowledge Management, Dave Snowden, and Singapore’s Risk Assessment and Horizon Scanning System), I shared my reactions to Dave Snowden’s work on detecting terrorist threats. Dr. Quinnett was struck in a similar way by Gavin deBecker’s work in threat assessment. I had not heard of deBecker, but apparently his California firm, Gavin deBecker and Associates, works with high-profile clients (including Hollywood celebrities) to analyze potential threats to their safety. He has written a book called “The Gift of Fear,” which I plan to read on Dr. Quinnett’s recommendation.

Along similar lines, I have consulted with a forensic psychologist and friend, Daniel Murrie, Ph.D., who co-authored a book (with Mary Alice Conroy) coming out this fall about assessment of risk for violence, “Forensic Assessment of Violence Risk: A Guide for Risk Assessment and Risk Management.” This book, which I’ve seen excerpts of, presents an approach to violence risk assessment that is clear and accessible to clinicians yet retains the richness and clinical complexity appropriate to the challenging work of predicting an individual’s risk of being violent. The approach that Conroy and Murrie take has potential applicability to suicide risk assessment, for which we’ve never quite had such a clear model for conducting and writing assessments.

I guess the intersection here is the potential for developments in threat and violence prediction to inform our efforts to improve detection of suicide risk.

Desire to understand the clinician’s state of mind when faced with risk assessment. I have noted before (see my post on Visual maps and guides in high stress situations) that I’m interested in what cognitive science can tell us about how people best access information for decision making in high-arousal situations. Similarly, Dr. Quinnett mentioned that he would like to test clinicians’ perceptions about information gathering in risk assessment. What kind of cost/benefit appraisals do they make about asking questions and gathering collateral information?

In my view, the clinician’s state of mind/emotion and cognitive heuristics are underappreciated in most approaches to training about suicide risk. As I noted in my post about clinician anxiety (Clinician anxiety–what’s it about?), what we believe about the most pressing concerns for clinicians will influence what and how we teach. Likewise, understanding how clinicians learn best is important for modes of dissemination (for example, see my post on How clinicians learn: Web 2.0 Opportunities?).

Summary: “Needs Development.” This is another post I’ll tag “needs development” because much of it raises more questions than it answers. But reflecting on these conceptual intersections helps me see how much is not known about how to approach training in suicide risk assessment. Really, there is a “basic science” set of questions about learning and the clinician’s mind that gets skipped over when we do the necessary and important work of evaluating educational interventions (which, of course, we don’t do enough of either!).

Example of risk map

In a comment on my previous post about visual presentation for clinical training in risk assessment, Avi of GUI Yourself requested an example. Here is a .pdf of a map I use. The details are collapsed, but you can get the idea. I also teach using a map of the options available to clinicians in our system. I am working to customize that map for each service area I train in (with the aim of influencing implementation and transfer, as discussed in this post).

Treatment teams as "Communities of Practice"


Still thinking about the intersection of clinical practice, risk assessment, knowledge management (KM), and Dave Snowden, which I blogged about yesterday.


In the KM world, what mental health clinicians call a "treatment team" could be considered a Community of Practice. There are many definitions of this term, and treatment teams fit some better than others. But Dave Snowden is clear in the videotaped discussion I pointed to yesterday that one of the failures of contemporary knowledge management is the inability to promote fruitful communities of practice. Snowden argues that organizations make the mistake of trying to organize communities of learning and practice using language, structures, and concepts that are not "naturalistic." That is, we ignore the processes by which people naturally come together to form knowledge-sharing communities and either over-organize (imposing a hierarchy and structure that we think will promote good functioning but ultimately stifles innovation) or wrongly organize (bringing people together around a concept or structure that does not promote natural affinity).


Our organization is considering a redesign of our ambulatory service into diagnosis-based treatment teams (e.g. a "comorbid depression team"). The aim is to have well-functioning teams that promote evidence-based practice. As I listened to Dave Snowden talk about communities of practice, I couldn't help but think that organizing this way has hints of the kind of non-naturalistic grouping that Snowden warns against. The intent is good and the organizational principle makes sense on the surface, but grouping clinicians by the DSM diagnosis of their patients has the potential to be structure-rich, story-poor, and human-factor-ignoring.

One of the principal reasons stated for considering organizing by diagnosis is that the research literature about effective treatments is organized by diagnosis. It is an "evidence-based" decision. However, evidence associated with one particular epistemology (categorical psychopathology) is privileged, while evidence from other epistemologies, like cognitive science, organizational behavior, human factors research, and systems theory, is ignored. Just because treatment studies organize patients into diagnostic groups doesn't mean that human clinicians will work most effectively with the human problems and stories we see by grouping ourselves by our patients' Axis I diagnoses.

What would be a more narrative-rich way of organizing ourselves? Well, if we think of a theoretical paradigm as a narrative, then perhaps that could be a starting point. Or perhaps we could give people the freedom to organize themselves into natural groupings. Or maybe there is a way of listening to clinicians' and patients' stories about themselves and seeing trends and themes that we don't now see. It would take time and a new set of methods, ones more akin to what Snowden promotes, to discover these themes. But we're spending time and energy either way. I'm not sure, to be honest, but I think the principle Snowden promotes is a good one: don't impose a community of practice based on a predetermined epistemology, especially one that is reductionist and devoid of narrative...rather, look at how productive human networks form naturally and spend your time and energy discerning the conditions in which they can develop.

Reflecting on Intersections with Knowledge Management, Dave Snowden, and Singapore’s Risk Assessment and Horizon Scanning System

Warning: This post starts out a bit far afield from clinical work. I have ideas about how it ultimately connects back, but they're still forming, so this is definitely a "put on your seatbelt" kind of post.

For some time, I have been following the work and blog of Dave Snowden, founder of Cognitive Edge. Snowden is a scientist, theorist, and organizational consultant at the cutting edge of the Knowledge Management (KM) field. Or perhaps it would be more accurate to say that Snowden is a pioneer and visionary who is trying to push KM to an entirely different dimension (call it KM 2.0). I must admit that I am still trying to get a handle on Snowden's thinking (it's broader and more complex than I can yet grasp), but one of the most interesting things to me about his work is that he emphasizes a narrative (versus purely numerical) approach to "sensemaking." Snowden and others of his ilk argue that you can learn more useful information, detect more weak signals, and capture trends earlier by gathering stories than by gathering numbers. Stories show emerging trends; numbers tell you what has already happened. (For a popular version of this argument see Lori Silverman's provocatively titled book "Wake me up when the data are over: How Organizations Use Stories to Drive Results.")

Snowden and another KM guru, Gary Klein, were recently videotaped discussing the methodology (and software) that the Government of Singapore has developed to help detect terrorist risk, the Risk Assessment and Horizon Scanning (RAHS) system. I found their videotaped discussion fascinating, especially Snowden's critique of the failures of knowledge management (2nd clip on the page). I don't know enough to understand the differences between the perspectives Klein and Snowden offer (and, in fact, can't follow all of what either one says), but I listened with great interest to their perspectives on how one approaches information-gathering, sensemaking, and decision-making in an uncertain, unpredictable, and unstable environment.

Obviously, clinical sensemaking and decision-making are quite different from government counter-terrorism operations. But I could not help but think of parallels, especially for assessment of suicide risk. Here are a few developing (and somewhat random) reflections I had:

  1. We know about statistical risk factors, but how do we do sensemaking with a particular person's set of stories? Clinicians have access to rich narratives, but we generally lack methodologies and technologies for sensemaking that retain complexity and guide decision making.

  2. Traditional documentation (the principal knowledge management system for clinical care), including diagnostic evaluation reports, usually flattens the richness of stories (by design) into language that is more technical, linear, and sterile than real life. We usually don't capture stories on their own or track raw data; rather, we move quickly to interpretation and synthesis.

  3. I noted in a previous post that I use mindmapping to teach about suicide risk. In that post, I suggested one benefit might be that "it helps to be able to visualize connections between concepts on a map because it makes complex material more accessible." In light of what I'm learning from Snowden and KM, I wonder whether mindmapping also facilitates sensemaking from narratives because it is nonlinear and attempts to replicate the connections in human thought patterns.

  4. Apropos of my previous post, Where's the family?...family therapy offers an opportunity to gather anecdotes from multiple perspectives. Snowden has a KM exercise called "Anecdote Circles," which he uses to help organizations gather information through story. It would be interesting to apply the techniques he uses to a family, and to gathering information from family members about suicide risk. This kind of raw data is not available without family members.

  5. Our models and language around risk assessment need to better reflect how fluid and unstable the phenomena of risk and suicidality really are. The act of suicide is a momentary coalescing of a multitude of snippets, anecdotes, and narratives. Reading retrospective case studies of people who died by suicide makes that really clear--all of what we categorize as "risk" comes together in a certain way at a certain point in time. As one of my mentors pointed out to me last week, we can "predict" suicide retrospectively, but it is almost impossible to detect prospectively. As clinicians we want to be sensitive to the snippets, so that we can scan the horizon (a la RAHS) and sense emerging trends long before the data ever catch up.


As I warned in the beginning, these thoughts are pretty raw, but I'm interested in exploring this intersection more.