The authors reported unexpected findings from three separate studies that compared the efficacy of family and non-family treatments. In brief, they found that family-level outcomes measured after applying non-family treatments didn't just remain static (as they had expected); they actually declined. This relationship is correlational and does not necessarily mean that the treatments in question caused the decline, but the authors argue that the findings are striking enough to raise the question of whether the unintended side effects of psychosocial treatments should be subject to "safety monitoring" along the lines applied to biomedical products. Something like a black box label: "Warning: This treatment manual may be hazardous to your family."
In the discussion section, Szapocznik and Prado hypothesize about the systemic mechanism for the results they found:
"The family is a system that must be viewed as composed of interdependent or interrelated members.... Family members tend to develop habitual patterns of behavior over time such that each individual in the family is accustomed to act in a certain way that in turn elicits specific predictable behaviors from others. One possible hypothesis is that if an individual is changed by an intervention that is design to change individual and not help the family adjust to these changes....the family may be negatively affected...."
Nothing in these studies relates directly to suicide. But I think there are implications for how we think about intervention, especially in light of what I've been reflecting on lately about suicide as a family issue (see posts related to family therapy).
- Need for more systemic work on suicide. With respect to suicide, this article emphasized to me the need for greater conceptual clarity among systems thinkers about suicide in the context of the family system. We need to articulate in what ways suicidality might be a property of the system in which it resides, and the mechanisms by which family relationships might reduce the likelihood of suicide.
- The complexity of defining "evidence-based practice." I've posted before (vis-à-vis the ambulatory redesign aspirations in our department) about my concerns that "evidence-based" can get too narrowly defined. What is evidence-based depends a lot on what evidence you look at, and, more to the point here, on what outcomes are measured in the studies that provide supporting evidence for an intervention. Given the documented importance of family functioning for long-term outcomes of many kinds, perhaps one of the criteria we should consider in evaluating the utility of a given treatment approach is its ability to promote family functioning.
- This relates to suicide because of the ways in which I have heard distressed individuals conceptualize their presenting problem. When people seek help it is usually with a functional outcome in mind, often one that has to do with their relationships. Research studies measure symptom reduction; people care about love, work, and play. In delivering a human service, we should organize ourselves in congruence with human concerns. If we organize ourselves around "reducing depression" we run the risk that our language will become reified in our practice, the result of which could be a less connected stance toward a suicidal individual who sees his relationships, finances, or health as the primary problem, not his "symptoms." As one person I worked with paradoxically stated, "I don't care about feeling better, I just want all of these problems to go away."
Ideas around evidence-based practice are evolving. In our department, a vibrant conversation is underway. Simplistic views of what is evidence-based seem to be disappearing, as everyone realizes that "evidence-based" is a much broader and trickier term than we might like. Ultimately, I suspect that the way out of the dilemmas inherent in the term is for clinicians to collect evidence (in informal and formal ways) about change in their own cases. This kind of internal monitoring process will probably promote effectiveness more than selecting the right branded treatment, which may have aggregate data that allows it to be certified as "evidence based," but which may or may not be helping the particular individual and family we're working with.
Szapocznik, J., & Prado, G. (2007). Negative effects on family functioning from psychosocial treatments: A recommendation for expanded safety monitoring. Journal of Family Psychology, 21, 468-478.
For clinicians assessing and managing suicide risk, the fact that phones installed on a bridge have been used by individuals who went on to live is testimony to just how much ambivalence remains, even in people who have gone very far toward resolved plans and preparatory behavior.
Understanding that ambivalence is key to clinical work with suicidal individuals. When I train clinicians about assessment and response to suicide risk, I often get questions about whether it is useful or even right to assess suicide risk. I'm also asked, "What about people who have good reasons for killing themselves or who rationally decide they want to end their lives?" My answer goes something like this:
Thankfully, for health care professionals there is no practical dilemma here. If you find out about a person's suicidal thinking, then there is some degree of ambivalence. Everyone knows that psychotherapy and primary care are about health...that is, about life. We're not about suicide and death. So if someone is coming to us, at least some small part of them is aligned in that direction. And it's our job to understand that ambivalence and work toward health and life until such time as the ambivalence is resolved in one direction or the other.
That line of thinking can apply to any person, really--not just healthcare professionals. Except in some rare circumstance that you'd have to work hard to construct, the fact that someone is still alive and letting someone know by words or action about suicidality reflects ambivalence.
The fact that people read signs and use phones on bridges also discourages a fatalistic stance on the part of clinicians. We can't simplify the matter by saying "If someone really wants to kill themselves they will, so what's the point of screening or assessing?" That question misses the point. We assess because people don't want to kill themselves. Some just don't see options for life and, under the wrong circumstances (like under the influence of substances or after a particularly deep emotional wound), they overcome their ambivalence just long enough to do the unthinkable. We need to have deep compassion for how much pain that must involve, and nurture the life-embracing side of the ambivalence until the person can see options again.
But such news can provide a useful reminder to review the prototypes and heuristics clinicians have in our heads about suicide. Specifically, we need to resist the temptation to only think or ask about suicide in cases of depression. Although depression is present in a large proportion of people who die by suicide, suicide is by no means synonymous with depression. Anxiety disorders, personality disorders, and psychotic disorders are all associated with risk for suicide. This begins to make sense when you think about suicide often being a response to hopelessness, despair, agitation, and a feeling of being trapped (often with an overlay of substance abuse disinhibiting the person's symptoms and behavior). When put that way, it's not hard to see how chronic intense anxiety could lead to suicidal thinking (or action).
I think this is something many clinicians know, but old prototypes can be stubborn and often get in the way of our accessing what we know. When we refresh our thinking, we can more effectively remember to ask about suicidal ideation in every case, not just when depression is prominent.
Murder-Suicide, Domestic Violence…Common threads in violence against self and others
Suicide turned outward: Times of London Article by Dewey Cornell
Erratum on previous post: Cornell not author, just interviewed
For those who are not aware of SAD PERSONS, it is a 10-item scale that purports to screen for suicide risk. An individual is given one point for each item for which he or she screens positive:
- Sex (male)
- Age less than 19 or greater than 45 years
- Depression (patient admits to depression or decreased concentration, sleep, appetite, and/or libido)
- Previous suicide attempt or psychiatric care
- Excessive alcohol or drug use
- Rational thinking loss: psychosis, organic brain syndrome
- Separated, divorced, or widowed
- Organized plan or serious attempt
- No social support
- Sickness, chronic disease
The word "simple" in headline of this Psychiatric Times article linked above captures what makes the tool sound appealing, especially for the thousands of health care systems that need a quick way to respond to the JCAHO patient safety goal 15 and 15A: "The organization identifies safety risk inherent in its client populations" and "The organization identifies clients at risk for suicide" (see this .pdf for explication of these goals).
From one perspective, there is nothing wrong with using an acronym like this. It can remind clinicians (assuming they can remember what all the letters stand for!) of some of the risk factors and warning signs of suicide. Who can argue with that? However, from a training and clinical perspective, there are a few problems with this approach, especially when the screen is put forward as a scored scale. Let me summarize a few of these. Note that my thinking about some of these concerns is strongly influenced by concerns articulated by my senior (and very brilliant) colleagues in email exchanges we have had about this. I don't claim originality here, just summary:
- The "scale" assigns risk level on the basis of a point system: A score of 1 or 2 points indicates low risk, 3-5 points indicates moderate risk, and 7-10 indicates high risk. This approach works under the assumption that these factors are equally weighted. A separated, 46-year old male with diabetes with no depression would have a higher risk level (score=4, moderate), than 40 year-old married woman with chronic depression, current hopelessness who was just released from a psychiatric hospital after a near-hanging. (score=2, low risk).
- Having a risk "score" creates conditions for clinicians to rely on a number instead of developing an informed clinical formulation of risk.
- The suggestion that risk for suicide can be boiled down to a single number--even for screening purposes--presents a misleading picture of the complexity of the phenomenon and how to think about it as a clinician.
- The evidence gathered in the linked article does not support the alluring headline, "Simple Screen Improves Suicide Risk Assessment." The evidence reported by those who conducted the study was that, after using the computerized screen, the nurses tested showed more knowledge about risk factors for suicide. Of course, knowledge about risk factors is a long way from demonstrated improvement in assessment. Obviously, the physicians who reported the study at APA did not write the headline. The semantic overreach of the headline speaks to the understandable desire to find easy ways of doing hard things.
- Finally, from a training perspective, I find acronyms longer than 3 letters almost impossible to remember! SAD PERSONS is particularly clumsy and, IMHO, a bit forced. "O" stands for "Organized plan or serious attempt," whereas I would probably make plan a "P" if I were trying to remember it--but of course that's already taken by "P" for "Previous." That often ends up being the problem with trying to make these things fit into an acronym. In a way, this gets back to a theme I've been harping on lately in my posts about teaching and training: the need for a basic-science base about how clinicians learn, remember, and use the principles or practices we teach. I'd imagine an expert in human memory could graph the inverse relationship between recall rate and the number of letters in an acronym--add to that the need to recall letters that signify words or concepts with high emotional impact.
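The equal-weighting problem in the first bullet above can be made concrete with a short sketch. This is illustrative code, not a real clinical tool: the item names are my own shorthand, the two patient profiles are the hypothetical examples from the text, and the cutoffs follow the point system as described (the text leaves a score of 6 unassigned; it is treated as high here for completeness).

```python
# Sketch of the SAD PERSONS point system as described above.
# Every item is worth exactly one point -- that equal weighting is the flaw.
ITEMS = {
    "sex_male", "age_under_19_or_over_45", "depression",
    "previous_attempt_or_psychiatric_care", "excessive_alcohol_or_drug_use",
    "rational_thinking_loss", "separated_divorced_widowed",
    "organized_plan_or_serious_attempt", "no_social_support",
    "sickness_chronic_disease",
}

def sad_persons_score(positives):
    """One point per positive item, regardless of clinical severity."""
    return sum(1 for item in positives if item in ITEMS)

def risk_level(score):
    # Cutoffs as described in the text (6 unassigned there; treated as high).
    if score <= 2:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

# Separated 46-year-old male with diabetes, no depression:
patient_a = {"sex_male", "age_under_19_or_over_45",
             "separated_divorced_widowed", "sickness_chronic_disease"}
# 40-year-old married woman, chronic depression, recent near-hanging
# (scored as the text scores her: depression + psychiatric care):
patient_b = {"depression", "previous_attempt_or_psychiatric_care"}

a, b = sad_persons_score(patient_a), sad_persons_score(patient_b)
print(a, risk_level(a))  # 4 moderate
print(b, risk_level(b))  # 2 low
```

The counterintuitive ordering falls straight out of the arithmetic: the patient with the recent near-lethal attempt scores "low" while the medically ill but non-suicidal man scores "moderate," which is exactly why a summed score cannot substitute for a clinical formulation.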
Owner of Chinese Toy Factory Commits Suicide - New York Times.
Technology Transfer. Dr. Quinnett’s interest is technology transfer, i.e., taking what is known from the literature and clinical experience and giving it legs for the working clinician and healthcare system. This is the primary thrust of my evolving work as well. I also have an interest in finding the most efficient and effective pedagogical methods for transferring information. This is where my interest in mapping and other forms of visual representation comes in (see my previous mapping posts). This topic is also part of what interested me when I heard Wendi Cross speak (see my post reflecting on Organizational factors that support care of the suicidal person).
Family involvement. I’ve posted several times (see Where’s the Family?, and At the crossroads of family therapy and suicide prevention) about the conundrum that family involvement presents for suicide risk assessment: we don’t have good models for talking about suicide with family members present, we don’t have clear ideas about how to incorporate families in the assessment process, AND in many cases it is impossible to imagine performing a worthwhile assessment and management plan without family input. Dr. Quinnett has been working on this very issue from two interesting perspectives. The first is what he called “the cost of data collection.” That is, he is curious about how clinicians perceive the cost of collecting information from 3rd parties. The second is that he is developing a protocol of the key questions and information one should ask of and gather from family members to guide clinicians in their interviews. Dr. Quinnett has been working on this with Sergio Perez Barrero, MD, a psychiatrist in Cuba who founded the Suicidology Section of the World Psychiatric Association and also the World Suicidology Net. Dr. Perez Barrero is a QPR trainer who has translated the materials into Spanish.
Drawing on experience in other fields that do risk assessment. In a previous post (Reflecting on Intersections with Knowledge Management, Dave Snowden, and Singapore’s Risk Assessment and Horizon Scanning System), I shared my reactions to Dave Snowden’s work on detecting terrorist threats. Dr. Quinnett was struck in a similar way by Gavin deBecker’s work in threat assessment. I had not heard of deBecker, but apparently his California firm, Gavin deBecker and Associates, works with high-profile clients (including Hollywood celebrities) to analyze potential threats to their safety. He has written a book called “The Gift of Fear,” which I plan to read on Dr. Quinnett’s recommendation.
Along similar lines, I have consulted with a forensic psychologist and friend, Daniel Murrie, Ph.D., who co-authored a book (with Mary Alice Conroy) coming out this fall about assessment of risk for violence, “Forensic Assessment of Violence Risk: A Guide for Risk Assessment and Risk Management.” This book, which I’ve seen excerpts of, presents an approach to assessing risk for violence that is clear and accessible to clinicians while retaining the richness and clinical complexity appropriate to the challenging work of predicting an individual’s risk of being violent. The approach that Conroy and Murrie take has potential applicability for suicide risk assessment, for which we’ve never quite had such a clear model for conducting and writing assessments.
I guess the intersection here relates to seeing potential for developments in threat and violence prediction work to help our efforts to improve detection of suicide risk.
Desire to understand the clinician’s state of mind when faced with risk assessment. I have noted before (see my post on Visual maps and guides in high stress situations) that I’m interested in learning what cognitive science can tell us about how people best access information for decision making in high-arousal situations. Similarly, Dr. Quinnett mentioned that he would like to test clinicians’ perceptions about information gathering in risk assessment. What kind of cost/benefit appraisals do they make about asking questions and gathering collateral info?
In my view, the clinician’s state of mind/emotion and cognitive heuristics are underappreciated in most approaches to training about suicide risk. As I noted in my post about clinician anxiety (Clinician anxiety–what’s it about?), what we believe about the most pressing concerns for clinicians will influence what and how we teach. Likewise, understanding how clinicians learn best is important for modes of dissemination (for example, see my post on How clinicians learn: Web 2.0 Opportunities?).
Summary: “Needs Development.” This is another post I’ll tag “needs development” because much of this raises more questions than it answers. But reflecting on these conceptual intersections helps me to see how much is not known about how to approach training in suicide risk assessment. Really, there is a “basic science” set of questions about learning and the clinician mind that gets skipped over when we do the necessary and important work of evaluating educational interventions (which, of course, we don’t do enough of either!).