Do We Need a Blood Test for Depression?
Question:
"Why doesn’t psychiatry use the serial levels of BDNF and the cytokines as strong markers for the diagnosis of depression and in the assessment of the effectiveness of treatment as it is done for cardiac enzymes in the diagnosis of MI?"
I am glad you asked this question because it actually touches upon what I consider to be the central conceptual shift we need to make in our understanding of psychiatric disease, in general, and major depression, in particular. As a start to answering this, let’s ask another question: Why would we want to get serial levels of bodily chemicals, such as BDNF (brain-derived neurotrophic factor) or cytokines? If you think about it for a while, I suspect you will agree that the only reason to go through all the trouble would be because measuring the chemicals would tell you something you can’t “see with the naked eye.” What sort of questions would be important for any blood test for depression to answer? In general, three things: 1) Does the patient have a certain illness; 2) Is there a specific treatment that would be best suited to this particular patient; and 3) What is this patient’s prognosis—what can she or he expect from the disease?
Let’s start with the first question. The hope that someday we’ll have a blood test for depression is one of the longest-standing fantasies of our field. I want to suggest that it is profoundly misguided because it really misunderstands the type of entity depression is. Can you cut depression out of a person like a tumor and show it to me? Of course not. It is a probabilistic syndrome. It is a chronic tendency for a person to experience dark emotions, loss of interest, and any of a number of related cognitive and neurovegetative symptoms. There is no depression underlying the symptoms: it is the symptoms. If a person has the symptoms, they are depressed by definition. If this is so, why would you need a blood test to confirm what you can see with your own eyes? Suppose a person came to your office weeping, filled with pathological guilt, beset by a host of neurovegetative symptoms, and anxious to kill himself or herself to end the misery. If there was a blood test for depression and it was negative, would you send the patient away untreated?
What if a person had an abnormal depression test, but was perfectly normal in terms of mood and behavior? Would you treat them? Do you see that this is really the only situation in which a blood test would be useful—to show an underlying risk for developing depression in someone who is normal? How accurate would such a test have to be to be useful? What if normal people with a value above a certain cut-off for BDNF or a cytokine had a 20% chance of getting depressed in the next two years? Would you pre-treat them? What if they had a 50% chance? Such a test would be statistically hugely powerful, but might still not be accurate enough for clinical use.
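To see why even a strongly predictive marker could fall short clinically, it helps to make the arithmetic concrete. The numbers below are entirely hypothetical (no such BDNF or cytokine cut-off exists); the sketch simply shows how a low base rate dilutes the clinical usefulness of a test that is statistically quite powerful:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(will develop illness | positive test),
    computed via Bayes' rule from test characteristics and base rate."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: suppose 5% of screened, currently-normal people
# will develop depression within two years, and a marker cut-off has 80%
# sensitivity and 83% specificity -- strong for a single blood value.
baseline_risk = 0.05
risk_if_positive = ppv(0.80, 0.832, 0.05)

print(f"risk given a positive test: {risk_if_positive:.0%}")
print(f"relative risk vs. baseline: {risk_if_positive / baseline_risk:.1f}x")
```

Under these made-up assumptions, a positive result raises a person’s risk roughly fourfold, to about 20%. That is a large statistical effect, yet four out of five people flagged by such a test would never become depressed, which is exactly the dilemma that pre-treating on the basis of a blood value would pose.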
Per your question, the reason cardiac enzymes are so useful for diagnosing a myocardial infarction (MI) is because you can’t see an MI, so you need some other way of identifying it, and because the enzymes are both sensitive and specific: they go up reliably when heart muscle has been damaged, and not much else makes them go up. Note also that the term “myocardial infarction” describes a state of observable tissue damage, not a group of symptoms. If depression similarly defined a simple form of brain damage, we might be better able to find such a simple test. However, depression is complicated in multiple ways. First, what we call depression is likely to reflect a whole passel of subtly different physical disorders. Second, in any given person, multiple pathways that interact in complicated ways are likely to be subtly abnormal, making it almost impossible that any given abnormality will provide a test proof positive of anything.
That’s the first important question we’d want a blood test to answer. The second question invokes the hope that some type of physical measure will tell us ahead of time what specific types of treatment a given patient will respond to. This is not something that can be seen “with the naked eye,” so obviously any such test would be hugely useful. Not surprisingly, therefore, many people have been searching for ways to predict individual response patterns, a pursuit fashionably called “individualized medicine.” Approaches that have been explored include quantitative EEG, genetics, and neuroimaging. Several studies suggest that patients who fail SSRIs have higher levels of proinflammatory cytokines in their blood than do patients who respond,1 which raises the possibility that these chemicals might provide guidance for predicting treatment response, but this possibility has never been directly tested. In general, results for predictive methodologies that have been examined have not been very promising. In my opinion, it is unlikely that measuring any single chemical (or multiple chemicals) in the blood will ever provide an accurate enough picture of how to treat any given patient to be clinically useful, but I’m a pessimist and I might be wrong.
Finally, a blood test for depression would be extremely useful if it could tell us what was likely to happen to particular patients in the future over and above what we can predict from symptoms and disease course to date. For example, wouldn’t it be useful to have a test that could take two patients who had both responded to an antidepressant and identify one who will do well long-term and the other who will relapse quickly despite appearing identical symptomatically to the first patient? As it turns out, there is a biological test that has shown significant promise in this regard, but it is not a simple blood test. The combined dexamethasone/corticotropin-releasing hormone (DEX/CRH) test provides a measure of how sensitive the hypothalamic-pituitary-adrenal (HPA) axis is to inhibitory glucocorticoid feedback.
Patients with depression tend to be less sensitive to this inhibitory feedback than normal individuals, a resistance that also underlies the more famous dexamethasone suppression test, which works on the same principle.2 Several studies have shown that patients who are resistant to cortisol, and who do not become sensitive following antidepressant treatment, are much more likely than others to relapse, regardless of any symptomatic improvement.3,4 There are a couple of caveats. First, the association is not one-to-one, so the test is not an exact measure of risk. Second, it requires a number of hours, multiple blood draws, and the administration of CRH, making it cumbersome. Whether other, simpler blood measures will be shown to have prognostic value following treatment is an open question. In general, treatment corrects (or improves) biochemical abnormalities when it works, but we know significantly less about whether any particular chemical’s failure to normalize increases the risk for relapse.
In summary, I think that diagnostic blood tests will probably remain in the realm of the unicorns; tests for individualizing treatment are being actively investigated, but face multiple challenges in terms of ever being clinically useful enough to justify their inclusion in our armamentarium. Tests to identify long-term outcomes may, in fact, be the most promising first use for biological tests.
References
- Miller AH, Maletic V, Raison CL. Inflammation and its discontents: the role of cytokines in the pathophysiology of major depression. Biol Psychiatry. 2009;65(9):732-741.
- Raison CL, Miller AH. When not enough is too much: the role of insufficient glucocorticoid signaling in the pathophysiology of stress-related disorders. Am J Psychiatry. 2003;160(9):1554-1565.
- Zobel AW, Nickel T, Sonntag A, et al. Cortisol response in the combined dexamethasone/CRH test as predictor of relapse in patients with remitted depression. A prospective study. J Psychiatr Res. 2001;35(2):83-94.
- Ising M, Horstmann S, Kloiber S, et al. Combined dexamethasone/corticotropin releasing hormone test predicts treatment response in major depression - a potential biomarker? Biol Psychiatry. 2007;62(1):47-54.