No test for detecting cancer is 100 per cent accurate, but best practice should be able to counteract any uncertainty, write Cathal Walsh, Alan Kelly and Shane Allwright
Considerable anxiety has been aroused recently about missed cases of breast cancer.
Much of the public commentary has been about trying to identify what went wrong in the system, who knew about it, and how they may be held accountable. But there is another angle we should consider.
As pointed out in this paper's Medical Matters column and on the letters page, the tests that we have in this area are imperfect.
It can be extremely difficult to distinguish between "normal" and "abnormal" images and this is reflected in the observed error rates.
Indeed, a recent paper in the Journal of the National Cancer Institute indicates that when properly carried out, this test will pick up the disease in only about four out of five patients with cancer.
To someone outside of the field, this level of ascertainment may seem low. Indeed, our experience with teaching senior Trinity medical students reveals that this comes as a surprise to them also. However, it is typical of tests of this nature.
Thus, a "positive" or "negative" test result can only be treated as uncertain information about the true underlying disease status. What is critical is that the test result is used carefully when deciding how to manage a particular patient.
As scientists, we spend much of our time dealing with uncertainty. In a clinical setting, physicians deal with this by incorporating the test information with their own expertise and personal knowledge of the patient. Given the result of a diagnostic test, they can then, together with the patient, decide on the next stage of treatment.
Using the data presented in the article by Miglioretti et al in the Journal of the National Cancer Institute, routine calculations can help inform our decisions. If someone tests negative on a mammogram, there is an almost 99 per cent chance that they do not have cancer. This is reassuring, but not the absolute certainty we would like.
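For readers who want to see where a figure like this comes from, the calculation is a direct application of Bayes' theorem. In the sketch below, the sensitivity of four out of five is taken from the article; the specificity and the prevalence among women attending a symptomatic clinic are round illustrative assumptions, not figures from the Miglioretti et al paper.

```python
# Illustrative sketch of the negative predictive value calculation.
# Sensitivity (0.80, i.e. "four out of five") comes from the article;
# specificity and prevalence are assumed round numbers for illustration.

def negative_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a person does NOT have the disease,
    given a negative test result."""
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

npv = negative_predictive_value(sensitivity=0.80,   # from the article
                                specificity=0.90,   # assumed
                                prevalence=0.05)    # assumed
print(f"Chance of no cancer after a negative test: {npv:.1%}")
```

With these assumed inputs the answer comes out just under 99 per cent, which is the kind of figure the article describes: reassuring, but short of certainty.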
Without invasive procedures we can rarely be 100 per cent sure that someone has not got cancer. But diagnostic tests provide us with a tool for highlighting the most suspicious cases. And even if someone with breast symptoms tests negative, they will continue to be monitored for new symptoms or changes.
In considering how to improve diagnostic services, there needs to be an understanding that we will never have perfection. There are two areas that deserve our attention.
The first is that women going to breast clinics with health concerns should be provided with additional information about testing, access to services and input into decision-making about their own care.
The second is that we need to deploy resources in the system on the basis of evidence, rather than on the basis of speculation about what may have happened or could have happened.
Collating information about the number of tests carried out throughout the State, and the current detection rates at each centre, will be much more useful than trying to blame any particular individual for any possible error.
A sense of optimism about what we can achieve and a realism about what we cannot will help us work towards improving healthcare in an ever more demanding world.
Almost all diagnostic tests are subject to natural variability. There are differences between individuals, differences in the conditions that prevailed during the time that the test was being conducted and differences in the environment in which the test is being "read". This natural variability (think of it as introducing "noise") has the effect of degrading the "signal" that is being measured in a laboratory or clinical setting.
Thus when designing a testing procedure, we have to take this "noise" into consideration. This is done by calibrating the test so that it is able to give as accurate a result as possible. This accuracy can be estimated by examining how often it gives the "correct" result in a number of situations.
In particular, we can estimate how often the test returns a positive when an individual in fact has the disease; or how often it returns a negative when the individual does not have the disease.
We can then quantify our uncertainty by saying something about the chances that an individual has the disease, given the result of the test. This is referred to as the predictive value of a positive (or negative) test and is the crucial information that should inform a clinician's decision whether to treat.
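The same arithmetic explains why a positive result, on its own, is weaker evidence than many people expect: when a disease is rare in the tested population, most positives are false positives, even for a reasonably accurate test. The numbers below are illustrative assumptions chosen to show the effect, not estimates from any study cited here.

```python
# Sketch of the predictive value of a positive test. All three inputs
# are illustrative assumptions: a test with 80% sensitivity and 90%
# specificity, applied to a population where 1% have the disease.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a person HAS the disease,
    given a positive test result."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(sensitivity=0.80,
                                specificity=0.90,
                                prevalence=0.01)
print(f"Chance of cancer after a positive test: {ppv:.1%}")
```

Under these assumptions only about one positive result in thirteen reflects true disease, which is why a positive test is treated as a flag for further investigation rather than a diagnosis.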
These features of diagnostic tests are understood by many health professionals but can be difficult to communicate to a broader audience. Uncertainty is inevitable in this area. The best we can do is to choose the optimal treatment strategy on the test evidence that we have obtained. Regrettably, cases are missed. However, by better understanding the uncertainties, it is possible for doctors and patients to make better treatment decisions.
If someone tests positive, further investigation is required to make sure that these "suspicious" symptoms are investigated fully. If someone has tested negative, it is very reassuring, although any change in their symptoms will need to be monitored closely.
As the director of cancer control, Prof Tom Keane, told The Irish Times last month, "even the best quality cancer testing will never be 100 per cent accurate . . . No technology is 100 per cent accurate - it is a subjective science."
However, best practice, usually involving multidisciplinary approaches, ensures that the best possible decisions are taken about how to proceed in the face of uncertain information.
Prof Shane Allwright is associate professor of epidemiology and Dr Alan Kelly is a senior lecturer in biostatistics at the Department of Public Health and Primary Care, Trinity College Dublin. Dr Cathal Walsh is a lecturer in statistics at the Department of Statistics, TCD.