Uveitis: Fundamentals and Clinical Practice, 4th edition (Nussenblatt, Whitcup, 2010)

Part 2 Diagnosis

Chapter 5 Diagnostic Testing

Scott M. Whitcup
Key concepts

Mistakes in ordering and interpreting diagnostic tests can lead to misdiagnosis and inappropriate therapy.

Diagnostic tests should be ordered to narrow down the differential diagnosis.

Clinicians must know the sensitivity and specificity of the diagnostic test to avoid misinterpretation of the results.

A few diagnostic tests are both highly sensitive and highly specific and may therefore be useful as screening tests for patients with many forms of uveitis. The FTA-ABS test for syphilis is an example of a diagnostic test often used for general screening of patients with uveitis.

Assessing the likelihood of disease before the diagnostic test is crucial in determining the likelihood of disease after either a positive or a negative diagnostic test.

Tests including fluorescein angiography and optical coherence tomography are helpful in assessing response to therapy.

Some diagnostic tests, such as bone mineral density studies, help to limit side effects of therapy and are now part of the standard care of patients on systemic anti-inflammatory therapy.

What diagnostic tests should you order in the evaluation of the patient with uveitis? This is one of the most difficult questions we are asked. It is clear, however, that a nonselective approach to testing is costly and inefficient and provides information that is often irrelevant or, worse yet, that may lead to an incorrect diagnosis and inappropriate therapy. It is important to understand how to interpret diagnostic data because this information will help the clinician to order the appropriate tests.

Why does the clinician order diagnostic tests? Usually diagnostic tests are ordered to aid in making the correct diagnosis. Unfortunately, many clinicians are overly influenced when positive or negative results for a diagnostic test come back from the laboratory. A clinical example will serve to illustrate this point. A 34-year-old African-American woman from Texas presents with an intermediate uveitis in both eyes that has been present for the past 7 months. There is no history of rash, arthritis, or fever, but the patient does complain of wheezing and shortness of breath on exertion.


The ophthalmologist orders a battery of diagnostic tests, including a serologic test for Lyme disease that has a positive result. Of course, the ophthalmologist is ecstatic in diagnosing the patient’s condition and treats her with a 2-week course of ceftriaxone. There are only three problems with this scenario: the patient probably does not have Lyme disease, did not need the expensive 2-week course of intravenous antibiotics, and more likely has sarcoidosis that is not being treated!

Before one can appropriately interpret the results of a diagnostic test, three pieces of information are needed. First, one needs to know the sensitivity of the diagnostic test (Fig. 5-1). This is calculated by dividing the number of patients who actually have the disease and who on testing have a positive result, by the total number of patients with the disease who are tested. Another name given to patients with a disease who have a positive test result is true positives: they have a positive test result and actually have the disease. Patients who have the disease but who have a negative test result are called false negatives. Many of the commonly used serologic tests for Lyme disease have a sensitivity of 90%. What does that mean? It means that if 100 patients with Lyme disease were tested, 90 would have a positive result (true positives), but 10 would have a negative result (false negatives). Furthermore, many diagnostic tests have varying sensitivities based on the stage of the disease. For example, Lyme serologies are less sensitive during the acute stage of the disease.

The second piece of information you need to have to interpret a diagnostic test result is the specificity (Fig. 5-1). The specificity of a diagnostic test is calculated by dividing the number of patients who do not have the disease in question and who have had an appropriately negative test result, by the total number of people without the disease who are tested. People who do not have the disease and who have a negative test result are called true negatives. Similarly, people who do not have the disease but who have a positive test result anyway are called false positives. In the case of the serologic test for Lyme disease, the specificity is also 90%. This means that if 100 patients without Lyme disease take this test, 90 will have an appropriately negative result, but 10 will have a misleading positive result!
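The two definitions above are easy to check with a short calculation. The counts below are the text's own hypothetical example: 100 patients with Lyme disease and 100 without, tested with a serology that is 90% sensitive and 90% specific (the variable names are mine, chosen to mirror Fig. 5-1):

```python
# Sensitivity and specificity from a 2x2 table of test results,
# using the cell labels of Fig. 5-1: a, b, c, d.
true_positives = 90    # disease present, test positive (a)
false_positives = 10   # disease absent, test positive (b)
false_negatives = 10   # disease present, test negative (c)
true_negatives = 90    # disease absent, test negative (d)

# Sensitivity = a / (a + c): fraction of diseased patients the test catches.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity = d / (b + d): fraction of healthy patients correctly cleared.
specificity = true_negatives / (true_negatives + false_positives)

print(sensitivity)  # 0.9
print(specificity)  # 0.9
```

With these counts both figures come out to 90%, matching the Lyme serology described in the text.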

Pretest likelihood of disease

The third and critical piece of information needed for test interpretation is often ignored by many doctors. This piece of information is called the pretest likelihood of the disease and is defined as the chance that the patient has a particular disease before the diagnostic test is ordered. The pretest likelihood can be based on a number of factors, such as the patient's history and physical examination and the incidence of a particular disease in that area. This is the figure that most depends on the clinician's prowess and ability: the more accurate the physician's calculation of the pretest likelihood of disease, the more accurate the subsequent interpretation of the test result will be.

                    Disease present         Disease absent
Test positive       True positives (a)      False positives (b)
Test negative       False negatives (c)     True negatives (d)

Figure 5-1. Sensitivity and specificity of diagnostic tests. Sensitivity = a/(a + c). Specificity = d/(b + d).

What is the pretest likelihood of Lyme disease in the case of the 34-year-old woman from San Antonio with intermediate uveitis who has no other symptoms and signs of Lyme disease and who does not live in an area endemic for the disease? The prevalence of Lyme disease in San Antonio, Texas, is probably less than 1 in 1000, and with no other evidence of the disease the pretest likelihood of the disease would probably be less than this. But let us be generous and say that the pretest likelihood of this patient having Lyme disease is 1 in 1000 or 0.1%. How do we interpret her positive test result for Lyme disease?

The likelihood that the diagnosis of Lyme disease is correct in this patient can be calculated because we now have the sensitivity of the test (90%), the specificity of the test (90%), and the pretest likelihood of the disease (0.1%). This calculation of what is called the post-test likelihood of disease is carried out with the use of a formula derived by the mathematician Bayes and is called Bayes’ theorem. The standard form of Bayes’ theorem states the following:

Post-test probability = (pretest probability × sensitivity) / [(pretest probability × sensitivity) + (1 − pretest probability) × (1 − specificity)]

Bayes’ theorem has been understood for two centuries but has only been applied to clinical reasoning over the past 30 years.1–5 Although formulas may appear daunting to some clinicians, computer programs and nomograms have been developed to help the clinician interpret the data.6,7 So what is the likelihood that our patient has Lyme disease, given her positive laboratory test result? With Bayes’ theorem the chance that she has Lyme disease is still only 0.9%, or a chance of 9 out of 1000! Although this represents an almost 10-fold increase in likelihood compared with the pretest likelihood, because there was a very small chance that she had Lyme disease before the test, she still probably does not have the disease. Knowing that the post-test likelihood of the patient having Lyme disease is less than 1%, the clinician probably would not opt to treat her with antibiotics.
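The calculation for this patient can be reproduced directly from the formula. The function below is my own minimal sketch of Bayes' theorem for a positive test result, using the chapter's numbers (0.1% pretest likelihood, 90% sensitivity, 90% specificity):

```python
def post_test_probability(pretest, sensitivity, specificity):
    """Post-test probability of disease after a POSITIVE result (Bayes' theorem)."""
    true_pos = pretest * sensitivity            # pretest probability x sensitivity
    false_pos = (1 - pretest) * (1 - specificity)  # healthy patients testing positive
    return true_pos / (true_pos + false_pos)

# The 34-year-old patient from San Antonio: pretest likelihood 0.1%,
# Lyme serology 90% sensitive and 90% specific.
p = post_test_probability(0.001, 0.90, 0.90)
print(round(p, 4))  # 0.0089 -- about 0.9%, still well under 1% despite the positive test
```

The positive result raises the probability almost 10-fold, yet the patient still almost certainly does not have Lyme disease, exactly as the text concludes.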

Diagnostic tests are also not as useful if there is a very strong likelihood that a patient has the disease before the test is ordered. If this same patient came from Lyme, Connecticut, had a history of a tick bite followed by an erythematous, round rash, and now presented with an intermediate uveitis and arthritis, even without testing she would probably have a greater than 99% chance of having the disease. Even if the result of her serologic test for Lyme disease was negative, after applying Bayes' theorem the patient would still have about a 99% chance of having the disease!

[Figure 5-2 appears here: an ROC curve plotting sensitivity (0.00 to 1.00) against 1 − specificity (0.00 to 1.00), with points for requiring a minimum of one, two, three, four, or five ocular features.]

Figure 5-2. Receiver operating characteristic (ROC) curve for the number of ocular features required to make a diagnosis of ocular sarcoidosis. The area under the ROC curve is greatest (0.84) when a minimum of two ocular features is required to make the diagnosis, with a sensitivity of 84.0% and a specificity of 83.0%. (From Asukata Y, Ishihara M, Hasumi Y, et al. Guidelines for the diagnosis of ocular sarcoidosis. Ocul Immunol Inflamm 2008; 16: 77–81, with permission.)

Diagnostic tests are most helpful when the pretest likelihood of the disease is about 50%. For our patient with intermediate uveitis, if after our initial assessment we thought that her chance of having Lyme disease was 50%, a positive serologic test result would increase the post-test likelihood of the disease to 90%. So in this case, we start with a 50 : 50 chance of Lyme disease but end up with Lyme disease being by far the most likely diagnosis.
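This dependence on the pretest likelihood can be tabulated by sweeping Bayes' theorem over a range of starting probabilities. The sketch below uses the 90%/90% test characteristics carried through the chapter; the specific pretest values other than 0.1% and 50% are mine, chosen for illustration:

```python
def post_test(pretest, sens=0.90, spec=0.90):
    # Post-test probability after a positive result (Bayes' theorem).
    tp = pretest * sens
    fp = (1 - pretest) * (1 - spec)
    return tp / (tp + fp)

for pre in (0.001, 0.10, 0.50, 0.90):
    print(f"pretest {pre:.1%} -> post-test {post_test(pre):.1%}")
# A positive result moves a 50% pretest likelihood to 90%,
# but moves a 0.1% pretest likelihood to only about 0.9%.
```

The table makes the chapter's point concrete: the test changes management most when the clinician starts near 50:50, and changes it least at either extreme.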

Receiver operating characteristic (ROC) curve

Many diagnostic tests involve establishing a numerical cut-off, above which a patient is felt to have a 'positive' test and hence is more likely to have the disease. Where you set that cut-off affects the sensitivity and specificity of the test and determines the number of false positive and false negative test results. Unless a test is 100% sensitive and 100% specific, the more sensitive it is, the more likely you are to get false positives. The sensitivity of a test can be graphed against 1 − specificity to obtain what is called the receiver operating characteristic (ROC) curve (Fig. 5-2). The performance of a diagnostic test can be quantified by calculating the area under the ROC curve. Importantly, the ability of two continuous variables to diagnose a disease can be compared by examining the two ROC curves and the areas under them, and determining whether the difference is statistically significant.8,9 If so, the test with the greater area under the ROC curve may be more discriminating.
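Tracing an ROC curve amounts to sweeping the cut-off and recording (1 − specificity, sensitivity) at each threshold. The sketch below is illustrative only: the scores (number of ocular features per patient) and disease labels are invented, not data from the cited study, and in practice a library routine such as scikit-learn's sklearn.metrics.roc_curve would be used instead:

```python
def roc_points(scores, labels):
    """Return (1 - specificity, sensitivity) pairs for every cut-off.

    scores: the numerical test value for each patient.
    labels: True if the patient actually has the disease.
    """
    pos = sum(labels)             # patients with the disease
    neg = len(labels) - pos       # patients without the disease
    points = []
    for cut in sorted(set(scores), reverse=True):
        # Call the test 'positive' when the score is at or above the cut-off.
        tp = sum(1 for s, d in zip(scores, labels) if d and s >= cut)
        fp = sum(1 for s, d in zip(scores, labels) if not d and s >= cut)
        points.append((fp / neg, tp / pos))  # (1 - specificity, sensitivity)
    return points

# Hypothetical data: feature counts for 5 diseased and 5 non-diseased patients.
scores = [5, 4, 4, 3, 2, 2, 1, 1, 0, 0]
labels = [True, True, True, True, True, False, False, False, False, False]
print(roc_points(scores, labels))
```

Lowering the cut-off moves along the curve toward higher sensitivity at the cost of more false positives, which is exactly the trade-off the ROC curve visualizes.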
