In this penultimate article in his series on method comparisons, Stephen MacDonald moves on from difference analysis and the contribution of Bland and Altman to consider qualitative methods and also the role of McNemar, Yates and Cohen.
The last three articles have been quite number heavy, with lots of graphs, as would be expected for the investigation of quantitative assays. Qualitative assays are both similar to and different from their quantitative counterparts. At their simplest, they categorise patients into normal/abnormal, diseased/non-diseased, positive/negative and a host of other binary classifications.
Classification is based on transforming a numerical measurement into a binary outcome using a predetermined cut-off. Consequently, analysing method comparison studies is somewhat different from what we have seen in the last few articles. Here we define accuracy as how well the classifications agree. Whether that classification reflects diagnostic accuracy (how well an assay can identify a disease process or not) or simply agreement between methods (when the disease state is not known) depends on the assay being compared.
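To make this concrete, the following is a minimal sketch in Python of how paired quantitative results from two methods might be dichotomised at a cut-off and cross-tabulated into a 2x2 table. The cut-off value and the results are entirely hypothetical and chosen purely for illustration.

```python
# Minimal sketch: dichotomising quantitative results at a cut-off and
# cross-tabulating two methods. The cut-off and values are illustrative only.

CUT_OFF = 0.5  # hypothetical decision threshold

# Paired quantitative results from the candidate and comparator methods
candidate = [0.31, 0.62, 0.55, 0.12, 0.81, 0.47, 0.90, 0.05]
comparator = [0.28, 0.70, 0.49, 0.10, 0.77, 0.52, 0.88, 0.07]

def classify(values, cut_off=CUT_OFF):
    """Transform numerical measurements into positive/negative calls."""
    return ["positive" if v >= cut_off else "negative" for v in values]

cand_calls = classify(candidate)
comp_calls = classify(comparator)

# Build the 2x2 contingency table of agreement between the two methods
table = {("positive", "positive"): 0, ("positive", "negative"): 0,
         ("negative", "positive"): 0, ("negative", "negative"): 0}
for a, b in zip(cand_calls, comp_calls):
    table[(a, b)] += 1

print(table)
```

All the statistics discussed in this article start from a table of this kind; only the interpretation changes depending on whether one axis represents a reference diagnosis or simply a second method.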
Diagnostic accuracy comparisons involve testing our potential method against what is considered to be the state of the art for diagnosing a condition. This is not limited to laboratory assay results and includes clinical assessment and the results of other tests not performed in our laboratories, such as imaging. In cases where those data are not available, or we are simply comparing the performance of two methods, a method comparison is performed. If the diagnostic criteria are unknown, only certain metrics can be reported; instead, studies will produce measures that reflect the degree of agreement between methods.
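As a preview of the statistics named at the start of this article, here is a minimal sketch of two common agreement measures computed from a 2x2 table of two methods' positive/negative calls: Cohen's kappa and McNemar's test with Yates' continuity correction. The counts are hypothetical and the code is intended only to show the arithmetic, not a definitive implementation.

```python
# Minimal sketch of agreement statistics from a 2x2 table of two methods'
# positive/negative calls. The counts below are hypothetical.

import math

# a = both positive, b = A positive / B negative,
# c = A negative / B positive, d = both negative
a, b = 40, 6
c, d = 4, 50
n = a + b + c + d

# Overall percentage agreement
observed_agreement = (a + d) / n

# Cohen's kappa: agreement corrected for chance agreement
expected_agreement = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

# McNemar's test on the discordant pairs (b and c),
# with Yates' continuity correction
chi2 = (abs(b - c) - 1) ** 2 / (b + c)
p_value = math.erfc(math.sqrt(chi2 / 2))  # chi-square with 1 df

print(f"agreement = {observed_agreement:.3f}, kappa = {kappa:.3f}")
print(f"McNemar chi-square = {chi2:.3f}, p = {p_value:.3f}")
```

Note that kappa describes how well the two methods agree beyond chance, whereas McNemar's test asks whether the disagreements are systematically biased towards one method; both are discussed later in the article.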