The CANS/CANS-MH is a unique measure in that items are not intended to be summed or factored together. Each item represents a potential target of clinical intervention. As such, traditional psychometric indices, including internal consistency and factorial validity, may not be applicable. Nevertheless, studies have examined the psychometrics of CANS ratings, including interrater reliability.
INTERRATER RELIABILITY
The most detailed CANS-MH interrater reliability study (Anderson & Huffine, 2003) examined interrater reliability (intraclass correlations) with 60 randomly selected cases (children aged 7 days to 17.5 years). Over half of all coding differences did not affect the treatment plan (e.g., they were a difference of coding 0 vs. 1, or 2 vs. 3). Reliability was reported as follows:
1. Caseworkers and Researchers: Total Scale (.81), Problem Presentation (.72), Risk Behaviors (.76), Functioning (.85), Care Intensity and Organization (.75), Caregiver Capacity (.75), Strengths (.77).
2. Pairs of Researchers: Total Scale (.85), Problem Presentation (.84), Risk Behaviors (.82), Functioning (.85), Care Intensity and Organization (.77), Caregiver Capacity (.68), Strengths (.84).
3. The table reports reliability between pairs of researchers, as these figures are the most comparable with studies involving other measures.
4. The manual reports that for clinical vignettes, the average reliability across studies is .75. For case reviews or current cases, the average reliability is .85. Details are not given regarding the studies or the statistics used to assess reliability.
5. Rawal, Lyons, MacIntyre, & Hunter (2004) reported interrater reliabilities of .67-.87 across all raters in a study using residential treatment data from four states.
6. Lyons, Griffin, Quintenz, Jenuwine, & Shasha (2003) reported the reliability of the provider-rated CANS-MH as .80 using audit reliability measures.
7. Lyons, MacIntyre, Lee, Carpinello, Zuber, & Fazio (2004) reported weighted interrater reliability across all reviewers and all items as .86.
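The intraclass correlations reported in the studies above can be computed directly from raw rating data. As a minimal illustration (not the procedure used in any of the cited studies, whose exact ICC variants are not specified), the sketch below computes ICC(2,1), the two-way random-effects, single-rater form, from an n_subjects x n_raters matrix; the function name and data layout are assumptions for this example.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an n_subjects x n_raters array of scores (e.g., CANS item
    codes 0-3 assigned by each rater to each case).
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    # Two-way ANOVA sums of squares: subjects (rows), raters (columns), residual
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # mean square, subjects
    msc = ss_cols / (k - 1)                 # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))      # mean square, residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement on four cases yield an ICC of 1.0;
# systematic disagreement drives the coefficient toward (or below) zero.
print(icc_2_1([[0, 0], [1, 1], [2, 2], [3, 3]]))
```

Values in the .75-.86 range, as reported across the studies above, are conventionally read as good to excellent agreement for single-item ratings.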