Cognitive Assessments

Overview

Assessing cognitive functioning in students identified with or suspected of having autism spectrum disorder (ASD) informs the DSM-5 (American Psychiatric Association, 2013) diagnosis in terms of assigning appropriate specifiers. Importantly, cognitive profiles are not “diagnostic” for ASD, but assessment of cognitive abilities and processes is helpful for enhanced understanding of individuals of all ages with the diagnosis (Gerber, 2015; Holdnack, Goldstein, & Drozdick, 2011). Within school-based evaluations, assessment of cognitive functioning can yield valuable data for understanding student strengths and needs, and can offer insight that multidisciplinary teams can use when developing comprehensive instructional programs.

There is no one “typical” cognitive profile of a person with ASD due to the heterogeneity of the diagnosis, yet some studies have identified common patterns. For example, Coolican, Bryson, and Zwaigenbaum (2008) found nonverbal IQ scores to be consistently higher than verbal IQ scores in an ASD sample, with no age effects. Among adolescents and adults with ASD, variability of overall cognitive functioning has been found depending on severity of the diagnosis, particularly with regard to perceptual reasoning and working memory (Holdnack et al., 2011; McGrew, Schrank, & Woodcock, 2007). Processing speed is often lower for persons with ASD (Bardikoff & McGonigle-Chalmers, 2014; Holdnack et al., 2011; Kuriakose, 2014; McGrew et al., 2007). In addition, executive function is impaired in ASD (Demetriou et al., 2018), including in preschoolers with ASD (Smithson et al., 2013).

Parents often ask about long-term outcomes for their children with ASD, and while cognitive ability is related to many long-term outcomes, clinicians should communicate carefully regarding prognosis of cognitive ability to parents of young children with ASD. For example, though nonverbal IQ (NVIQ) has been found to be generally stable for the majority of individuals with ASD whose cognitive ability is in the average range, nonverbal cognitive scores in individuals with ASD may decline between toddlerhood and adulthood, particularly among those whose scores are below 70 (Bishop, Farmer, & Thurm, 2015). In other words, it is not possible for clinicians to accurately predict long-term outcomes for individual children with ASD, though sharing information about current research in this area may be useful for some parents.

Instruments to measure cognitive abilities and processes span all age ranges and levels of cognitive functioning. Within this section of the TARGET manual, various types of measures are summarized. These include not only instruments designed to measure general cognitive ability, but also measures of cognitive processes (i.e., executive functioning, phonological processing, memory, reasoning, etc.). Nonverbal measures, which require no spoken language by the examiner or student, are also included in this section, as they may be appropriate for students demonstrating limited language ability or limited English proficiency. However, use of nonverbal measures alone is likely to yield results that do not best represent child functioning (Aiello, 2013).

Cognition and cognitive ability are complex, and instruments designed to measure the “same” constructs may actually measure nuanced processes and abilities. Examiners are cautioned against assuming direct comparability between instruments (Kuriakose, 2014). In addition, examiners should consider how the data gained from cognitive assessment will be used when selecting instruments; in particular, selecting measures that will enhance understanding of each student and inform individualized instructional decision-making is paramount.

Included within this section of the TARGET is summary information about the following instruments for cognitive ability, functioning, and/or processing:

  • Behavior Rating Inventory of Executive Function- Preschool Version (BRIEF-P)
  • Behavior Rating Inventory of Executive Function- Second Edition (BRIEF-2)
  • Cognitive Assessment System- Second Edition (CAS2)
  • Comprehensive Test of Phonological Processing- Second Edition (CTOPP-2)
  • Detroit Tests of Learning Abilities- Fifth Edition (DTLA-5)
  • Differential Ability Scales- Second Edition (DAS-II)
  • Kaufman Assessment Battery for Children- Second Edition Normative Update (KABC-II NU)
  • Leiter International Performance Scale- Third Edition (Leiter 3)
  • NEPSY-II
  • Reynolds Adaptable Intelligence Test (RAIT)
  • Stanford-Binet Intelligence Scales- Fifth Edition (SB-5)
  • Universal Nonverbal Intelligence Test- Second Edition (UNIT-2)
  • Wechsler Abbreviated Scale of Intelligence- Second Edition (WASI-II)
  • Wechsler Adult Intelligence Scale- Fourth Edition (WAIS-IV)
  • Wechsler Intelligence Scale for Children- Fifth Edition (WISC-V)
  • Wechsler Nonverbal Scale of Ability (WNV)
  • Wechsler Preschool and Primary Scale of Intelligence- Fourth Edition (WPPSI-IV)
  • Wide Range Assessment of Memory and Learning- Second Edition (WRAML-2)
  • Woodcock-Johnson Tests of Cognitive Abilities- Fourth Edition (WJ-IV COG)

The summary of cognitive assessments included in this section is not intended to be all-inclusive. Rather, the assessments were selected based on their prevalence within clinical and academic settings as well as their relevance to children with ASD.

No instruments reviewed within this section were specifically developed to assess cognitive ability or processes in persons with ASD. However, research that informs the selection and/or interpretation of instruments contained in this section with the ASD population is overviewed below.

Much of the research specific to the ASD population with regard to use of cognitive assessment instruments focuses on comparability of scores across tests. The DAS-II has been found to yield significantly higher scores than the WISC-IV, a difference attributed to relative weaknesses in processing speed among the ASD group (Kuriakose, 2014); processing speed is measured on Wechsler tests but not on the DAS-II. Similarly, Bardikoff and McGonigle-Chalmers (2014) underscored the need to isolate timing/processing speed demands when using the WISC-IV in ASD. These researchers compared the KABC-II Nonverbal Index (NVI) to the WISC-IV’s Perceptual Reasoning Index (PRI) and Processing Speed Index (PSI). Although they found no significant group differences on either the WISC-IV PRI or the KABC-II NVI, the ASD group scored significantly lower on the PRI than on the NVI, and also scored significantly lower on the PSI of the WISC-IV. Overall, processing speed is likely to be impacted in persons with ASD, and examiners should consider this when selecting instruments and interpreting assessment results.

In addition, the DAS-II and the Mullen Scales of Early Learning (see the Developmental Assessment section for a summary of this test) are highly correlated, suggesting they measure a similar construct (Farmer, Golden, & Thurm, 2015). Notably, however, even when cognitive test scores are highly correlated at the group level for individuals with ASD, variability among individual children must be considered (i.e., comparability of test scores still cannot be assumed) (Farmer et al., 2015; Kuriakose, 2014). In other words, correlation of test scores does not mean an absence of significant differences in obtained scores. For example, scores of students on the SB-5 and WISC-IV were found to be correlated, yet the obtained FSIQ and VIQ scores differed significantly between the two tests (Baum, Shear, Howe, & Bishop, 2015).
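To make this distinction concrete, the brief sketch below uses invented standard scores (not data from any of the studies cited above) to show how two instruments can correlate almost perfectly while one still runs systematically lower; agreement in rank order does not guarantee agreement in obtained scores.

    # Illustrative only: hypothetical standard scores for eight students on two
    # cognitive measures ("Test A" and "Test B"); Test B tracks Test A closely
    # but runs roughly 8 points lower.
    from statistics import mean

    test_a = [72, 85, 90, 98, 103, 110, 118, 125]
    test_b = [65, 76, 83, 90, 94, 101, 109, 118]

    def pearson_r(x, y):
        # Standard Pearson correlation computed from deviations about the means.
        mx, my = mean(x), mean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    print(f"correlation r = {pearson_r(test_a, test_b):.3f}")               # nearly 1.0
    print(f"mean difference = {mean(test_a) - mean(test_b):.1f} points")    # about 8 points

In this hypothetical case, an examiner looking only at the correlation would conclude the two tests agree, while an individual student’s obtained composite could still differ by several points depending on which instrument was administered.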

Moreover, clinicians should not assume complete comparability of scores obtained on tests from revision to revision. Specifically, Tsatsanis et al. (2003) concluded that the Leiter-R was useful for children with autism but noted that greater clinical success may be achieved using the original Leiter for very low functioning and severely affected children, particularly young children. Also, with regard to assessing cognitive ability among lower functioning individuals (i.e., those with Intellectual Disability [ID]), examiners should be aware that scores on the SB-5, specifically, may be affected by floor effects, leading to erroneously flat profiles (Sansone, Schneider, Bickel, Berry-Kravis, Prescott, & Hessl, 2014). Even in a comparison of two different nonverbal intelligence tests, strong positive correlations were found between scores on the UNIT and the Test of Nonverbal Intelligence- Third Edition (TONI-3; Brown, Sherbenou, & Johnsen, 1997), but performance of children with ASD was nuanced in that they scored better on the TONI-3 (an abstract measure of intelligence) than on the Abstract Reasoning subscale of the UNIT (a measure of both abstract reasoning and real-world knowledge).
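The floor-effect concern can also be pictured with a small hypothetical sketch; the floor value and domain scores below are invented for illustration and are not taken from the SB-5 or any other test manual. When a test cannot produce standard scores below its lowest obtainable value, genuinely different levels of ability collapse to the same minimum and the profile looks artificially flat.

    # Illustrative only: an invented floor of 40 applied to invented "true"
    # ability estimates shows how a test floor can mask real variability.
    FLOOR = 40  # hypothetical lowest obtainable standard score

    true_ability = {"verbal": 28, "nonverbal": 36, "working_memory": 22}
    obtained = {domain: max(score, FLOOR) for domain, score in true_ability.items()}

    print(obtained)  # every domain reports 40, so the profile appears flat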

Relationships between nonverbal measures and traditional cognitive assessment instruments have also been investigated in the ASD population. Scores from nonverbal measures generally tend to be higher than from traditional measures. More specifically, scores from the Leiter-R tend to be significantly higher than SB-5 scores (Grondhuis et al., 2018), particularly among young children (Grondhuis & Mulick, 2013). These authors concluded that the Leiter-R and SB-5 are not equivalent measures of intellectual functioning in children with ASD and using only one could result in misclassification of intellectual ability. Similarly, Aiello (2013) found UNIT scores to be significantly higher (i.e., on average, more than 10 points) than WISC-IV scores. Using both traditional and nonverbal IQ tests might better capture the level of functioning and needed educational supports among children with ASD.

Some researchers have investigated use of neuropsychological measures within the ASD population. With regard to the NEPSY-II, Barron-Linnankoski et al. (2015) found that, compared with typically developing (TD) peers, elementary-age children with high-functioning ASD (HFASD) had higher verbal reasoning skills but performed significantly worse in set-shifting, verbal fluency, and narrative memory. Interestingly, no differences were found in terms of social perception, including both the Theory of Mind and Affect Recognition subtests. Moreover, no correlation was found between FSIQ and impaired neurocognitive functioning on the NEPSY-II in the HFASD group, which “supports the usefulness of the NEPSY-II for different clinical groups” (p. 68).

The NEPSY-II measures multiple cognitive processes, including executive function, in a direct (i.e., clinician-administered) format. However, informant-based measures of executive function (e.g., the BRIEF) have been found to be sensitive for detecting executive control impairments in real-world settings for preschoolers with ASD (Smithson et al., 2013) and also to differentiate between children with ASD and typically developing controls (Demetriou et al., 2018). Such ecologically valid measures may be most appropriate for practice, including within diagnostic and intervention frameworks.

Regarding abbreviated battery cognitive scores, Twomey and colleagues (2018) found that the SB-5’s ABIQ correctly identified overall level of cognitive functioning for approximately 80% of preschoolers with ASD. However, clinicians should use cognitive scores generated on the basis of abbreviated batteries as screeners and not as a substitute for a comprehensive score, as these scores may misrepresent the cognitive ability of some persons with ASD (Coolican et al., 2008), particularly those with lower cognitive functioning (Twomey et al., 2018).

Misconceptions

Myth:

Full-scale IQ is a good description of a student’s cognitive ability.

Reality:

Students with autism typically demonstrate a scattered profile on comprehensive cognitive measures, performing better on tasks involving rote skills than on tasks involving problem solving, conceptual thinking, and social knowledge (Mayes & Calhoun, 2008; Meyer, 2001-2002).

Myth:

If a student has an average IQ, an adaptive behavior measure is unnecessary.

Reality:

Although a student may have an average IQ and even be doing well academically, that does not mean an adaptive behavior measure is unnecessary. Research indicates that many students with autism have deficits in communication, daily living skills, and socialization (Lee & Park, 2007; Myles et al., 2007). Klin and Volkmar (2000) stated that adaptive behavior is a critical area of planning for students with Asperger Syndrome (now referred to as autism spectrum disorder, Level 1) to facilitate transition from the school environment to work and community environments.

Myth:

If a student demonstrates a well-below-average IQ, the student does not have any cognitive skills.

Reality:

A flat profile of skills may indicate difficulty accessing what the student knows, and formal cognitive assessments may not yield valuable information about the student’s current level of functioning and programming needs. In addition, students with autism spectrum disorder may not be able to generalize skills from the classroom setting to the testing environment, or the manner in which the information is being assessed may prohibit the child from demonstrating mastery of skills. For example, if the student has learned to perform a task in one way with a certain prompt and the assessment asks for it in a different way, the student may not be able to demonstrate knowledge of the skill.

Myth:

Formal IQ is more valid than informal data from the classroom.

Reality:

Informal classroom data provide information about how the student functions on a daily basis. Analyzing formal and informal data to determine patterns of skills and learning is a key component of assessment (Hagiwara, 2001-2002). Informal data from the classroom may be more valuable than information gathered in a contrived one-on-one setting when determining programming for a student with autism spectrum disorder.

Myth:

If a student has a high IQ or demonstrates high achievement, he or she should be successful in the general education classroom.

Reality:

Because students with autism spectrum disorder have difficulty with language, communication, and social skills, they may continue to struggle in the general education classroom in activities that involve these skills.