NASP, the National Association of School Psychologists, recently published an article entitled "Four Dyslexia Screening Myths That Cause More Harm Than Good in Preventing Reading Failure and What You Can Do Instead." As many of you involved in dyslexia advocacy know, NASP holds a great deal of influence over school psychologists. At least in the past, NASP likely contributed to school psychologists' reluctance to “say dyslexia” in schools (see the 2007 NASP Position Statement here).

One of the authors of the article has made the preprint available for download here.

Some highlights: “At a time when schools are administering more screening to detect risk for reading failure than at any time in the history of education, it is interesting that legislative mandates are prescribing more reading screening in the name of better identification and treatment of dyslexia.

Given that most schools already conduct reading screening multiple times per year, often using multiple measures, it makes sense to revisit why all this screening may not be giving schools the desired return on investment that they are after. The purpose of this article is to equip school psychologists with an understanding of four common screening myths…”

The 4 myths VanDerHeyden and Burns mention are:

“1. More screening can only improve correct identification of students with dyslexia.

2. Failing to learn to read means the child most likely has dyslexia.

3. Screening accuracy for a published tool will be similar across schools.

4. Screening improves reading performance.”

While the ‘myths’ the authors describe do point to reasonable assumptions that should be questioned as schools newly implement dyslexia screening, they do not negate the benefit, or potential benefit, of the screening process. One of the most significant points the authors make in this article, I believe, is that “most dyslexia screeners do not provide instructionally relevant data.”

In fact, VanDerHeyden and Burns take aim at the new Shaywitz Dyslexia Screen, which takes less than five minutes per student:

“The Shaywitz (2016) Dyslexia Screen is being used with increasing frequency and provides one example of the potential for errors in screening. The author is a leader in the field of dyslexia or reading disabilities, and using a screening like the Shaywitz Dyslexia Screen may feel like a tidy solution to a legislative mandate for dyslexia screening. However, such a solution is not tidy.

The estimates of sensitivity and specificity reported by the publisher for the Shaywitz scale were .73 and .71 respectively for kindergarten students, and .70 and .88 respectively for first grade, which would be considered somewhat low according to screening standards in education…The data in Table 1 suggest that if 100 students at each grade are identified as at-risk for dyslexia we will likely misidentify (false positive) between 66 and 88 of kindergarteners and 46 and 77 of the first-graders, and will miss (false negative) 2 to 7 children who were actually dyslexic at each grade.

The second problem with adopting a single-point-in-time measure of risk like the Shaywitz rating scale screener is that it does not inform or prompt a change in instruction that can better meet the needs of at-risk students.”
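To see how sensitivity and specificity figures like these translate into false-positive counts, here is a rough back-of-the-envelope sketch. The 10% base rate and the helper function are illustrative assumptions for this post, not figures or methods from the article:

```python
# Sketch (not from the article): how sensitivity, specificity, and the base
# rate of dyslexia combine to determine false positives among flagged students.
def screening_outcomes(sensitivity, specificity, base_rate, n_students=1000):
    """Expected counts when n_students are screened once."""
    true_pos = n_students * base_rate * sensitivity
    false_neg = n_students * base_rate * (1 - sensitivity)
    false_pos = n_students * (1 - base_rate) * (1 - specificity)
    flagged = true_pos + false_pos
    # Share of flagged students who actually have dyslexia (positive predictive value)
    ppv = true_pos / flagged if flagged else 0.0
    return {"flagged": round(flagged), "false_pos": round(false_pos),
            "false_neg": round(false_neg), "ppv": round(ppv, 2)}

# Kindergarten figures quoted above (sensitivity .73, specificity .71), with an
# assumed 10% base rate of dyslexia -- the base rate is an illustration only.
print(screening_outcomes(sensitivity=0.73, specificity=0.71, base_rate=0.10))
# Roughly: of every 100 students flagged, only ~22 actually have dyslexia,
# so ~78 are false positives -- within the 66-88 range quoted above.
```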

There are legitimate concerns about a screener for dyslexia that takes less than five minutes to administer, but at the same time, the gold standard for comprehensive dyslexia testing usually takes hours and often requires more than one day. Can a reasonable alternative be found? We think the call for dyslexia screeners to provide more practical information is also reasonable.