DIBELS: a pot of misery for a pound of gold
April 29, 2007
My dear Newsletterites,
Test taking is hard. No doubt about it. Especially written tests. Most especially standardized written tests. When we are faced with such a test, it is a challenge to what we know and a challenge to our ability to give the same right answers that others we have never met have given over and over.
Test making is hard. No doubt about it. Especially standardized tests that measure the literacy abilities of people who come from all sorts of different backgrounds, who speak different first languages, who have different value systems, and who have been given years of different kinds of schooling. Not only is standardized test making hard, it can be impossible to do for some kinds of reading and writing skills.
Each time we give a standardized test, we can bet somebody wishes school would disappear. If the test is not extremely beneficial to the test taker, why ever would we risk that thought?
Imagine, for a moment, the challenge to deliver to hundreds of little children a test that in half a day will give the same results it takes their classroom teachers months to discover. Imagine that if a person could design such an instrument (that is what it is called), schools could be identified as successful without regard to all those pesky variables like the child’s socio-economic status, the family make-up, the reading materials in the home, or the time the child spends thinking before writing. Imagine that there could be a test that would reveal news about a learner that was unknown to the classroom teacher who had been in close contact with the child five days a week for months on end. And, furthermore, what if there were a test that would eliminate the need to provide reading materials in every classroom, food for hungry children, medical areas for untended children, rest areas for the exhausted, and paper and pencils for those who needed them? Such a magical test is worth a gold mine.
So, with that pot of gold to tempt them, there are people who will claim to have just the kind of test the uninformed in funding agencies want to hear about. It is rather like the fairy tale about an emperor who bought extremely expensive clothes that no one could see. But, unable to admit that his ill-purchased cover-up produced no results, he proceeded to march in public, showing off the objects of his folly. Many in the crowd were too embarrassed to admit they could not see what wasn’t there; so, they, too, pretended there were extraordinary outcomes…or words to that effect.
When we hear about widespread use of any test, we may be assured that someone is making a lot of money on the sale and distribution of that test (and is probably sharing the wealth with a chosen number, too). Widespread use does not make the test valid (Does it measure what it claims to measure?) or reliable (Would it give the same results again or to another group?). Widespread use simply makes a test well-marketed. Yet, widespread use appears to be the popular way of suggesting a test should be used some more—in a school near you.
In the past, “well-marketed” meant that someone had sold the test to an entire school, typically by approaching the principal or some other financial authority. Rarely was a classroom teacher in that school given an opportunity to review the test or to try it out on a small group of students she or he knew well. Such a pilot study would have immediately revealed the obvious. The test could not give the same information about each child the teacher could give. It could never tell what each child actually knew.
But, the standardized test, with little #2 pencil bubbles that could race through a machine, was faster and cheaper than a teacher!
Then, districts began to purchase tests in bulk, netting much greater rewards for little more marketing time, energy, and cost. A contract to sell a test to a big district, paying on a per-test basis, was like a gift from the gods. Imagine the difference between one neighborhood school and the LAUSD.
Enter the Feds. As a benefit to all who cling to the No Child Left Behind debacle, there is a test that has stood the test of time in the face of much research that shows it does not do what it claims to do.
The further removed the test maker is from the person who knows the child, the less reliable the test will be.
The further removed from authentic literacy activity the test is, the less valid the test is.
Below are two articles on the subject of one test.
“DIBELS: The Perfect Literacy Test” was first distributed in 2005. Author Ken Goodman, researcher and a past-president of the International Reading Association, gave permission for it to be shared.
The article explains in a salient way how testing can go astray to the detriment of our school children.
This year, Priscilla Shannon Gutierrez of the New Mexico School for the Deaf observed the use of DIBELS on a population few of us contemplate as we think of standardized testing of phonics. In “DIBELS: Measuring the Sounds of Silence”, Gutierrez reveals a kind of child abuse that is systemically being inflicted on little children over and over. How on earth was a test like this ever permitted into this school? Into any school?
Must have been marketing.
Read on,
Wonder and
Weep.
La Vergne
DIBELS: The Perfect Literacy Test
Language Magazine
December 2005 V5:1 pp24-27
Ken Goodman
If Katrina came close to being the perfect storm, in the awful sense of a storm with all the attributes to do the most harm to the lives of those its destructive power and irresistible forces touched, then there is a perfect literacy test sweeping through American schools and doing the maximum amount of damage to the lives of those it touches.
American education has been overdosing on all kinds of tests in recent years as politicians and groups with their own agendas have put pressure on schools to show measurable results for the funding they receive. And many tests have become “high stakes tests” in that they are used as criteria for admissions, promotion, graduation and even wages.
But the perfect test is not like any of the traditional tests in popular use. It is not norm-referenced like the Iowa or Stanford tests which are widely used to measure achievement. It is not like the barrage of high-stakes state criterion-referenced tests promulgated to test reading and writing and judge whether pupils can pass from grade to grade or receive a high school diploma. It is not the National Assessment of Educational Progress which has been used to paint dire pictures of whole states failing to produce proficient readers and writers.
No, the perfect literacy test is the Dynamic Indicators of Basic Early Literacy Skills, developed by a federally funded group at the University of Oregon. It is being widely mandated as part of the No Child Left Behind plan each state must submit to the federal bureaucracy that controls NCLB funding. Its acronym DIBELS has, according to Education Week, “become a catchphrase in the schoolhouse and the statehouse.” (Manzo, Education Week, 9/28/2005)
What makes DIBELS the perfect literacy test is that it takes total control of the academic futures and school lives of the children it reaches from the first day they enter kindergarten when they are barely five years old. It keeps control of their literacy development and indeed their whole school experience for four years from kindergarten through third grade. And the more poorly the children respond to DIBELS the more they experience it. Norm referenced tests usually are not given until third grade and then only once a year. Diagnostic tests are usually used selectively with pupils to provide teachers with information on what strengths and weaknesses learners may have. DIBELS, once it gains a foothold, is administered a minimum of three times a year at the beginning, middle and end of each grade from kindergarten to third.
Within a few days of entering school, five-year-olds have their first opportunity to fail to achieve DIBELS’ arbitrary “benchmarks.” Each month DIBELS is also used to “monitor progress,” and those who are marked for “intensive instruction” are monitored weekly. Such tests have sometimes been called test-teach-test models. In that model a pre-test is given, then the content is taught, and then a post-test measures gain.
DIBELS uses a test-test-test model because increasingly frequent testing is the fate of those who fail to achieve the benchmarks of DIBELS. There are reports of children practicing for the DIBELS while they wait in line to use the toilets.
Unlike Katrina, whose path and approach could be monitored, permitting those who had the means to get out of her way to avoid her dangers, DIBELS has arrived in most of the schools it takes control of as an irresistible force, with neither pupils nor teachers having any opportunity to get out of the way. That’s because the federal ideologues who have the power to review state NCLB proposals have strongly “encouraged” the use of DIBELS, and most states have obliged, some even mandating its use in all of the state’s K-3 classrooms. Every K-3 teacher in New Mexico gets a Palm Pilot programmed with DIBELS. The scores the testers enter go directly to Santa Fe and the computers at the University of Oregon.
Where DIBELS has been mandated statewide, the only escape from DIBELS is home-schooling, which of course is not an option for most working parents. Some parents have downloaded the whole test themselves and drilled their children at home so they will “bench-mark” in school and avoid the intensive interventions of DIBELS.
According to the DIBELS manual (available on-line at http://dibels.uoregon.edu/), for the 2004-2005 school year 8,293 schools used the DIBELS scoring service, across 2,582 districts in 49 states and Canada, totaling over 1.7 million students (K-3). It’s likely that its reach has expanded considerably beyond that for the current school year.
So what is DIBELS that it should have such awesome powers? It is a package of sub-tests designed to be administered in one minute each. Its basic premise is that it can reduce reading development to a series of tasks, each measurable in one minute. Each test has arbitrary benchmarks which get more difficult to achieve in successive grades.
The test authors claim that the sub-tests are “stepping stones” to reading proficiency and each prepares the child for the next test. That means that children who fail one test are failing in reading development according to the authors. And in fact children are being retained in kindergarten and first grade solely because they fail one sub-test in DIBELS. In fact, only a small number of states require children to attend kindergarten. So children entering school without kindergarten are already a year behind from the DIBELS perspective.
Testers, who are most often not the children’s teachers, are given minimal training and admonished not to deviate in any way from the procedures or the wording of the tester’s manual. “You’ll be proud of me,” said one five-year-old to her teacher when she came back from being Dibeled… “I didn’t talk to those strangers.” Her scores were perfect zeroes.
DIBELS provides no time for thoughtful responses. It allows for only one speed: fast. Like a whirlwind, DIBELS seizes young children and drives them into each task. Each test is administered with a stopwatch in hand. Children are permitted three seconds for each response, and the test is stopped at one minute or when the child is wrong on five items.
All scores are quantitative and the tester makes no judgments of the quality of the response, so in no sub-test is there any information about how well the child is understanding, and indeed in only one test is there any meaningful text to be read.
Here are the names of the tests and what they actually test in the order that children would encounter them.
Letter Naming Fluency: The child is given a page with lines of mixed capital and lower case letters in a font that is not the most common one in early reading material. The score is the number of letters correctly named in one minute. If the child says a sound instead of a letter she is told “names not sounds”, but only once. Some five year olds respond with the name of a child whose name starts with the letter. No points for that.
Initial Sound Fluency: The child is shown a page with four pictures. The tester says a word for each picture and then asks which picture starts with “buh.” The child must remember the names of the pictures and then abstract out the first sound. The picture may look like a bear but the tester called it a cub. That big yellow grasshopper was called an insect. Is that picture a frosted donut or a bagel with cheese on it? The score is the number of right initial sounds the child can say in one minute.
Phonemic Segmentation Fluency: The tester has a sheet with one syllable words. If the tester says “cat” the child must respond kuh- ah- tuh in a few seconds. One point for each correct sound produced in one minute. Mismatches between the dialect of the tester and the child certainly affect the score.
Nonsense Word Fluency: The child has a sheet with what are supposed to be two- or three-letter “make-believe” words. The tester tells the child to either say the whole word or each sound. In either case the score is the number of sounds right in one minute. In this test children already reading are handicapped because many of the nonsense words are either possible spellings of real English words or actual words in English or Spanish. There are stories of teachers making nonsense bulletin boards so the children can practice reading nonsense.
Oral Reading Fluency: Starting in first grade the children are given a five-paragraph essay on a topic, written in first person. The score is the number of words read correctly in one minute. The children learn to skip any words they don’t know and say the words they know as fast as they can. The tester says any word the child stops at after a few seconds. Some children use that as a signal that they should wait for the tester to say the word before proceeding. And a minute goes by very rapidly.
Oral Retelling Fluency: Teachers complained that counting correct words didn’t show what the children understood. So the DIBELS folks added an oral retelling. The score is the number of words the kids produce in one minute that are more or less on topic. No attention is paid to the quality of the retelling. Honest.
Word Use Fluency: Starting in kindergarten, the tester says a word and tells the child to “Use the word.” The score is the number of words the child says in using the word in one minute. It’s hard to see what this would have to do with reading, since no reading or print is involved.
Notice that each test name includes the word fluency. How can one be a fluent word namer or sound sayer? Apparently fluency to the Dibelers means speed and accuracy.
There are many things wrong with DIBELS.
It turns reading into a set of abstract decontextualized tasks that can be measured in one minute. It makes little children race with a stopwatch.
It values speed over thoughtful responses.
It takes over the curriculum, leaving no time for science, social studies, or writing, not to mention art, music, and play.
It ignores and even penalizes children for the knowledge and reading ability they may have already achieved.
Reading is ultimately the ability to make sense of print and no part of DIBELS tests that in any way. In DIBELS the whole is clearly the sum of the parts and comprehension will somehow emerge from the fragments being tested.
On top of that, the sub-tests are poorly executed: the authors do badly what they say they are doing. Furthermore, the testers must judge accuracy, mark a score sheet, and watch a stopwatch all at the same time. And, to be fair, testers must listen carefully to children who at this age often lack front teeth, have soft voices, and speak a range of dialects as well as languages other than English. Consistency in scoring is highly unlikely among so many testers, and each tester is likely to be inconsistent.
And let’s add that DIBELS encourages cheating. There is a thin line between practicing the “skills” that are tested and being drilled on the actual test items, all of which are on-line to be downloaded. With so much at stake, why wouldn’t there be cheating?
In summary, DIBELS, The Perfect Literacy Test, is a mixed bag of silly little tests. If it weren’t causing so much grief to children and teachers, it would be laughable. It’s hard to believe that it could have passed the review of professional committees state laws require for adoption of texts and tests. And in fact it has not passed such reviews. There is strong evidence of coercion from those with the power to approve funding of state NCLB proposals and blatant conflicts of interest for those who profit from the test and also have the power to force its use. A congressional investigation is now underway into these conflicts of interest.
In training sessions for DIBELS, teachers are not permitted to raise questions and are made to feel that there is a scientific base to the test they lack the competence to understand. It is, after all, The Perfect Literacy Test.
Ken Goodman, Professor Emeritus
Language, Reading and Culture,
University of Arizona Kgoodman@u.arizona.edu
Short Bio:
Ken Goodman is a researcher and teacher educator in language and literacy. He is past-president of the International Reading Association and the National Conference of Research in Language and Literacy. His reading miscue research and model of the reading process have won a number of national awards. His books include On Reading and Phonics Phacts (both Heinemann), In Defense of Good Teachers (Stenhouse), and Saving Our Schools (RDR Books).
*****
DIBELS: Measuring the Sounds of Silence
By Priscilla Shannon Gutierrez
New Mexico School for the Deaf
February 2007
Once again, the 5th grade student sat there frantically trying to get the sounds right. It seemed that, no matter how hard he tried, he could not get the words out of his mouth fast enough to get a passing score on the DIBELS test.* His teachers realized he was older than most students who take the DIBELS, but in spite of this they felt it was appropriate to use with him, given that his reading skills were still way below grade level. Besides, no other assessment had been approved for use within their Reading First grant. DIBELS was approved and so the teachers could breathe easier knowing they were in compliance.
The 5th grader’s low score targeted him as a high risk for failure. His parents were notified that he should not be allowed to move on to the next grade level because of the DIBELS results and because the intensive phonics instruction recommended as an intervention had clearly not produced the desired results. Why wasn’t the “science of phonics” working with this student?
The answer is that this 5th grader is deaf and therefore incapable of hearing letter-sound relationships. This deaf child, along with many others, has repeatedly been tested with DIBELS and labeled a failure – all in the name of complying with Reading First. Incredible as it may seem, this is not an isolated example. Similar testing practices are under way in numerous states, including Florida, Vermont, Texas, New Mexico, Colorado, Michigan, and Alabama, in both mainstreamed and residential special education settings.
Now we know why. An investigation by the Inspector General of the U.S. Department of Education, released in September 2006, revealed conflicts of interest in the $1 billion-a-year Reading First program, whose “expert review panel” was stacked with a select group of publishers and consultants, including the creators of DIBELS.
Under the guise of promoting “research based” educational practices, DIBELS and Reading First have defined fluency as speed reading while equating phonics with reading ability. In fact, neither proposition is scientifically supported, and both put deaf children at a huge disadvantage, as compared with other students, because of DIBELS’s strong emphasis on these two skills. Moreover, because of increased pressure to utilize DIBELS as the sole measure of reading ability, along with increased pressure to teach to the test, deaf students are denied valid assessment measures and instructional approaches that could provide a more accurate picture of their reading skills or potential.
Fluency in reading means the ability to comprehend a text with a certain level of automaticity. Fluency does not equate with how fast students can identify a letter or a word. There are many instances where the contents of a story require a pause for emphasis or effect. Fluent readers recognize this as they encounter text and adjust their pace to match what is happening in the story. Teaching children that reading means identifying words rapidly because that’s what you will be tested on inhibits their learning of deeper levels of proficient reading skills. Trying to get deaf students to rapidly vocalize sounds they are incapable of hearing is a ridiculous waste of valuable instructional time, which they cannot afford to lose.
Granted, phonics skills are part of the tool kit that many fluent readers utilize to make sense of text they read. But phonics ability does not define reading ability. At its core, reading is comprehending whatever you encounter in print, however you do it. There are many deaf adults and children who are proficient readers in English who do not rely on phonics to understand text. Rather, they use their knowledge in American Sign Language (ASL) to make sense of written English. Proficient deaf readers focus on phrases rather than single words or letters because it often takes three to four English words to illustrate an action that requires only one sign in ASL. For example, the phrases “He got up” or “She skipped down the street” are each represented by a single ASL sign. In view of this linguistic reality, how can reading instruction that focuses on letters or words in isolation bring deaf children to higher levels of comprehension in reading? How can DIBELS be an accurate predictor of their reading potential?
DIBELS and Reading First operate on the premise that deaf students must discard their deafness and become “good little hearing children.” They expect these students to abandon the signed language they use to make sense of the world – and to make sense of English print – while stressing the one skill deaf children cannot even access. Because of their narrow definition of what constitutes reading, DIBELS and Reading First set these children up for failure before they even enter our school system.
Historically, deaf children represent one of the most disenfranchised and marginalized groups in our education system. The current “scientifically based” approaches that Reading First mandates in the name of accountability will not bring the deaf student to proficient literacy. Measuring the sounds of silence that represent the world of the deaf child is an exercise in futility that none can afford. Every minute of school is crucial for them. Indeed, forcing DIBELS and phonics instruction on deaf students who cannot access sound stops just shy of government-sponsored child abuse on a massive scale.
When will we end a policy that creates huge profits for a select few at the expense of our students? When will we have the courage to recognize the damage being done and do something about it? When will education return to real teaching and assessment for learning? Our children await our response.
* DIBELS is the acronym for Dynamic Indicators of Basic Early Literacy Skills.