What are the Components of an Effective DI Implementation? with Tamara Bressi
Sep 22, 2024
Hello everyone and welcome back to the Direct Instruction podcast. My name is Dr. Zach Groshell, and today I am excited to be bringing you a glimpse into my classroom for an unscripted coaching session with Tamara Bressi. Tamara is an experienced Direct Instruction teacher, trainer and instructional coach. I had the honor of working with Tamara over the course of multiple training sessions to develop my implementation of Corrective Mathematics for a middle school intervention context. This episode is structured so that you first hear a recording of me teaching prior to receiving any training, followed by some feedback from Tamara. Then, we will listen to a “post-training” recording, where Tamara and I discuss some of the shifts I have made in implementing Direct Instruction with greater precision. This episode is brought to you by NIFDI, or the National Institute for Direct Instruction. Our guest today is one of the trainers for the Academy on Becoming an Effective Direct Instruction Trainer, which is an integral component of the National Direct Instruction Conference held in Eugene, OR. You can find more information about the conference by going to http://www.nifdi.org, and I encourage you to register for what is the most comprehensive DI training available anywhere.
If you love the DI podcast, please help me spread it around, and give it a 5-star rating on the platform of your choosing.
How Has DI Impacted Reading Proficiency in Indigenous Communities? with Casey Sovo
Jun 16, 2024
Hello, and welcome back to the Direct Instruction podcast. My name is Dr. Zach Groshell, and today I am excited to be bringing you a fantastic episode about the role of Direct Instruction in teaching kids to read in indigenous communities across America. I will be interviewing Casey Sovo, a member of the Comanche Nation, a 20-year educator, and a Bureau of Indian Education program administrator. Casey will speak to the incredible improvements in performance of schools that he supported when DI was implemented with fidelity, as well as the obstacles or barriers that he had to address when implementing DI. We will also talk about the times when DI was discontinued in the schools he supported, the dramatic dip in performance that followed, and the return to positive growth when schools decided to restart with DI. Overall, I was inspired by Casey’s passion to eliminate the achievement gap for indigenous students through a data-informed, all-hands-on-deck approach to teaching reading, and it was an honor to have him on the Direct Instruction podcast.
This episode is brought to you by NIFDI, or the National Institute for Direct Instruction. Registration is open for the National Direct Instruction Conference in Eugene, Oregon this summer, where, in 2024 when this episode was published, Casey Sovo will be the keynote speaker! I’ve provided the links to register in the show notes, and I encourage you to sign up for what is the most comprehensive DI training available anywhere.
If you love the DI podcast, please help me spread it around, and give it a 5-star rating on the platform of your choosing.
What is Direct Instruction Mathematics? with Marcy Stein and Bernadette Kelly
May 11, 2024
Welcome back to the Direct Instruction podcast, my name is Dr. Zach Groshell, and today we are going to be talking all about Direct Instruction Mathematics.
What you just listened to comes from a 1966 video recording of Zig Engelmann, which I have embedded in the show notes. If you’re not driving while you’re listening to this, I highly recommend that you pause this podcast right now and watch the video in full. It is a perfect showcase for how rigorous and well-structured instruction can empower young learners to grasp even the most complex mathematical concepts.
But what is Direct Instruction mathematics, and how does it differ from more conventional ways of teaching math? To answer that question, I thought I’d interview some of the experts of DI math about what goes into the design and delivery of this highly effective teaching system. Dr. Marcy Stein is the lead author of Direct Instruction Mathematics and Professor Emeritus at the University of Washington Tacoma. Also joining us is Dr. Bernadette Kelly, who co-authored the DI programs Connecting Math Concepts and Essentials for Algebra. This is an incredible episode that I hope will prompt more schools to consider how top-notch materials designed with the DI approach can break the cycle of stagnant achievement in mathematics. This episode is brought to you by NIFDI, or the National Institute for Direct Instruction. Registration is open for the National Direct Instruction Conference in Eugene, Oregon this summer. Whether you are new to Direct Instruction, or are looking for training on Administrative Leadership, Coaching, and Becoming an Effective DI Trainer, this is the conference for you. I’ve provided the links to register in the show notes, and I encourage you to sign up for what is the most comprehensive DI training available anywhere.
If you love the DI podcast, please help me spread it around, and give it a 5-star rating wherever you get your podcasts!
How Effective is Direct Instruction? with Jean Stockard
Apr 14, 2024
Hello, and welcome back to the Direct Instruction podcast. My name is Dr. Zach Groshell, and today we have a fantastic episode about a half-century of research on the effectiveness of Direct Instruction with Jean Stockard. Dr. Stockard is a distinguished quantitative sociologist, and the author of the book, All Students Can Succeed.
One of the problems we face in education is that everybody claims that their program is “research-based”. These days, most curriculum publishing companies have to show some proof that their program works if they are to make states’ lists of approved programs. The thing is, the determination that the program “works” is often made on the basis of a single study. When you go over to the publisher’s website and click past all the testimonials and flashy graphics, you find that the single study showed small, mixed, and statistically insignificant effects in favor of their program. Because these studies often include intensive professional development workshops and additional resources for the teachers who received the new program, it is questionable whether students’ learning gains can be attributed to the use of the program itself, rather than to the teacher training or the extra resources that came with it.
So, I wanted to dig in to the research base of Direct Instruction to see how it compares, and Jean Stockard seemed like the researcher I needed to talk to. She and her colleagues are the authors of the 2016 meta-analysis on Direct Instruction curricula, which contained 328 studies involving 413 study designs and almost 4,000 effects. What I found interesting was that, while the authors knew before they started the project that a lot of research had accumulated over the half-century, they were stunned at the task that lay before them due to the sheer magnitude of research on DI. And when they went to crunch the numbers, the results were so strong and so consistent that they checked and re-checked their findings to make sure they were correct. No matter how they sliced it, they simply could not find any situation in which DI did not work. This episode is brought to you by NIFDI, or the National Institute for Direct Instruction. Registration is open for the National Direct Instruction Conference in Eugene, Oregon this summer. Whether you are new to Direct Instruction, or are looking for training on Administrative Leadership, Coaching, and Becoming an Effective DI Trainer, this is the conference for you. I’ve provided the links to register in the show notes, and I encourage you to sign up for what is the most comprehensive DI training available anywhere.
If you love the DI podcast, please help me spread it around, and give it a 5-star rating on the platform of your choosing.
What was Project Follow Through? with Linda Carnine, Susie Andrist, and Jerry Silbert
Mar 10, 2024
Welcome back Zig fans, my name is Dr. Zach Groshell. I am a teacher, a parent, and the host of this show, the Direct Instruction podcast.
The DI podcast is brought to you by NIFDI, which stands for the National Institute for Direct Instruction, and today we are going to be talking about Project Follow Through. I will be interviewing a panel consisting of Linda Carnine, Susie Andrist, and Jerry Silbert, three incredible people who were there, on the ground, during Project Follow Through.
But what was Project Follow Through?
To answer that question, I’ve asked Jean Stockard, an empirical social scientist, if I could read you an excerpt of her writing on this topic. Jean will be coming on the Direct Instruction Podcast next episode to talk about the effectiveness of Direct Instruction, based on her synthesis of over 5 decades of research. It is a fascinating interview that should not be missed.
So, let’s begin, shall we?
Project Follow Through was probably the largest study of educational interventions ever conducted, either in the United States or elsewhere. While it is now largely forgotten, at the time it embodied many of the hopes and ideals of those who wanted a more just and equitable society and believed that education had an important role to play in those endeavors. Follow Through emerged from President Lyndon Johnson’s “War on Poverty,” announced in his 1964 State of the Union address to Congress.
Project Follow Through was originally conceived as a service project that would extend the types of support provided in Head Start to students in the primary grades. When it became clear that the cost of such an endeavor would be very large, the purpose was changed to determining the most effective educational interventions for students from low-income households. The Office of Education developed a research design, called “planned variation.” In contrast to a carefully controlled laboratory setting, this design would involve the implementation of educational innovations in real-life settings, but in the very best way possible. Sponsors of these innovations were required to “provide the community with a well-defined, theoretically consistent and coherent approach that could be adapted to local conditions,” and implement a “total program, rather than a small fragment, with a resulting possibility for a major impact on the child’s life.” Participating districts received supplemental funding of $750 for each Follow Through student to support additional costs for aides, materials, and staff travel. In addition, all children were provided health and dental care as well as nutritious food through meal programs. In total, Follow Through served over 10,000 students from low-income households in 180 communities at a cost, at that time, of 500 million dollars, a research expenditure that will likely never again be matched.
Eighteen educational programs were initially involved in Follow Through. The programs represented the most popular educational approaches at the time, but varied in theoretical orientations and basic assumptions about how children learn. All, except Direct Instruction, had been developed by academics strongly influenced by educational theorists such as Jean Piaget or John Dewey. Importantly, each of the models, or their derivatives, is still prominent within education today in the various developmental, constructivist, inquiry based, and similar approaches.
Accounts of Follow Through usually divide the programs into three general groups based on their underlying assumptions. Some, termed “affective skills models,” assumed that socioemotional development was most important. Schools and classrooms that promoted children’s self-esteem and positive peer interactions and built on children’s own interests were thought to be most successful.
Other models were termed “cognitive and conceptual skills models.” These approaches were based on cognitive developmental theory and the work of Piaget, assuming that children from low-income households were behind their peers because they lacked sufficient cognitive experiences. These models incorporated approaches such as self-directed learning, a language-experience approach similar to whole language, and an emphasis on students’ learning styles.
The models in the third group were termed “basic skills models.” These programs assumed that behaviors are learned and that children from low-income households lagged behind because they had not been adequately taught. As would be expected, the evaluators placed Direct Instruction in this group. In addition to having very different views of key influences on student learning, the models differed in the extent to which they were structured, or teacher-led. The DI and Behavior Analysis models were the most structured, while the Bank Street and Open Education models were least structured.
Before starting to participate in Follow Through, all students were administered a nationally normed instrument that examined basic skills in language, reading, and math. Other measures were used at the end of the school year. Affective skills were measured with the “Intellectual Achievement Responsibility Scale” (IARS), which tapped students’ locus of control, thought to be a key element in students’ self-concept and self-efficacy. Cognitive and conceptual skills were measured with subscales of the nationally normed Metropolitan Achievement Test (MAT), which focused on conceptual elements of mathematics and reading, and “Raven’s Coloured Progressive Matrices,” a cognitive test thought to measure analytic ability and cognitive reasoning. Basic skills were measured with subscales of the MAT that focused on vocabulary, math computations, and spelling.
Recognizing that it can take time and care to fully implement new programs, the Office of Education specified that no data would be published until the programs had been in place for eight years. They believed that this would allow substantial time for schools, teachers, and students to demonstrate the results of their best efforts. This extensive time frame, coupled with the wide range of assessment data, let the researchers compare the impact of the various programs in several ways: Did results differ across measures? Did results vary with the amount of exposure to a program? Did cohorts who entered the program in later years, when their programs were more established, have better results than those who entered at the start? Did results vary with different methods of analysis, as for example comparing to the control schools or to national norms? And so forth.
Abt Associates were responsible for analyzing the data, and their official, formal evaluation was released in 1977. In almost all respects, the analysis appears to have been very carefully conducted. The results were clear-cut and strong. Students from the DI sites significantly outperformed students in the comparison schools in all three of the areas that were measured: basic skills, cognitive skills, and affective measures. No other program had positive results in all three areas, nor did any other program have as many positive results as DI. Perhaps most striking was the lack of association between the stated aims of the programs and the outcomes. Programs designed to promote cognitive development had no significantly positive results, and two of these programs had large numbers of negative results. Similarly, none of the programs designed to promote affective skills had any positive significant results on the affective measures, but substantial numbers of negative significant results. The results were the same across all of the measures that were used and with different types of comparisons and analysis. This pattern of strong results in favor of DI held when results were examined separately by race-ethnicity, geographical region, years of exposure to the program, and initial test scores.
Even though the first official results of Follow Through were not published until 1977, preliminary results were available to sponsors by 1974. These reports made it clear that DI and, to a lesser extent, the Kansas Behavioral Analysis model, were the only approaches that were successful. As one could expect, these findings and the prospect of them becoming widely known were deeply disturbing to the sponsors of other programs. The other programs reflected deep-seated beliefs within the educational establishment, such as the importance of developmental and cognitive factors in promoting student achievement. Any indication that they were not effective could represent a profound challenge to their legitimacy. As one observer put it, the preliminary results from Follow Through were a “horrifying surprise” to these sponsors.
At the same time, most people affiliated with these sponsors were established figures in education with strong ties to foundations and federal funding agencies. They used their power and connections to counter the Abt report and eventually prevent the findings from having any influence on educational policy. With funding from the Ford Foundation, a panel of four education professors published a critique of Follow Through’s evaluation. They suggested that the outcome measures were unfairly selected and inappropriate, even though all sponsors had approved their use. They also claimed that the primary statistical technique (analysis of covariance) was “controversial,” a notion that would seem odd to most statisticians today, as it did at that time. Most important, they challenged the central, original, stated purpose of the project – finding “which model works best” – claiming that it was inappropriate. Instead, they argued, the evaluation should have addressed questions such as “what makes the models work” or “how can one make the models work better.”
The final, official statement on the results of Follow Through simply reported on the programs as an aggregate, with no details about the results from individual sponsors. In other words, results from all of the programs were grouped together. Because only one of the nine programs, DI, had consistently positive results, its success was disguised within the combined findings. Thus, the official statement from the federal government was that Follow Through had failed, neglecting to mention that one program had succeeded.
In subsequent years, the programs that were found to be ineffective in Follow Through have continued to receive substantial federal funding, and there has been no official acknowledgement of differences in their performance. The federal government continues to spend extraordinary amounts of money revisiting the original question posed by Follow Through, trying to determine the most effective educational interventions, apparently with no recognition of the results obtained in that project.
Now, more than 50 years after the start of Project Follow Through, I’m excited to bring to you an interview with three of the pioneers of Direct Instruction, Linda Carnine, Susie Andrist, and Jerry Silbert. Let’s go over to them now.
What is Direct Instruction? with Bryan Wickman and Kurt Engelmann
Feb 04, 2024
Hello everyone, My name is Dr. Zach Groshell. I am an educator, a parent, and the host of this show, the Direct Instruction Podcast.
As an advocate for direct and explicit forms of instruction, I wanted to know more about Uppercase DI, and that’s what this podcast is all about. Over the next several episodes, we are going to be talking to the teachers, the implementers, the designers, and the proponents of Direct Instruction. We will learn about Project Follow Through, the most extensive research study in the history of education and explore over 50 years of research on the effectiveness of this unique approach. To kick this show off, it makes sense to go back to the very beginning, back to the creator of Direct Instruction, Siegfried “Zig” Engelmann.
In the second half of this episode, we will hear from one of the twins featured in the video above, Kurt Engelmann. Dr. Engelmann will share personal stories about his father and comment on the past and future directions for Direct Instruction from his perspective as the President of NIFDI. Kurt also mentions the Gering School District in Nebraska. Check out the film below, Closing the Performance Gap: The Gering Story. More videos like this can be found at https://www.nifdi.org/videos/nifdi-schools.html.
But before we go over to Kurt, we will hear from Bryan Wickman, who is the Outreach Director for NIFDI, and who has been instrumental in getting this podcast off the ground. Bryan has been involved in Direct Instruction for over 40 years, literally starting as a shipping clerk and receptionist at Engelmann-Becker back in 1978. Bryan will walk us through what Direct Instruction is and how it came to be.
– Zach Groshell
Further readings from Kurt Engelmann
Bereiter, Carl, and Engelmann, Siegfried. (1966). Teaching Disadvantaged Children in the Preschool. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Carnine, Douglas and Kame’enui, Edward J., editors. (1992). Higher Order Thinking: Designing Curriculum for Mainstreamed Students. Austin, Texas: Pro-Ed.
Engelmann, Kurt E. (2024). Direct Instruction: A Practitioner’s Handbook. London, UK: John Catt Educational from Hodder Education.
Engelmann, Siegfried. (1969). Preventing Failure in the Primary Grades. Chicago: Science Research Associates.
Engelmann, Siegfried. (2007). Teaching Needy Kids in our Backward System: 42 Years of Trying. Eugene, Oregon: ADI Press.
Engelmann, Siegfried, and Carnine, Douglas. (1991). Theory of Instruction: Principles and Applications (Rev. ed.). Eugene, OR: ADI Press. (Originally published 1982, New York: Irvington Publishing, Inc.)