Reading and Math Interventionist
AIMSweb Assessment Issues in an Urban/Suburban School District in Northeastern Kansas
Any time you start a brand-new assessment system, there are going to be “growing pains.” In the district where I teach we had our share, but by the end of the second AIMSweb assessment session (Winter, 2012), we felt like we had a pretty good handle on it for the next session (Spring, 2012). Below, I'll discuss some of the problems we had and the solutions, or attempted solutions, we implemented.
First of all, let me describe our situation. We had a district-wide testing team. This was the first time our district had done this. Of course, this was also the first time our district had really implemented an assessment system; we had not used a universal screener district-wide before. Previously, each building tested its own students, usually using a formative or diagnostic test. Our team consisted of our building-level interventionists. Our instructional coaches played a role, but for the most part did not administer any assessments. They helped with organizing materials, served as “runners” to get kids to and from the testing location, helped make the schedule, and so on. It should be noted that by “helped” I mean they generally did it all by themselves.
Our testing team received training in how to administer and score the AIMSweb measures. The training took place over two to three sessions. We basically followed the AIMSweb Training Workbook, used the video examples to practice scoring, and received instruction from presenters in person and via webcam. We did not get training from certified AIMSweb trainers, and I think this made a difference. In my opinion it would have been helpful to have training from someone who had actually administered the assessment measures. We had some very specific questions that could not be answered.
Our first session (Fall, 2011) went pretty well considering we knew next to nothing going in. Our main challenge was in scoring. We did not agree on what to do about the whole “does the answer need to be written in the blank or not” question on the MCAP (Math Concepts & Applications) and MComp (Math Computation) measures. The instructions are quite explicit: according to AIMSweb, if the answer is not in the blank, it is marked incorrect. However, there is a grey area. The standardized instructions read to students for the MCAP specifically say to write the answer in the blank; the standardized instructions for the MComp do not. Unfortunately, what this meant for us was that some scorers counted such answers wrong and some did not, so we did not have consistent scoring the first time around.
The lesson we learned (I hope) is that you need to have those kinds of issues decided before you even administer the test. We talked about what it meant if a student did not write the answer in the blank. We decided it meant they could not follow instructions, not that they could or could not compute the problem correctly. We asked ourselves, “What are we trying to measure with this test?” We decided we were not trying to determine whether a student can follow directions. It didn't matter whether they put the answer in the blank or not; we just needed to all be scoring the same way. So we eventually decided not to count an answer wrong as long as it was somewhere in the box and correct.
Another problem we had was that there was no “team leader” for our testing team. We had somebody we could call, but no one on site. In hindsight it would have been helpful to have a “go to” person assigned or appointed to the team. This could be one of the interventionists, or someone who doesn't do any testing. It would have saved us quite a bit of time when we had to figure out how we were going to score the math: that person could have made an executive decision or called someone to find out, and then there would have been no disagreement about what to do in a particular situation.
Another major problem we still have is what to do with the data in terms of getting it out to the teachers and explaining what it means. We did eventually print parent report letters and talk about the results at conferences. What we found we really need is for the teachers to receive some AIMSweb training. We need to know how to read the data, how to interpret or analyze it, and how to talk to parents about it. Some people aren't familiar with percentile ranks, norms, standardization, and so forth.
Progress monitoring is another area that hasn't been perfectly implemented. Some schools progress monitor once per week, some once every two weeks, some barely once a month. There is a lot of information to consider when progress monitoring. Some of the more important questions are: How often do you progress monitor? Should you progress monitor or strategically monitor a particular student? When progress monitoring, is it really necessary to “drill down” or “test backwards” until you find the level at which to monitor the student? (That, by the way, takes a long time.) How do you set goals for the student, and what formula do you use? What do you do if the student reaches his/her goal? Are they automatically dismissed from intervention? What if they aren't on a trajectory that shows they will meet their goal? Do they automatically move to tier 3 interventions? How many data points are necessary to make a decision about a student? The small sketch after this paragraph shows the arithmetic behind the trajectory question.
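Several of these questions come down to simple arithmetic on the student's data points. Here is a rough Python sketch of one common approach: fit a trend line to weekly probe scores and project it out to the goal date. To be clear, this is not AIMSweb's own goal-setting procedure, and the scores, goal, and timeline below are entirely made up for illustration.

```python
def trend_slope(scores):
    """Least-squares slope over equally spaced probes:
    the student's average gain per week."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Six weekly probe scores (e.g., words read correctly) -- hypothetical data.
scores = [42, 45, 44, 48, 51, 53]
goal = 70          # end-of-year target (hypothetical)
weeks_left = 20    # weeks remaining until the goal date

roi = trend_slope(scores)                  # observed rate of improvement
needed = (goal - scores[-1]) / weeks_left  # rate required to reach the goal
projected = scores[-1] + roi * weeks_left  # where the trend line ends up

print(f"Observed ROI: {roi:.2f} per week; needed: {needed:.2f} per week")
print(f"Projected score at goal date: {projected:.1f} (goal = {goal})")
print("On trajectory" if projected >= goal
      else "Below trajectory -- consider adjusting the intervention")
```

One thing this kind of calculation makes obvious: with only a handful of data points the slope can swing widely from one week to the next, which is exactly why the “how many data points before making a decision” question matters so much.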
Hopefully you will be able to get some of these questions answered before you begin your district-wide assessment system. It will save you a great deal of time and effort, and you will be able to focus on what matters: what to do with the students the screener identifies as at-risk.