Evaluating Problems Under Realistic Conditions
Of course, our long-term goal for our students is success under realistic conditions, not just success during therapy. But evaluating success under realistic conditions too soon is likely to be extremely misleading.
Say Fred, a young boy with autism, loves pizza. But he has a dairy allergy, so he is not allowed to eat pizza. The school serves pizza for lunch every Friday. When Fred sees all the other kids eating pizza and he can’t have any, he has severe tantrums, self-injurious behaviors, and dangerous aggression. There might be a wide variety of solutions to this type of problem. For example, maybe he could have dairy-free pizza every Friday, maybe he joins a lunch bunch on Fridays with kids who aren’t eating pizza, or maybe he has a special lunch with a favorite teacher. Through a combination of these three approaches, the team eliminates this problem.
Fred’s classroom has art once per week. Due to fine motor issues, the occupational therapist is present during art class to help with his skills. She notices that he wants every project to be green, and problem behavior is extremely likely if he is not allowed to do so. So she implements a simple program: “First, you use two other colors; then, you can use green,” which the data show to be highly successful.
In music, Fred loves to play the drums, which the music teacher doesn’t mind; she allows him to play for several minutes each class (if she didn’t, it would trigger severe problem behavior). But she does mind when he runs into the room in the middle of her other classes to play the drums. The team successfully eliminates this problem by keeping the music room door closed. Fred just walks past the music room with his class if the door isn’t open.
After six months of intervention, the data on Fred’s problem behaviors are amazing. They are reduced by 95%. But, there still are some dangerous episodes from time to time. What happened on those days? Well…
- We were walking to lunch bunch and he heard on the loudspeaker that it was pizza day.
- The occupational therapist was sick and the art teacher didn’t remind him he had to use 2 other colors before using green.
- A child went to the bathroom and left the music room door open.
The above examples show how misleading data can be. We want practical data on how a plan is working in the “real world,” but obviously, these types of plans are not likely to make a significant long-term impact on the child’s life. They might solve an immediate problem, but, of course, they require constant attention or the problem behavior will come back. Generally, it’s not practical to sustain this over a whole lifetime.
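The arithmetic behind this is worth making concrete. Here is a small, purely hypothetical sketch (the daily counts are illustrative, not real data): splitting episode counts by whether the environmental supports were actually in place that day tells a very different story than the overall reduction figure.

```python
# Hypothetical daily episode counts for a child like Fred (illustrative numbers only).
# Each record: (episodes_that_day, supports_in_place)
days = [(0, True)] * 57 + [(4, False)] * 3  # 60 days: supports held on 57, failed on 3

baseline_daily_avg = 4.0  # assumed pre-intervention average, for illustration

overall_avg = sum(e for e, _ in days) / len(days)
reduction = 1 - overall_avg / baseline_daily_avg

supported = [e for e, ok in days if ok]
unsupported = [e for e, ok in days if not ok]

print(f"Overall reduction: {reduction:.0%}")  # the impressive summary number
print(f"Avg episodes, supports in place: {sum(supported)/len(supported):.1f}")
print(f"Avg episodes, supports missing:  {sum(unsupported)/len(unsupported):.1f}")
```

Under these assumed numbers, the summary shows a 95% reduction, yet on the days the supports failed, the behavior runs at the full baseline rate. The aggregate measures the supports, not the child’s skills.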
Now, to be clear, I don’t think there is anything wrong with those types of interventions. They allow us to maintain the safety and dignity of the individual child. But just don’t confuse them with interventions that are likely to make a long-term significant difference in the lives of children with severe problem behaviors. The only thing that has a chance to do that is teaching skills, which can take some time.
So, sure: while we are carefully building the skills needed to be successful in therapy, make whatever modifications are necessary and practical to maintain the safety and dignity of the child when not in therapy. Just be careful not to confuse success under those modified conditions with the likelihood of success after therapy ends.
Practical vs. Meaningful Plus Electronic
A critical component of programs based on the principles of applied behavior analysis is the commitment to data-based decision making. Part of this commitment involves deciding what data we should collect. I teach two important criteria for making that decision:
- Data must be meaningful. The data must truly capture how well the client is doing and help guide programming.
- Data must be practical. Teachers and parents should be able to collect the data.
Unfortunately, balancing these criteria can lead to conflict. I have frequently seen schools and even programs designed by BCBAs collect superficial types of data that aren’t very useful in helping to make decisions. Why is this?
It is quite possible to design a data collection system that would work in a scientific laboratory, with two-way mirrors and professionals collecting data on sophisticated computer programs, rewinding videos as needed to capture every relevant detail. That doesn’t mean it can be done in a classroom or in a home with multiple siblings present.
In practice, when “practical” and “meaningful” conflict, the winner will almost always be practical. After all, having no data at all will get you in major trouble. But people rarely dig into the details and evaluate how meaningful or useful the collected data are for making decisions. If you do, you will find that in many cases, poorly thought-out data lead to misleading conclusions.
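A classic example of a practical measure that can mislead is partial-interval recording: a busy teacher only has to note whether the behavior occurred at any point in each interval, but that method is known to overestimate how much the behavior actually occurs. A quick simulation (hypothetical parameters, chosen only to illustrate the effect) shows the size of the gap.

```python
import random

random.seed(0)

SESSION_S = 600   # 10-minute observation session, tracked per second
INTERVAL_S = 10   # interval size a teacher might realistically manage

# Simulate brief behavior bursts covering roughly 5% of the session.
occurred = [random.random() < 0.05 for _ in range(SESSION_S)]

true_pct = sum(occurred) / SESSION_S  # true proportion of time with behavior

# Partial-interval recording: score an interval if behavior occurred at ANY point in it.
intervals = [any(occurred[i:i + INTERVAL_S]) for i in range(0, SESSION_S, INTERVAL_S)]
pi_pct = sum(intervals) / len(intervals)

print(f"True occurrence:           {true_pct:.0%}")
print(f"Partial-interval estimate: {pi_pct:.0%}")  # substantially higher
```

With these assumed rates, behavior filling about 5% of the session scores in roughly 40% of intervals. The measure is easy to collect, but a team reading it at face value would badly misjudge the behavior, which is exactly the practical-versus-meaningful trade-off described above.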
Currently, though many of us could use some Poogi in this area, I fear this problem is instead getting worse. That’s because of a relatively new third criterion:
We want to record the data electronically.
In addition to creating practical and meaningful data collection systems, you have to make sure the data can be collected using an app. Certainly, collecting data electronically has many advantages and is extremely useful in many situations. But the technology by itself is very unlikely to increase the focus on how useful the collected data are for the child’s program. I see many programs twisting what they want to do so that it fits into the data collection system that has been purchased.
It is only a matter of time before our paper data sheets are as common as typewriters. In general, that’s a good thing. The improvements in efficiency and effectiveness from these systems are apparent. Just be careful to keep in mind the importance of collecting meaningful data as well as creating a practical data collection system. The goal is not to make the child’s programming fit into the technology. The goal is for the technology to assist us in making the child’s program better. Too often it isn’t working that way.
Why Measuring Successful Treatment of Problem Behavior Is So Difficult
In 2016, Behavior Analysis in Practice published an excellent article on the measurement of problem behavior called “A Proposed Model for Selecting Measurement Procedures for the Assessment and Treatment of Problem Behavior.” LeBlanc, Raetz, Sellers, and Carr present a model that behavior analysts can use to select an appropriate measurement procedure during the assessment and treatment of problem behaviors. The model captures many of the practical problems that can arise when selecting a measurement procedure, and it is a useful contribution to the field. The article covers the basics of measuring whether problem behavior is happening and how much it is happening, which is obviously essential data to have, and the authors do an excellent job of working through the difficulties involved.
When I read it a couple of years ago, the first thing I thought about was how useful this model might be for training new behavior analysts. The second was how much more is needed to be successful in any practical situation. A good measurement procedure is necessary, but not nearly sufficient, for practitioners. Obviously, including everything needed for a successful outcome would go well beyond the scope of that brief article. And I’m not even talking about successfully treating problem behaviors; just measuring the effects of treatment is a difficult undertaking.
So, what else is needed to measure successful treatment of problem behaviors besides knowing if the problem behavior is happening or not? In my view, at least five other measures are needed:
What else is he or she doing? –The Sit Down and Be Quiet Problem
Often, one of the primary outcomes people want from a program based on the principles of applied behavior analysis is the absence of problem behaviors. Of course, that can be an important outcome, but it certainly matters what the person is doing instead. Everyone is happy when significant self-injurious behavior or aggression is eliminated, but if the person is left sitting around doing nothing of much interest, it isn’t a very valuable outcome. It is essential that the person is doing something else that benefits their well-being. Are they engaging in valuable communication, social, academic, or self-help skills? If not, it is hard to argue that the treatment accomplished much of value.
What are the staff doing? –The Child Effects Problem
In research, what the staff or parents do during treatment is very carefully monitored, but this rarely happens in practice in schools or homes. In the real world, staff usually have to make some “judgment calls.” This often leads to staff doing things to prevent problem behaviors that may or may not be in the child’s long-term best interest. Often, the child shapes the adult into avoiding things the child doesn’t want to do; the literature refers to this problem as child effects. If you don’t know what the staff are doing during treatment, maintenance and generalization of treatment effects are extremely unlikely. Therefore, it is absolutely essential to have some measurement of how well the treatment is being implemented, not just for quality control, but to help with generalization and maintenance of the treatment effects.
Will this last over time? –The Maintenance Problem
A very common pattern in the treatment of problem behavior is that things work great at first, but the gains don’t last in the long run. From a measurement perspective, the problem comes when people say things like, “Since we haven’t seen any problem behavior in six months, we don’t need to measure problem behaviors anymore.” Think of dieting: in about 99% of cases, people gain the weight back after the diet. We don’t really have data on how often problem behavior comes back after successful treatment, but I can guarantee you it is a lot. Behavior analysts need to build a maintenance plan, with good data collection procedures, into their programs.
Will this work in the real world? –The Natural Contingency Problem
There are many potential ways to reduce a problem behavior. Often, behavior plans are implemented over very long periods of time. That is appropriate and necessary most of the time. But eventually, parents and teachers want to live life without having to worry about managing the behavior all the time. If we want to achieve long-term maintenance, the child has to engage in an alternative behavior for a real-world reason that will meet a natural contingency. In other words, the appropriate alternative behaviors will be reinforced even if no one plans it out in advance. Eventually, we need to measure how the real world responds to the child’s alternative behaviors.
Are there hidden reasons treatment will fail? –The Social Validity Problem
Not everything can be measured with objective data. For example: the teacher doesn’t agree with the treatment procedures; Grandma won’t follow the protocol; the plan isn’t practical at church; the paraprofessional hates the plan and does her own thing when no one is around. It is essential that we get the honest subjective judgments of all the people involved in the treatment. The objective data might show great results, but if important people in the child’s life still have significant concerns, it is only a matter of time before those issues start to show up in the objective data too.
In conclusion, the article “A Proposed Model for Selecting Measurement Procedures for the Assessment and Treatment of Problem Behavior” is a great place to start on measuring treatment for problem behavior. Just recognize that there is a lot more needed to be successful in practical settings.