The Evaluation of “Appropriate”
I used to be one of the expert evaluators who would give an opinion on whether a program was “appropriate” or “not appropriate.” I almost never agree to do these any longer, even though they can certainly help children get better programming, at least sometimes. I just didn’t enjoy the work. I’d rather be the person doing the programming and let someone else evaluate my work. I’ll Poogi more that way, too.
By law, schools are supposed to provide children with disabilities FAPE (Free Appropriate Public Education). But what does “appropriate” look like? Parents of children with disabilities and school districts often don’t agree on what “appropriate” means. There is a lot of legal opinion on this topic that I don’t intend to get into here. For now, the main point is that there is no agreed-upon standard to determine if a program is appropriate.
There are some generally accepted rules. For example, the school district is not required to provide the “best” program, only one that is appropriate. On the other hand, if a child is making no progress or only minimal progress, the program is clearly “not appropriate.” This is an adversarial process; the district and parents often fight about who the appropriate experts are to settle the dispute.
When you have a subjective standard like this, it is impossible to prevent huge biases from entering into the decision. This is especially true when large sums of money are involved (e.g., if the results of the evaluation determine whether or not a student is outplaced to a private school). That’s why there is so much fighting over who is the appropriate expert. Their biases (and, I think, who is paying them) can have a huge impact on the opinions they render.
What would be an example of a huge bias? Well, there are lots of things that BCBAs do not agree on easily: PECS vs. Sign Language, IISCA vs. Traditional Functional Analysis, or VB-MAPP vs. PEAK. Sometimes the person on the other side is from a different profession, which makes the conflict even more difficult.
The key lesson for me is that these decisions are rarely decided based solely on data (even if the primary decision-makers are BCBAs). I’ve seen terrible programs survive an evaluation as appropriate. I’ve seen excellent programs deemed as not appropriate. It is not enough to master data analysis; we must learn to work in complex social environments, too.
There isn’t an easy answer if you are stuck in one of these conflicts. But I can offer at least one small, yet helpful tip. People might say the decision is based on science and data, but that is rarely the case. The reinforcers controlling the decision are complex and usually have very little to do with the child. As soon as you realize that, you can better work to advocate for the child in a way that truly is based on science and data.
How Do You Ask About Negative Side Effects?
Many cities have implemented behavior change programs in an attempt to get drivers to stop running red lights. Specifically, they put cameras at traffic lights, and drivers get mailed a ticket if they run a red light. These cameras are extremely effective at reducing the number of drivers who run red lights. Unfortunately, drivers often accomplish this by stopping short, dramatically increasing the number of rear-end collisions.
Applied Behavior Analysis has a whole literature on how changing one behavior often produces many other unplanned behavior changes. Sometimes the unplanned changes are positive. But, as the red-light cameras show, I think negative side effects happen in practice much more often than is usually appreciated.
I have previously argued that we should both think ahead about possible negative side effects and make plans to prevent them. This is especially true with programs to improve staff behaviors. Planning is a good start, but we would also like a measurement. That’s pretty hard. How do you measure a negative side effect that you haven’t anticipated?
Well, of course, you have to spend time talking to staff. I strongly suspect that how you ask about negative side effects matters a lot. If you ask staff, parents, or others, “How is everything going?” or “Are you doing all right?”, the likely answer is “good” or “fine.” If you truly want to dig into the problem of negative side effects, you need to be more direct. When asking, assume that there are still negatives occurring, or else you substantially reduce the chances that you will be told about them. Use statements like:
“It looks like the program to reduce self-injury is going great. What problems have you been experiencing?”
“All programs appear to be going fantastic right now. What are your current three biggest problems?”
In other words, unless we are at a point where the child no longer requires intervention, there will still be big problems to address. Some of them may be negative side effects of programming; others may be challenges we have yet to address. Perhaps you don’t want to do this too often, though. We want to keep an overall positive environment and not have everyone looking for the negatives all the time. In my view, this is a subtle but important skill for BCBAs.
Just Do It
During the planning stages of working with children with autism, it is important to take time to think through the long-term implications of your program, look for potential negative side effects, and in general be very thoughtful before implementing something new.
But things change a bit when you are actually sitting with the child. Now, seconds matter. You don’t have time to hesitate and think through all your options. But new staff who generally know what to do, yet aren’t fluent with their skills, may hesitate out of fear that they might mess something up or be criticized for making an error.
When shaping a new skill, small errors are not likely to mess up a program, especially if you learn from them. If your prompt is off slightly, if you don’t reinforce at exactly the right time, or don’t say exactly the right thing, those mistakes can easily be corrected. But if people are afraid of even making small errors, training is sure to take MUCH longer than it should. Encourage people to “just do it.” Small errors can be fixed later.
Of course, supervisors need to be clear about when staff should not “just do it”: anything that would potentially create a safety issue, compromise the dignity of the child, or hurt rapport in the relationship.
But as a general rule, when staff are in that awkward phase where they have completed training and demonstrated accuracy in implementing the skills, but are not yet fluent or confident, “just do it” is usually great advice.