Applied Behavior Analysis (ABA) has essentially won the argument over the need for data to guide treatment decisions. At this point, nearly everyone accepts that data should drive clinical decision-making.
The problem is that BCBAs often use data that are not very useful for clinical decision-making. One example I have frequently talked about is problem behavior data collected across an entire school day. Often, these types of graphs show wide variability in the data. But, in most cases, we have no indication as to why, since many of the variables are uncontrolled: there was a substitute teacher today; a peer called the student a mean name on the playground; the math assignment was more difficult than usual; and so on. It is often hard to tell what causes what.
The combination of these two factors creates a conflict that leads to bad decisions. Specifically, we have taught everyone to expect to see the data, and they want to look at it. When there is a meeting, everyone wants to see the graph of how Johnny is doing with his problem behaviors. The pressure of showing the graphs at meetings leads to some bad decision-making. BCBAs tend to do things like teach staff to avoid all difficult situations to make the data look better. This is unlikely to lead to success in the long run.
Now, temporarily avoiding difficult situations that the client isn’t ready to handle is often a smart strategy. The problem is that some BCBAs use the reduction in problem behaviors as evidence of progress. If the client has problem behaviors during math, making the math easier or temporarily avoiding math altogether may be appropriate in some circumstances. But we can’t claim that the resulting reduction in problem behavior represents “progress.”
What’s the solution? Simple. We need to teach people what types of data matter. In the example above, don’t show data on random, uncontrolled incidents. Instead, show data from specific contexts where we are teaching the child to handle a difficult challenge. Now we can fairly evaluate whether an intervention is working or not. We then work through each challenge until the child is successful in all contexts.
Of course, that is not as easy as it sounds. It takes time to explain this to administrators, teachers, parents, and others. But once people understand this and we are using the right data to make decisions, real progress is much more likely.