We typically want a simple answer to the question, “Is treatment effective?” Ideally, the answer is “Yes” or “No.” But people will accept a graph as the answer to that question. When a school district, attorney, parent, or insurance company wants information on the effectiveness of treatment, BCBAs are likely to respond (appropriately) with graphs of the relevant behavior changes. In most situations, answers like these, paired with a visual display, will be readily accepted:
Question: “Was the treatment for self-injury effective?”
Answer: “Here is the graph for self-injurious behavior collected over the previous six months showing a 93% reduction in self-injury from baseline levels.”
Question: “Was the teaching program to answer questions starting with ‘where’ effective?”
Answer: “Here is the graph showing baseline levels at 0% correct. After treatment, data demonstrate 100% correct on 3 consecutive sessions.”
Even assuming the data are completely accurate (which is a very questionable assumption in many environments), there are still a very large number of ways the data can be completely misleading.
- The treatment was temporarily effective, but the effects did not last over time.
- The treatment was effective, but staff or parents are using other interventions that make it look more effective than it actually is.
- Data were collected in a way that, while accurate, did not meaningfully measure the skill.
- The treatment was effective but caused other negative effects to occur.
There are other possible ways for data to be misleading, but an exhaustive list is beyond my scope for today. The key point is that having graphs of your behavior changes is essential, but it is not sufficient. We need to make sure that we have really made a socially significant difference. Sometimes the data on behavior changes will look great and the program can still be a failure.