Just because it looks like a duck…
…swims like a duck, and quacks like a duck doesn’t mean it is a duck.
Sometimes, BCBAs think we are talking in plain English, but we really aren’t. Many people, including many BCBAs, are confused about what we mean when we say “those behaviors are the same” and “those behaviors are different.” This common misunderstanding can really inhibit communication, and it has even led to serious criticisms that don’t make any sense.
I once heard a famous critic of behavior analysis argue that there is a huge difference between a child who looks at an adult in order to gain a reward and a child who looks at an adult out of love. Of course, no behavior analyst would disagree. That’s not a criticism, that’s common sense. The critic clearly didn’t understand how behavior analysts define when behaviors are the same and when they are different.
Let’s explain this concept in plain English, avoiding technical jargon. BCBAs consider two behaviors the same if they occur for the same reason (see note below). Even if two behaviors look very different, if they occur for the same reason, they are the same behavior. For example, a child may engage in many different problem behaviors (e.g., self-injury, aggression, tantrums, falling to the floor, screaming), but in most cases they occur for similar reasons, and are therefore not really different behaviors.
On the other hand, sometimes behaviors may look almost identical. But if they occur for different reasons, they are different behaviors. For example, if you are reading a book for enjoyment, that is a different behavior than reading a textbook because there is an exam tomorrow morning.
Therefore, in behavior analysis, the duck test doesn’t work. Often you can’t tell, just by observing, why a behavior is occurring. And if the behavior isn’t occurring for natural reasons, it likely won’t last in the long run.
We need to be very careful when we say things like “he is doing so well sharing his toys, playing with friends, participating in class, eating his vegetables, or refraining from problem behaviors.” Those things might look great. But if the child is doing them for contrived reasons, that’s the wrong behavior, and it’s not likely to last.
So, the critic may have a point. He is wrong about the behavior analysis because he doesn’t understand the jargon. But he may be correct that, in practice, progress may be exaggerated because we often don’t report on the reason why those new positive behaviors are occurring.
NOTE: For the picky BCBAs out there, I explained it this way in order to avoid explaining response classes, topography vs. function, and other difficult-to-understand concepts. That’s way beyond the scope of this post.
Prompt-Level Data Is Not in the White Book
Let’s say we are teaching a learner to wash his hands. Probably, we’ll start by creating a list of the steps the child will need to learn to wash his hands. The first step in this sequence might be to turn on the water, and so on. Often, practitioners will measure how well the child is doing by measuring the “prompt level.” For example, a child might require a physical prompt to turn on the water. In this case, the therapist or parent might have to hold the child’s hand to show them how to turn on the water. After some teaching, the child might do it with just a gesture toward the faucet or a verbal reminder to turn on the water. Finally, we hope that the child turns on the water without any prompts. Tracking the prompt level is meant to show whether the child is improving, that is, whether he or she needs less intrusive prompts over time.
I find this to be an extremely common measurement procedure in practice, but when I look through the new edition of the White Book, this procedure isn’t in there. This book is generally considered the flagship textbook in Applied Behavior Analysis. Why does our flagship textbook not cover such a commonly used procedure? I suspect it is because it is a very poor way to measure progress.
I don’t know how prompt-level data became so widely used in practice. I used it myself for many years. But later, I realized there were much, much better ways to measure progress.
Prompt-level data is problematic for the following reasons:
- A prompt is not the child’s behavior, but the teacher’s behavior. The teacher is the person who determines which prompt to use and when to use it. These data often vary dramatically based on who is doing the prompting.
- Prompt-level data often interferes with good teaching by requiring the therapist to record a lot of data instead of focusing on the learner.
- Effective teaching requires the therapist to completely focus on the learner and to provide the reinforcement at just the right time. That feedback is the primary mechanism required for teaching. This procedure makes the teacher focus on the prompt rather than the reinforcement.
- Prompt-level data can often exaggerate progress. It is relatively easy to show a reduction in the intensity of prompts, but it can be difficult to obtain true independence.
What to do instead? Simple. Just measure each step in the list of skills and record whether or not the child performed it independently, regardless of what the teacher did to help. This produces much more reliable data, and it makes it easier for the teacher to focus on teaching.
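To make the procedure concrete, here is a minimal sketch of independence-based data collection for a task analysis like the hand-washing example above. The step names, the scoring function, and the sample session are all invented for illustration; real data sheets and software will differ.

```python
# Hypothetical independence-based data collection for a task analysis.
# Each step is scored True only if the learner performed it with no
# prompt of any kind; any help, however slight, is scored False.

HANDWASHING_STEPS = [
    "turn on water",
    "wet hands",
    "apply soap",
    "scrub hands",
    "rinse hands",
    "turn off water",
    "dry hands",
]

def percent_independent(trial):
    """Return the percentage of steps completed independently.

    `trial` maps each step name to True (independent) or False
    (prompted), regardless of which prompt the teacher used.
    """
    done = sum(1 for step in HANDWASHING_STEPS if trial.get(step, False))
    return 100 * done / len(HANDWASHING_STEPS)

# Example session: the learner did 4 of the 7 steps without any prompt.
trial = {
    "turn on water": False,
    "wet hands": True,
    "apply soap": True,
    "scrub hands": False,
    "rinse hands": True,
    "turn off water": True,
    "dry hands": False,
}
print(round(percent_independent(trial), 1))  # → 57.1
```

Note that the score ignores what the teacher did entirely; the only question recorded is the learner’s behavior, which is what makes the measure reliable across different therapists.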
One Reason Why Assessment Results May Be Misleading
When we go to the doctor for a medical test, often we want to know whether we “have” or “do not have” a particular medical condition. For example, the patient wants to know if he has cancer, COVID-19, or strep throat. The expected answer is either “yes” or “no.” Sure, some medical tests are more accurate than others, and any test might be wrong some of the time. But, still, they seem useful in the medical profession.
Now, in behavior analysis, we sometimes act as if we are doing something similar to medicine. We are testing for the presence vs. the absence of a behavior (or a cause of a behavior), and those results are used to make decisions. In reality, we just about never do that.
In behavior analysis, we can certainly demonstrate the presence of a behavior or the causes of a behavior. For example, we might do assessments and show that a student can label all of his colors, wash his hands, or engage in problem behaviors in order to escape instructional demands.
But unlike the medical profession, in behavior analysis, there is no way to demonstrate that the absence of a behavior means that the behavior never occurs, or that a behavior in a learner’s repertoire is never caused by a certain event. That’s because there may always be a situation we haven’t tested in which the learner engages in the behavior.
Understanding this is one key to avoiding misleading data and making poor decisions. For example, a common problem I’ve observed is that a BCBA starts working with a learner and conducts an initial assessment of skills. After intervention, a reassessment shows that the child has mastered hundreds of new skills very quickly. Now, that might be the case, sometimes. But it’s more likely that the child already had many of those skills in his or her repertoire. He just didn’t demonstrate them on the initial assessment due to problem behaviors or other reasons. Now that the team has reduced problem behaviors and improved attention, the child has the opportunity to demonstrate many more skills.
In a similar manner, people will argue that a behavioral assessment demonstrated that a child does not have problem behavior motivated by attention since the assessment showed the behavior only occurred under escape conditions. That’s just silly, especially if you consider how this is typically tested. What happens is a child is left with little to do, then an adult (often unknown to the child) pretends they are busy. If the child engages in any problem behavior, the adult provides a reprimand. Of course, a child might not engage in any problem behaviors under those conditions, but still be the class clown when all the other kids are laughing at his antics in the classroom. That’s one reason I don’t do these types of assessments any longer. Greg Hanley and colleagues have demonstrated a better way.
In behavioral assessment, we can show that a behavior or a cause of a behavior does occur. But we must always be careful about stating that a behavior doesn’t occur. All we really know is that the behavior didn’t occur under the circumstances we tested. That’s a critical difference, and misunderstanding this point leads to bad treatment decisions.