What About the Hard Part?

Recently, I thought it would be fun and useful to learn Spanish. I signed up for an online app that featured cute characters and gamified lessons with rewards, badges, and lots of fun activities. I did lessons on the app for a few minutes per day, and soon my “streak” was in the hundreds of days. Did I learn Spanish? Well, the data looked good. I had learned lots of words. I could understand much more than when I started. My pronunciation was improving. Was it useful? Hell no. I should have known better. What was the problem? Learning enough to make Spanish a useful skill would take hundreds, probably thousands, of hours of work. That’s hard. At a few minutes a day, by some estimates, it would take around 20 years for Spanish to become a useful skill. And that assumes everything I needed could be learned in activities lasting only a few minutes at a time, which is probably not true.
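
For anyone who wants to check that “20 years” figure, here is a rough back-of-the-envelope sketch. The 600-hour target and the five-minute daily session are assumptions I chose for illustration; they are not the app’s numbers or any official estimate.

```python
# Back-of-the-envelope check of the "around 20 years" claim.
# Both inputs are assumptions for illustration, not real figures.
hours_needed = 600            # assumed hours to make Spanish a useful skill
minutes_per_day = 5           # assumed length of a quick daily app session

hours_per_year = minutes_per_day / 60 * 365   # about 30.4 hours of practice per year
years = hours_needed / hours_per_year
print(f"{years:.0f} years")   # prints "20 years"
```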


The app was fun and engaging, and maybe even useful as a supplemental learning activity. But I was unlikely to ever learn Spanish using only the app. Am I willing to commit the time and energy required to make Spanish a useful skill? I thought it through, and I decided the answer was no. I have too many other items on my agenda right now.


Most people who work in any form of education, from preschool through graduate level, love activities that make learning fun. Teaching concepts with singing characters, gamified apps, problems modeled on real-life examples, or a thousand other creative ideas seems much better than slogging through long, boring practice sessions.


Popular culture reinforces the idea that this is what great teachers do. I still remember my science teacher trying to replicate a lesson from WKRP in Cincinnati. Dead Poets Society is filled with Mr. Keating’s memorable lessons about conformity and, of course, the rip-it-out scene. The movie Stand and Deliver contains lots of examples, such as the apple scene and the gigolo problem. Despite the obviously inappropriate nature of some of this material, real-world teachers have actually used questionable content and gotten in trouble for it. The allure of the fun, creative teacher is strong.


No one likes this type of teaching more than the people who work with children with autism and other developmental disabilities. I see great, creative examples of the “make it fun” approach almost daily, and it often leads to faster progress. The child is more motivated, stays on task longer, and generally prefers fun activities to boring, difficult ones. That sounds great! If we use good judgment and avoid obviously inappropriate content, why not have fun? Including some fun lessons is fine. But you can have too much of a good thing.


Fun lessons can backfire. When you try to insert teaching activities into something fun, the child sometimes becomes annoyed; you are spoiling their good time. Even when the learner is having a great time, trying to make everything fun can have negative effects on both the learner and the teacher. The learner comes to believe they should never have to be bored or struggle to learn an important skill. And the teacher tends to judge how well they are doing by whether the learner is happy and excited in the moment.


Often, it is relatively easy to make some progress at the beginning of learning a skill, but then the hard part comes. Getting through the hard part may require struggle and, yes, boring practice. Learners who have only experienced the “make it fun” approach may not make it through. Teachers may get discouraged when the kids don’t look happy. Observers may assume that bored-looking kids mean the teaching must be terrible.


Usually, the best predictor of how useful a skill will be is how good you are at it. If you don’t overcome the hard part, you may never use the skill in real life. Learning to sing along with the cute preschool letter videos is fun. Reading connected text can be hard. Riding a bike with training wheels is easy. Balancing yourself on the bike can be scary and difficult.


Making lessons fun is a wonderful instructional strategy. Just don’t delude yourself that every lesson can be like that, and don’t teach learners that every lesson will be fun and games. Learners sometimes must do difficult things that aren’t fun, and until you’ve mastered the hard part, the skill isn’t useful. If you aren’t prepared to put in the time and energy to get through the hard part, it’s better not to start.


On the other hand, if you are a teacher and are being evaluated, I’d definitely pull out the fun lessons. The evaluators go to the movies and watch TV too. They know the stereotype of what good teaching looks like.



Responding to Conspiracy Theories: A Guide for BCBAs

In many fields, science-based professionals have recognized that “the amount of time and effort needed to refute bullshit is an order of magnitude bigger than that needed to produce it” (a version of Brandolini’s law).

It takes almost no time to misinterpret a research study, take things out of context, come up with crazy hypotheses, or simply make up “facts.” On the other hand, correcting the inaccuracies in a conspiracy theory argument can take an enormous amount of time. Often, these arguments contain a hint of truth that makes them sound quite convincing. This is why some professionals have been embarrassed in live “debates” even though they were obviously on the right side of the science.

For this reason, many scientists have taken the position that it doesn’t make sense to have live debates with conspiracy theorists. A data-based, carefully thought-through argument takes a lot of time, and it is unreasonable to expect anyone to respond in real time when the other side just makes crap up. It’s much better to do your debunking in writing.

This problem often puts BCBAs in a tough spot. We simply can’t do everything in writing; in most cases, decisions about client recommendations are made at meetings, and it is easy for the conspiracy theorist to present complete nonsense that is very hard to debunk on the spot. This can lead to negative outcomes. For example, the client may end up with non-science-based therapies that don’t improve their life. Or the BCBA can lose the support of the team and be thought of as a “BCBA-hole” for coming out hard against the recommendation.

The solution to this problem is difficult. First, recommendations not to be a “BCBA-hole” are appropriate; I wish someone had told Barry from the Bronx that 30 years ago. On the other hand, don’t assume that good social skills or iron-clad reason and logic will solve the problem. To see why they won’t, just watch Anderson try. Those skills are necessary, but not sufficient.

Some BCBAs (including me) have argued for a data-based approach to this problem: “Let’s first collect data with intervention X and then without intervention X and look at the differences in performance.” I have done this many times, and although it sounds simple, it can add many, many hours of work. It can be effective, sometimes, but not usually. Why not? The conspiracy theorist can easily come up with multiple bogus excuses for why intervention X wasn’t effective and what should have been done differently. That takes barely any time at all. You’ll have them state the requirements in advance? Again, see what Anderson thinks about that. Only rarely should we take on a commitment to “debunking” work. It’s an enormous amount of time and effort and will rarely be effective.

What Can Applied Behavior Analysis Learn from Kidney Transplant Centers?

In the US, thousands of people die each year waiting for a kidney, yet thousands of donated kidneys are thrown away. Why? There are many reasons, but one of them is a problematic measurement system. Kidney transplant centers are evaluated on their success rate: they are required to have a 98% success rate, and anything less can lead to serious negative consequences. What happens when a center gets a donated kidney that isn’t perfect but is still pretty good? Let’s say the estimate is a 90% chance of saving the patient’s life. What should the center do? If it wants to stay open, it should probably throw the kidney in the garbage. Accepting kidneys with a 90% chance of success would be devastating to the center’s ratings, and there is no penalty for not trying.
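
To make the incentive concrete, here is a quick sketch of the arithmetic. All of the specific numbers (100 standard transplants, 20 marginal kidneys, the 99% and 90% success rates) are hypothetical values I chose for illustration; only the 98% threshold comes from the paragraph above.

```python
# Hypothetical illustration of the transplant-center incentive problem.
# The counts and success rates are made-up numbers, not real data.
standard_transplants = 100    # kidneys the center would accept anyway
standard_success = 0.99       # assumed success rate for those
marginal_transplants = 20     # "pretty good" kidneys it could also accept
marginal_success = 0.90       # assumed success rate for those
threshold = 0.98              # required success rate

# Option A: discard the marginal kidneys.
lives_a = standard_transplants * standard_success
rate_a = standard_success

# Option B: accept the marginal kidneys too.
lives_b = lives_a + marginal_transplants * marginal_success
rate_b = lives_b / (standard_transplants + marginal_transplants)

print(f"Discard: rate {rate_a:.1%}, about {lives_a:.0f} lives saved")  # 99.0%, 99
print(f"Accept:  rate {rate_b:.1%}, about {lives_b:.0f} lives saved")  # 97.5%, 117
print(f"Meets the {threshold:.0%} bar? Discard: {rate_a >= threshold}, Accept: {rate_b >= threshold}")
```

Under these made-up numbers, accepting the marginal kidneys saves roughly 18 more lives but drops the center below the threshold, which is exactly the contingency described above.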


It is my opinion that this type of problem occurs all the time in applied behavior analysis programs for children with autism. Most programs set goals and objectives that have a very high chance of success, because we have to report the learner’s progress on those goals and objectives, and several unmet goals will look bad on the report even if there is no formal evaluation system. So, if we aren’t sure we can meet a goal, we often leave it out, even though reaching it would greatly improve the child’s quality of life. There is no penalty for leaving the goal off the report, but there is a possible penalty for including it and failing to meet it. This can shape the behavior of whole agencies. Contingencies matter.


This problem can be quite subtle. For example, let’s say a learner at a school or clinic is having fantastic success. The supervisors, parents, and staff are celebrating everything the learner has accomplished. But how do the staff who work with a difficult-to-teach learner sitting just a few chairs away feel? Their learner has made some progress, but may never be able to match the accomplishments of the highly successful learner. Even if the supervisors provide staff with lots of positive feedback, staff will compare their learner’s progress to that of the highly successful learner. There are a variety of management strategies that may help staff morale in a case like this, but it sure is a challenge. This is why I believe many programs struggle to find staff willing to work with the most difficult-to-teach learners, even without formal evaluation criteria. Formal systems can sometimes exacerbate the problem. 


These are just a few examples. If you look for this problem, you will see it everywhere. We should study these kinds of measurement effects in applied behavior analysis. The research is a bit harder to do in ABA than in medicine because we don’t usually have easy-to-measure binary outcomes like death. But just because something is difficult to measure doesn’t mean it isn’t important.


Behavior analysts attempt to measure the quality of ABA services in a variety of ways. That’s extremely important, but we also need to pay attention to how we implement those measurement systems. I’m willing to bet that many negative outcomes result from well-intentioned attempts at measuring success.

Behavior analytic services should only be delivered in the context of a professional relationship. Nothing written in this blog should be considered advice for any specific individual. The purpose of the blog is to share my experience, not to provide treatment. Please get advice from a professional before making changes to behavior analytic services being delivered. Nothing in this blog, including comments or correspondence, should be considered an agreement for Dr. Barry D. Morgenstern to provide services or establish a professional relationship outside of a formal agreement to do so. I attempt to write this blog in “plain English” and avoid technical jargon whenever possible, but all statements are meant to be consistent with behavior analytic literature, practice, and the professional code of ethics. If, for whatever reason, you think I’ve failed in this endeavor, let me know and I’ll consider your comments and make revisions, if appropriate. Feedback is always appreciated, as I’m always trying to POOGI.