I’ve been mulling this question over… is there a difference between “applying learning” and “performance”? I think there is, and here’s how I explain it to myself… Applying learning is that space in which learners make their first attempts at applying knowledge and skill and get feedback (whether through their own observation or from others). Performance is the ongoing use of knowledge or skill on the job and the business outcomes that result. The application space and the performance space overlap, though, because both learning and application often happen on the job.
Why does this matter? Application of learning is critically important because it is the application that actually solidifies the learning. (For some theorists, you haven’t learned until you have applied.) It’s a step between learning and performance, and it’s often where our learning efforts stumble – some learning interventions fail because of a lack of application, not a lack of learning. Interestingly, we can also get performance without applying specific learned concepts, and we can have application without seeing a change in performance. The point is, these are different levels of action, and different realms of analysis, even if they occupy the same physical “space” (the work environment). The supports and barriers to application are NOT the same as the supports and barriers to performance.
These ruminations are the result of attending a Learning and Development Roundtable executive session on their 2008 study called “Bridging the Gap Between Learning and Performance.” Among other things, the study documented a truism we all know: that participants may LEARN, but if they don’t APPLY that learning, then we won’t get the performance bump we are looking for. Our response to that truism has often been to do performance analysis and make sure that there are no barriers to performance… but if we only do that, we are missing a critical interim step. We should also be ensuring that there are no barriers to application (which isn’t necessarily the same thing as a barrier to performance). LDR’s study showed that three characteristics of learning interventions drive application of learning: whether we design the program to promote application in the first place (what some would call performance-based design), whether learners are motivated to apply their learning (not motivated to learn, motivated to apply), and whether we address the barriers to application in the workplace.
When the topic of our learning intervention is simple job training, the line between application and performance is negligible and this discussion may be akin to navel-gazing. But when the subject of our learning project is more complex, like management skills or professional skills, it’s less clear. Let’s say we teach a specific communication model to managers to support change management. It’s possible to have a successful change effort without ever having applied that particular communication model. We can applaud the outcome and yet still wonder if the training time was wasted because the concepts taught were never applied. (If we are only looking at outcomes, we may never even notice the issue of unapplied learning.) Alternatively, we could see no improvement in our change management because the concepts were never applied. Wouldn’t it be important to uncover why these concepts were not applied? Could we have discovered in the assessment phase that the chances of application were slim? Could we have done more to increase the chances of application? Should we have proposed a different approach that was more likely to be applied?
So I do think it’s important to consider learning, application, and performance separately. The discussion reminds me why Kirkpatrick’s training evaluation model suggests multiple levels of analysis. To evaluate the training-related influences on performance change (or lack thereof), we have to know whether the target population learned (level 2), AND whether they successfully applied that learning (level 3). If a positive performance change doesn’t happen, we need to know where the breakdown is along the chain. (And we need to know whether there are non-learning-related barriers to performance as well.) But uncovering issues on the back end is too late; better if we do a more thorough job of analyzing the application environment up front and putting supports in place to help bridge the gap between learning and performance. That more comprehensive view is included in Learning Environment Design.
Please share your own thoughts and examples along these lines. This is one of those things that feels so right (to me) but seems near impossible to put into words! I need to find a way to articulate this that makes sense to people other than myself, so your comments are very valuable.