Tuesday 17 September 2013

Explaining Total Value Add


One or two people who have read the e-book I published last year with Alasdair Rutherford – Total Value Add, a new approach to evaluating learning and development – have asked for more information about how Total Value Add works.

Total Value Add rests on two related ideas.  The first is that organisations derive a great deal of value from learning and development, and need to capture more of it, as cost-effectively as possible.  The second is that different evaluation methods and tools lend themselves to capturing different kinds of value, so a range of tools and techniques needs to be applied across different situations.

Let’s take the first idea first.  Opportunities for development are increasingly seen as part of the benefits package of working for an organisation – at least by the more enlightened and ambitious employees – and so it makes sense to check that learners enjoy the learning opportunities.  Reaction sheets do this well, but do you really need to check the reactions of every employee to every learning experience?  That’s what most organisations do, and it’s not cost-effective: judicious sampling would serve the same end for a fraction of the effort expended.
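To put a number on ‘judicious sampling’, here is a minimal Python sketch of the standard sample-size arithmetic (Cochran’s formula with a finite population correction).  The workforce of 2,000 and the five-point margin of error are illustrative assumptions, not figures from the book.

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Responses needed to estimate a proportion (say, '% of learners
    satisfied') to within `margin`, at the confidence level implied by
    z (1.96 = 95%), with a finite population correction applied."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    return int(math.ceil(n0 / (1 + (n0 - 1) / population)))

print(sample_size(2000))   # 323: far fewer than surveying all 2,000
```

The conservative assumption p = 0.5 gives the largest sample you would ever need; where earlier surveys suggest responses cluster, the number falls further.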

That’s one example.  Another is that learning interventions are not necessarily the sole means of effecting the kinds of behaviour change organisations need – but they may provide a spur.  Acting in concert with line managers dedicated to managing performance, learning interventions can help bring about improvement, perhaps by training line managers to be better coaches, or by providing performance support to employees on the job.  This sort of intervention requires analysis of the performance desired, a measure of the current level of performance, and a means to record progress in improving performance.
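As a hypothetical illustration of those three requirements held together (the performance desired, the current level, and a record of progress), consider the following Python sketch; the class and the call-centre measure are invented for this post, not prescribed by Total Value Add.

```python
class PerformanceTrack(object):
    """One tracked measure: the performance desired, the level measured
    before the intervention, and readings recorded as coaching and
    performance support take effect."""
    def __init__(self, measure, target, baseline):
        self.measure = measure      # e.g. "calls resolved first time (%)"
        self.target = target        # the desired level of performance
        self.baseline = baseline    # current level, measured up front
        self.readings = []          # (date, value) pairs recorded over time

    def record(self, date, value):
        self.readings.append((date, value))

    def progress(self):
        """Fraction of the baseline-to-target gap closed so far."""
        if not self.readings or self.target == self.baseline:
            return 0.0
        latest = self.readings[-1][1]
        return (latest - self.baseline) / (self.target - self.baseline)

track = PerformanceTrack("calls resolved first time (%)", target=85.0, baseline=60.0)
track.record("2013-10-01", 70.0)
print("{:.0%} of the gap closed".format(track.progress()))   # 40% of the gap closed
```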

Turning to the second idea, our first example should direct you away from universal distribution of happy sheets.  These are often poorly designed, usually poorly distributed, and rarely properly analysed or acted upon.  A better use of resources, and a better way of capturing value, would be to select representative samples and interrogate them in more detail, perhaps using interviews rather than surveys.  The barriers here are that many L&D practitioners lack survey design and question-writing skills, and typically don’t know how to select a truly representative sample.
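On that last barrier, a proportional stratified draw is one straightforward way to make a sample mirror the workforce.  The sketch below, again in Python, assumes learners are grouped by department; the stratum choice and the 15% fraction are illustrative.

```python
import random

def stratified_sample(learners, fraction=0.15, seed=1):
    """Draw the same fraction of learners from every department, so the
    sample reflects the workforce's make-up rather than whoever happened
    to hand back a form."""
    rng = random.Random(seed)   # fixed seed makes the draw repeatable
    by_dept = {}
    for person in learners:
        by_dept.setdefault(person["dept"], []).append(person)
    sample = []
    for members in by_dept.values():
        k = max(1, int(round(len(members) * fraction)))
        sample.extend(rng.sample(members, k))
    return sample

# e.g. learners = [{"name": "A. Smith", "dept": "Sales"}, ...]
```

Because the draw is proportional, no re-weighting is needed at analysis time: each department contributes to the results in proportion to its size.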

The second example calls for an evaluation method that tracks employee performance and gauges the impact of L&D in improving that performance.  Possible methods include Business Impact Modelling, Dave Basarab’s Predictive Evaluation, and Ed Holton’s sixteen learning transfer indicators.  Most L&D practitioners – in the UK at least – will scarcely have heard of these methods, far less have the skills to implement them.  Yet neither of the two most commonly recognised evaluation models, Kirkpatrick and ROI, provides the means to tackle this sort of issue.

Part of the problem is that many L&D practitioners are not even aware that they don’t know enough.  We need to raise awareness of what’s involved in evaluating L&D, spread knowledge of a wider range of tools, and clarify what extra skills are needed.  More information can be found at www.airthrey.com.
