One of the things I have grown to appreciate over the past few years is marketing. In fact, one of the first things I wrote on the topic was about what Chief Marketing Officers can teach Chief HR Officers. There is quite a bit of activity in the marketing department that we should all appreciate. From tailoring your approach to your audience to relentlessly testing your campaigns, there are some great insights into how marketers operate. Today we're going to talk specifically about split testing.
The easiest way to explain split testing is this:
Let's say I walk up to you and wordlessly hold out a piece of cake. When the next person comes by, I hold out the same type of cake in the same way, but I smile and say, "Hello!" cheerfully. In both cases, I note whether the person takes the cake.
That's a split test, or A/B test. The point is to keep every element of the scenario the same except for the single item you're explicitly testing (in this case, the greeting). Over the course of many tests (dozens or more, not just two or three per variant) you learn how that item affects the outcome of the experiment. Then you repeat the process with a different element as the test variable.
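If you like to keep score in a script or spreadsheet, here is a minimal sketch in Python of how you might tally a test like the cake example. The variant names and counts are hypothetical; the only point is that each variant differs by exactly one item (the greeting) and you compare how often people take the cake.

```python
# Minimal A/B test tally for the cake example.
# The only thing that changes between variants is the greeting; counts are hypothetical.

def acceptance_rate(accepted, offered):
    """Share of people who took the cake out of everyone it was offered to."""
    return accepted / offered if offered else 0.0

# Variant A: cake offered silently; Variant B: cake offered with a cheerful "Hello!"
results = {
    "A_silent":   {"offered": 50, "accepted": 21},
    "B_greeting": {"offered": 50, "accepted": 33},
}

for variant, counts in results.items():
    rate = acceptance_rate(counts["accepted"], counts["offered"])
    print(f"{variant}: {rate:.0%} took the cake ({counts['accepted']}/{counts['offered']})")
```

With enough offers per variant, the gap between the two rates tells you whether the greeting is actually moving the needle or whether you're just seeing noise.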
The HR Impact
But what does this have to do with HR? Well, last year I read a case study about a company implementing an HR program, and they ran what is called a multivariate test. It's basically a split test, but with multiple variables being tested at the same time. Reading the story made me stop and think about the times our assumptions lead us to skip over these opportunities to learn more and make our practices better.
In the case study, the organization was trying to see if the quality of the training simulation/game had an effect on the outcome. They were also testing whether a charitable donation would impact the outcome in any measurable way. Here's how they set up the experimental groups:
- Group 1: High-Fidelity Simulation with a donation to a charitable cause
- Group 2: High-Fidelity Simulation without a donation to a charitable cause
- Group 3: Low-Fidelity Simulation with a donation to a charitable cause
- Group 4: Low-Fidelity Simulation without a donation to a charitable cause
In the end, the company found that the low-fidelity simulation received the same amount of attention as the high-fidelity version, even though it cost much less to develop. As for the charitable donation, it had no measurable impact either way.
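For the curious, here is a rough sketch of how results from a 2x2 setup like the one above could be summarized. The group structure mirrors the four groups in the case study, but the engagement scores and the simple averaging approach are my own illustration, not the company's actual data or analysis.

```python
# Rough sketch of summarizing a 2x2 multivariate test like the one in the case study.
# Engagement scores are hypothetical; only the group structure mirrors the article.
from statistics import mean

# Each group varies two factors at once: simulation fidelity and charitable donation.
groups = {
    ("high_fidelity", "donation"):    [72, 68, 75, 70],
    ("high_fidelity", "no_donation"): [71, 69, 74, 73],
    ("low_fidelity",  "donation"):    [70, 72, 69, 71],
    ("low_fidelity",  "no_donation"): [73, 68, 70, 72],
}

def factor_effect(factor_index, level_a, level_b):
    """Average outcome for one level of a factor minus the other, across all groups."""
    a = [s for key, scores in groups.items() if key[factor_index] == level_a for s in scores]
    b = [s for key, scores in groups.items() if key[factor_index] == level_b for s in scores]
    return mean(a) - mean(b)

print("Fidelity effect (high - low):", round(factor_effect(0, "high_fidelity", "low_fidelity"), 2))
print("Donation effect (yes - no): ", round(factor_effect(1, "donation", "no_donation"), 2))
```

The appeal of a multivariate design is exactly what the company found: one experiment can tell you that fidelity and donation each make little difference, which would have taken two separate split tests otherwise.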
Think about your own role. What can you test in your day to day? Here are a few examples of how I've incorporated testing in the past to improve my results and response rate.
- Sending out normal/mundane "communications" emails to all staff. I would send a "normal" one, and the next week I would send one with a funny title, text, or something else playful in the body. I quickly learned that adding something funny or interesting got more reads than the standard version, even when the information was identical. (See the sketch after this list for one way to track those results.)
- Working on open enrollment communications, I would send slightly different text to different local groups of employees to communicate the same concepts and news. I would gather feedback from those groups on what they liked or didn't like, then use that information to tweak the messages for the rest of the organization's employees.
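For anyone who wants to keep this kind of informal testing honest, here is a small sketch of logging email variants week over week and comparing read rates. The subject lines and counts are made up for illustration; in practice the log could live in whatever spreadsheet or CSV file you already keep.

```python
# Simple log of email split tests so results accumulate over time.
# Subject lines and read counts are hypothetical, for illustration only.
import csv
from io import StringIO

# In practice this would come from a CSV file you keep week to week.
log = """week,variant,subject,sent,opened
1,standard,Benefits Update,200,62
1,funny,Your Benefits Called - They Miss You,200,91
2,standard,Open Enrollment Reminder,200,58
2,funny,Last Chance to Pick Your Perks,200,88
"""

for row in csv.DictReader(StringIO(log)):
    open_rate = int(row["opened"]) / int(row["sent"])
    print(f"Week {row['week']} - {row['variant']:<8}: {open_rate:.0%} opened ({row['subject']})")
```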
These examples come from the many communications we send every day, but you can see how even a little bit of testing can improve your effectiveness. Have you ever used A/B testing in your HR practices, whether formally or informally? I'd love to hear about it.
Split testing is an interesting topic. We all do it whether we know it or not, though perhaps not in the strictest sense. I find that keeping data records helps build a picture of what is working and what isn't, so you can tweak and ultimately improve results over the long term.