How do you know you have the right product messaging?
We’ve all been there – you’re rolling out a new product and it’s time to finally write your headline. What you have is good… but how do you know if it’s just right?
To be honest, it doesn’t even have to be a new product, either. Marketers and product marketers are constantly rolling out website changes, product updates, and persona targeting that require new positioning and messaging – but knowing when you’ve arrived at the ‘right’ copy can feel impossible.
That’s why we sat down with Jason Oakley, Sr. Dir. of Product Marketing at Klue, to help us understand how to validate messaging and positioning, and the best ways to seek quantitative and qualitative feedback when making positioning changes. He even created a spreadsheet to help keep track of your messaging validation – download it below:
Name: Jason Oakley
Position: Sr. Dir. of Product Marketing at Klue
How did you get started with Product Marketing?
Although I had previously been in Sales, I really got into product marketing through my experience in customer success.
Prior to product marketing, I had done a little bit of field marketing and customer marketing, but out of the blue one day my boss came to me and said “Hey, Jason, you understand the customer really well, you understand our product really well, do you want to be a product marketer?”
So I tried it out, and I’ve been doing it ever since. I love it.
Tell us, what prompted you to create this framework?
I think it’s really easy for organizations – especially small startups driven by a founder or a charismatic leader – to end up in a situation where someone steps into a room and says, “this is what I think our positioning should be”, or “this is how I think we should message our benefits”. A lot of the time, the loudest person in the room ends up dictating positioning and messaging, simply because they’re so strongly opinionated and feel like it has to be their way.
So, for many product marketers, it’s easy to fall into a pattern where everyone just naturally agrees with that person’s positioning and messaging. It’s never truly validated – it’s only one person’s opinion. And let’s be honest, it doesn’t even have to be a super charismatic leader; it can just be someone with an opinion who wants to drive these sorts of things.
Having an actual framework for validating messaging puts a process in place and creates a way to show through data, whether it’s qualitative or quantitative, which direction you should go. And it also takes the pressure off you in some ways, too. Many people feel like “Oh I’m the product marketer, I need to wave my magic wand and invent the messaging”, and that is a ton of pressure to put on yourself.
Creating a framework to validate and test your messaging/positioning opens up the possibility of creating several ideas and then legitimately testing them to see which one works the best – which not only leads to a better outcome, but a smoother process.
As a PMM, how do you know when you need structured feedback on positioning?
I’ll start with this: In a perfect world, you’d always go through this process and strictly validate your messaging.
I want to call out right now – me saying this probably makes it seem like I’m someone who always goes through this framework and validates everything he does. Unfortunately, I’m not.
In the real world, there are a lot of reasons you may not be able to validate a piece of messaging or positioning. You may not have the budget, or you may not have the time. But whether you use this framework or not, the one takeaway you should have is that you can always be validating your messaging.
For example, when I was at Chili Piper, we were customers of Metadata.io (by the way, I didn’t work with that team directly, but they put out a mountain of great marketing content – shout out to them). At that time they had a Customer Advisory Board, and when they were doing their website refresh they drafted up the new website copy and sent it out to their advisory board and just said: “just tear the sh*t out of this messaging, tell us what you don’t like”.
So, as simple as it is, that is a form of validation, right? It doesn’t cost anything – all you have to do is find who your customers are or who your target buyers are and just give them your messaging and ask for feedback.
If you’re in an environment where you don’t have the time or budget to go and formally validate your positioning, then treat it as a quest to find out who your biggest raving fans are and just target people exactly like them and use messaging and positioning that resonates with those people.
You list several different ways to collect feedback – can you give a brief example or best practice on when to use each?
Surveys: When I created the framework, I put surveys at the top because, to me, they’re the first thing you can try when coming up with something. You can use something simple like Google Forms, but there are also paid tools I’ve had success with, like UserTesting or Wynter, that let you get feedback quickly.
Surveys are perfect for situations where you have a ton of directions you could go. In the past, I’ve run a test with upwards of 10 different variations of something we were trying, and a survey in that situation very quickly generates data on which options are clear outliers. You can throw those away and move forward with the remaining few – say, three in particular – to gather more quantitative or qualitative feedback.
Interviews: After you’ve gotten some early data with surveys, you can then collect rich qualitative data with actual interviews.
Interviews are easy, and they don’t have to be live or in person. It’s as simple as asking your interviewee “give me your feedback on XYZ”, and presenting them with an open text field and allowing them to just spew feedback at you. Of course, you can do in-person interviews if you’d like to get more context, but you really don’t have to complicate things.
Social/Google Ads: Ok, so if you still have 6 different versions of your messaging out there, you’re just not going to get enough eyes on it through ads unless you want to pay a ton of money to do it.
But, if at this point you’re able to narrow it down to, say, two variations and you’ve got the budget and time to put a Google ad out there, it’s a great way to test keywords or taglines because it has the ultimate measure of effectiveness: do they click?
And if you have even more time or budget, you can create a dedicated landing page and then ask the question: do they click AND convert?
The important thing with ads is that you really have to narrow it down to just 2 or 3 options unless you have an outrageous amount of time and money to spend on testing. But if you have the resources, it’s the best way to validate your messaging – there’s no better way to know if your message resonates than finding out whether people are clicking on it.
There are a few different methods of collecting feedback, both quantitative and qualitative – any best practices for using qualitative feedback and quantitative numbers in conjunction to validate your messaging?
Ok this one is kind of tricky, because it’s unique to every situation. I’ll walk through a recent experience I had in my last job and explain how I like to use quant/qual feedback when I validate:
My team and I were doing a website refresh and trying to finalize the tagline we wanted to use on the homepage. Anyone who’s done a refresh before understands the kind of pressure you’re under to get it just right, so we started with a dozen possible taglines.
We started with a simple preference test for our users, where they were asked “out of X options, which one do you prefer the most?” and could optionally explain why they liked their choice. That gave us solid quantitative, survey-based feedback, and the occasional written answer gave us some light qualitative information as well.
That process got us down to 3 final choices. At that point, we could put those three back into another preference test and seek a bit more qualitative feedback – for example, by making it mandatory to provide a ‘why’ when the user selects their choice.
Unfortunately, the three taglines were all performing pretty similarly – we didn’t have a clear winner. So something we found very useful was what I’ll call a ‘decision memo’, which can help teams make a final choice after validation (for the record, we didn’t invent the decision memo – Jeff Bezos and Amazon have been using it for some time).
That’s a good point though – what do you do when you have detractors or naysayers that disagree with your final decision?
That’s a good question, because you’re going to run into that at a certain point regardless of what you’re testing.
I personally really like an idea I’ve read about from HubSpot called “disagree and commit”. If you start the process with the understanding that a decision has to be made – and, at the very least, publish the data you’ll use to make it – it allows leaders to say they disagree with the decision but will still commit to making it as successful as possible.
If everyone comes in with the mentality of “I’m going to argue my case”, but at the end of the day the group decides to go with another option, you can say you disagree but will still commit to it – because, really, that’s what the group is committing to.
In addition to being in product marketing at Klue, Jason is a person of many talents, and has previously led a successful Kickstarter campaign for a children’s book with an accompanying stuffed animal.