Credits to Richard Kuwahara
CMO at Paubox, Inc.
_____
Segment & define your SaaS platform's value proposition
It’s been said by a lot of people smarter than me that “Growth is a team sport.”
That means everyone needs to be involved in order for your company to “win.”
At Paubox, we failed at getting everyone involved early on, and it’s still tough.
Even though there are only two of us in the marketing department, growth experiments and tests are run by everyone on the team, including Customer Success.
But as a startup that's growing fast, there are a lot of day-to-day tasks that divert attention from growth, especially when it's not part of someone's "core" job duties.
Figuring out how to log new ideas, prioritize what to test, and communicate learnings took a little while (and can still use improvement). But we've got a system in place now.
Here’s what we put together.
Putting together a framework
The first step was to establish a framework for how we wanted to track everything.
I wrote a little more about how high-tempo testing works, and it was the commitment to getting out at least one new test a week that drove our need to organize everything.
While I had a log going before just for me, it wasn't until we went through 500 Startups that we got on the same page with experimentation and tracking. I combined their experiment log with the one I had going and came up with the below.
We chose to create a Google Sheet to track our experiments. We set up three tabs:
- Backlog – to put down ideas of experiments to run
- Status – to track the experiments that were completed and are in progress
- Definitions – to make sure everyone understood how to use the damn thing
Let’s break down the tabs.
The Backlog tab
Our Backlog tab has 11 columns, each with a filter so we can quickly sort through the mess if needed. You may not need all of these, but here are the columns we use.
- Name – no-brainer what this is for, right? We used to be stricter on naming conventions to track the channel, but…
- Channel – we added this column to list the distribution channel instead to make it easier to sort.
- Description – a brief description of what the experiment is for.
- Category – since we went through 500 Startups, we're partial to Pirate Metrics, and each experiment falls under Acquisition, Activation, Retention, Referrals, or Revenue.
- Pain Point – this was suggested to us during Marketing Hell Week and we kept it. It makes sure each experiment has a pain point you're trying to solve for your company, so it's not just busy work. But you could probably get away with eliminating this.
- ICE – the next four columns are dedicated to coming up with an "ICE" score as a way to help prioritize things. ICE stands for Impact, Confidence, and Ease. Developed by Sean Ellis, this simple 1-10 rating makes it easy to see how some things surface to the top as no-brainers (there's a quick scoring sketch after this list). You can hear Sean talk more about it here.
- Owner – whoever is going to run the experiment
- Hypothesis – an “if/then” sentence to verbalize what it is you’re testing. If you can’t articulate this, then don’t move forward because you won’t be able to design your experiment effectively. We use this template: If successful, this [variable] will increase by [amount] because [assumptions]
Once an experiment is chosen to be tested, we highlight that row in green to make sure it's not accidentally duplicated.
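If it helps to see the prioritization in code, here's a minimal sketch of scoring and sorting a backlog. The field names, the sample ideas, and the choice to average the three 1-10 ratings into one ICE score are my own assumptions for illustration; the real scoring lives in our Google Sheet.

```python
from dataclasses import dataclass

@dataclass
class BacklogIdea:
    name: str
    channel: str
    impact: int       # 1-10: how big a lift we expect if it works
    confidence: int   # 1-10: how sure we are it will work
    ease: int         # 1-10: how easy it is to run

    @property
    def ice(self) -> float:
        # One common way to combine the three ratings: a simple average.
        return (self.impact + self.confidence + self.ease) / 3

# Hypothetical backlog entries, purely to show the sorting.
backlog = [
    BacklogIdea("Onboarding email drip", "Email", impact=7, confidence=6, ease=8),
    BacklogIdea("Retargeting ads", "AdWords", impact=8, confidence=5, ease=4),
    BacklogIdea("Homepage headline test", "Website", impact=6, confidence=7, ease=9),
]

# Highest ICE score first: these surface as the "no-brainers" to run next.
for idea in sorted(backlog, key=lambda i: i.ice, reverse=True):
    print(f"{idea.ice:4.1f}  {idea.name} ({idea.channel})")
```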
The Status tab
When we pick an experiment to test, the info gets transferred to our Status tab, which is where we can get a quick snapshot of how an experiment did and any general learnings.
Here's the info we track (there's a quick sketch of these fields as a record after the list):
- EID – this stands for Experiment ID, and is just a quick way to catalogue overall experiments for quick reference later.
- Experiment Name – self explanatory.
- Channel – just like the Backlog tab, the distribution channel the experiment will be focused on.
- Exp Owner – experiment owner.
- Status – one of six statuses (may be overkill): Idea, Defining, Defined, Testing, Analyzing, or Complete.
- Category – Pirate metrics again: AARRR!
- Start Date – the date you’ll start the experiment.
- End Date – the date you plan on ending the experiment.
- Metric – the main metric you will use to gauge if an experiment is a success.
- Prediction – what you think the main metric will be at the conclusion of the experiment.
- Actual – the actual results at the end of the experiment.
- Mktg Estimate – how much time you think it will take the marketing team to execute the experiment.
- Eng Estimate – if engineering needs to be pulled in, how much time it will take them to make changes (like changing onboarding messages in an app).
- Budget – the expected cost to run the experiment, like AdWords spend.
- Result – was the experiment a Success, Failure, or Inconclusive?
- Notes – put in any learnings, mitigating circumstances that explain the outcomes, etc.
Once a test is done, we color-code the row so we can see at a glance whether it was successful or not.
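If you'd rather picture a Status row as data, here's a rough sketch of the same fields as a record. The names and types are shorthand I made up for the columns above, not an export format we actually use.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ExperimentStatus:
    eid: int                    # Experiment ID, for quick reference later
    name: str
    channel: str
    owner: str
    status: str                 # Idea, Defining, Defined, Testing, Analyzing, Complete
    category: str               # Pirate Metrics: Acquisition, Activation, Retention, Referrals, Revenue
    start_date: date
    end_date: date
    metric: str                 # main metric used to judge success
    prediction: str             # what we think the metric will be at the end
    mktg_estimate_hours: float  # marketing time estimate
    eng_estimate_hours: float   # engineering time estimate, if they're pulled in
    budget: float               # expected cost, e.g. ad spend
    actual: Optional[str] = None    # filled in at the end of the experiment
    result: Optional[str] = None    # "Success", "Failure", or "Inconclusive"
    notes: str = ""
```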
The Status tab is a great way to get an overview of what’s worked in the past and what’s currently being tested. But where it fails is in giving any detail on the test itself.
If there’s any marketing collateral, screenshots, data insights, etc., there’s just not enough room on a Google Sheet.
So we create individual Experiment Docs for the more intense tests.
What’s an Experiment Doc?
We use an Experiment Doc as a way to document the details of how a test is set up and any collateral that is created for it.
You can track this in a Google Doc or Word, but we use Confluence to manage things.
The document is pretty simple, even though it can take a while to complete because it requires a lot more detail than the spreadsheets we just covered.
Here are the sections we use (there's a small sketch for generating the skeleton after this list):
- Naming – typically you want to have a naming convention to help track everything. We use the EID number followed by the name put in our Google Sheet.
- Date – the dates your experiment will run.
- Hypothesis – same as what you put in the backlog. In addition, we put a grid to track our main success metric.
- Experiment Design – maybe the most important part of the doc, this section should capture each step used to set up the experiment. Another member of the team should be able to replicate your experiment by following these steps.
- Results – aside from your main success metric, put any other important metrics in this section, like CTR and CPC for an AdWords test.
- Learnings – put down what you learned after the test was done. What worked, what didn’t and why. Include as much insight as you can.
- Action Items – list what needs to be done to iterate on your experiment.
- Collateral – put in any screenshots and links to show what was done, like a screenshot of a Facebook ad or the A/B versions of a landing page you're testing.
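If you want to stamp these docs out consistently, a tiny helper like the hypothetical one below can generate the skeleton. The section list mirrors the outline above and the title follows the EID-plus-name convention from the Google Sheet; everything else is just an illustration.

```python
# Section headings mirror the Experiment Doc outline above.
SECTIONS = [
    "Date", "Hypothesis", "Experiment Design",
    "Results", "Learnings", "Action Items", "Collateral",
]

def experiment_doc_skeleton(eid: int, name: str) -> str:
    """Build a blank Experiment Doc outline using the EID + name convention."""
    title = f"{eid:03d} - {name}"
    body = "\n\n".join(f"## {section}\n_TODO_" for section in SECTIONS)
    return f"# {title}\n\n{body}\n"

print(experiment_doc_skeleton(42, "Onboarding email drip"))
```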
Enter Trello cards
These documents help with tracking experiments, but when you’re managing a team it helps to have a quick view of what everyone is working on.
Enter Trello for the win.
We have a Marketing Team board that helps us see what items we've got queued up and what experiments everyone is running at any given time.
First there's a List for items in the backlog, then each person has a List showing what they're working on.
When they're done, they drag the card over to a final List of what was completed in the last 30 days.
As a bonus, any little side projects or non-growth items get tracked here too. It's great to see how much is on your team's plate so you can better prioritize and manage workloads.
Very simple and works for us.
Putting it all together
The last step is simply to put everything together.
We have weekly marketing team meetings on Friday where we run through the Trello board and any new experiments we want to run. Basically, a rundown of what happened this past week and what's up for next week.
On Sundays I put together a quick bullet-point list of things we're working on for the upcoming week and email it out to our whole team.
On Mondays we have a weekly all-hands meeting where I pull out highlights from the emailed list and go in depth on important topics.
If anything comes up midweek, I send out an update with a link to the Experiment Doc so everyone can see the details and stay up to speed.
We’re still improving this process, but it’s been working for us lately and helped us keep a good cadence of experiments.