Getting started with Conversion Rate Optimization

Before you embark on a conversion rate optimization program, it’s important to assess your current resources, infrastructure, and processes to ensure that you can successfully execute, report, and iterate. This guide will help you get started. 

Part I: Audit

Ask yourself the following questions to evaluate your readiness to proceed with your CRO program and fill any gaps.

Goals 

  • What’s the primary goal of your site or app? 
  • What’s the current conversion rate of your site or app (i.e., purchases divided by visits)? See the quick calculation below.
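
If it helps to make that concrete, conversion rate is simply purchases divided by visits over the same period. A minimal sketch in Python, using made-up numbers:

    # Illustrative numbers only: 40,000 monthly visits, 800 purchases
    visits = 40_000
    purchases = 800
    conversion_rate = purchases / visits
    print(f"Conversion rate: {conversion_rate:.1%}")  # Conversion rate: 2.0%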

Traffic sources and characteristics 

  • What drives traffic to your site or app? 
  • What’s the volume of daily, weekly, or monthly traffic to your site or app?
  • Do you have the option of a logged-in state on your site or app? 

eCommerce capabilities 

  • Is the purchase mechanism outsourced or built in-house? 
  • Do you have the ability to make changes to the checkout process? 

Content sources

  • How often do you create content? 
  • What types of content does the site support (e.g., featured content, blog posts, videos, tips, tutorials, data snippets or examples, case studies, interviews, audio files, etc.)?

Measurement and reporting tools 

  • What technology do you use to capture analytics? 
  • Does your analytics platform capture transactional data (e.g., revenue and costs)?
  • How do you implement additional marketing tags? 
  • Do you use a tag management system? 

Regional considerations 

  • Do you have sites or apps for multiple countries? 

Part II: Best Practices

Align strategies with business goals
It’s tempting to dive in and start testing every idea or hypothesis that seems promising. You’re enthusiastic, and you want to show ROI as quickly as possible. Still, it’s important to stay focused and ensure that your CRO tests align with your overall business goals and objectives.

To do that successfully, start with a strategy. Ask yourself: What are your company’s goals? If you have revenue goals to meet, can you tap into business strategies to help you achieve them?

Also, can you experiment with messaging to support your business strategies? Creative expression? Layout? Functionality? Answering these questions can help you focus. 

Develop a framework and a process
If your company is just starting out with testing, it’s going to take a little time to shift the mindset and change the culture. Your ability to quickly implement a program will depend on the organization’s openness to change and dedication to taking the business to the next level. To develop a framework and a process, you’ll need:

  • Operating model alignment: It’s critical that you get buy-in across your organization. This might involve asking other teams to support your efforts or asking permission to take on tasks, such as designing pages, that those teams currently own.
  • Process design: How you work will depend on your organization. See suggestions in the Process section of this guide. 
  • Change impact and readiness: It’s important to understand how testing will affect other teams.
  • Organizational design: You need to decide whether to keep testing in one team or share responsibilities with other teams.
  • End-user training: Determine whether your team has the necessary skills to proceed with a testing program.

If you’re doing all of the testing yourself, you don’t have to worry as much about process. However, if you’re working across divisions or departments, it’s best to establish and agree upon roles, responsibilities, and processes up front. Roles include: 

  • Strategist or optimization manager: The primary driver, this team member conceives test ideas, runs them through a rigorous approval process, and takes overall ownership. 
  • Campaign enablement specialist: This team member sets up the tests in a testing tool like Optimizely or Adobe Target. He or she may also be the strategist or optimization manager. 
  • Data analytics manager: The analyst ensures that the test plan is statistically sound and will yield a measurable outcome. He or she also analyzes the test results and makes recommendations.
  • Web producer: This HTML specialist can help if a test is too complicated to edit within a testing tool. 
  • Project manager: This team member drives the process for test development, review, and execution to keep things moving forward. This role often falls on the strategist. 
  • Developers or engineers: These specialists may be needed for functionality tests or other complicated scenarios.
  • Quality Assurance lead: The QA lead will ensure that all test experiences (also called “challengers” or “recipes”) load, display, and behave as expected prior to execution.
  • Creative experts: These are the designers, writers, and editors who can provide the content for your challengers.  
  • Customer experience experts: These team members can offer perspective on how tests might impact customer experience.

Generate test ideas and execute on them

How do you know what to test first? In addition to aligning your tests with business goals, look at your data on customer and prospect behavior and ask yourself “Why?” Identify which pages on your site yield the highest conversion rates, have the highest volume of traffic, or represent the biggest areas of opportunity. Understand which demand generation activities are in market and where they’re driving traffic. You might consider experimenting with targeted experiences for visitors arriving from those channels.

When determining which types of CRO tests to run, it’s important to stay focused. If you test too many different elements or mix test types, you won’t be able to answer the question “Why?” Keep tests focused on the following areas:

  • Content: Experiment with different messaging, headlines, or types of copy. 
  • Appearance: Try a new approach to creative expression (e.g., image, color, and font). 
  • Layout: Make a change to the position of elements on a page. 
  • Functionality: Look at elements like dropdowns and checkboxes. It’s pretty tricky to test functionality, but sometimes it’s necessary. 
  • Existence: Consider whether you need every element of content on the page. We often end up with lots of content because we want to say so much, but sometimes less is more. 

Resist the temptation to mix and match in your tests. While it’s technically feasible to change multiple variables in a challenger, it’s unlikely that you’ll glean insights because you won’t know which variable caused your test to perform better or worse. 

As a best practice, start with an existence test to find out which elements make a difference. Once you determine that you have the right elements, you can try moving them around with a layout test or giving them different creative treatments with an appearance test. 

Finally, do the easiest test and experiment with content. For example, try different messaging or specific words to define a product feature, or experiment with catchy headlines.

Functionality is by far the toughest thing to test, so make sure you have a good reason before testing it. Do you have access to usability studies or customer satisfaction reports for your site? These can tell you whether the user experience is amiss and you need to intervene.

Part III: Implementation  

Infrastructure

You’ll need the following tools and templates (available at www.type26.com/cro-resources) to launch your testing program: 

  • Test charter template: Use PowerPoint or some other medium to capture your test charter, details, and test results.
  • Test tracker: Use Excel, Google Sheets, Trello, or another type of tracking system to record test types, stages, and results. 
  • Repository: Use Dropbox, Box, Google Drive, or another service to store your test charters. 
  • Communication vehicle: Use a blog, wiki, or other means to report on tests and retrieve information from past tests.  
  • Testing tool: Use Optimizely, Apptimize (for mobile apps), or Adobe Target to set up and measure your tests.
  • Analytics tool: Use a web analytics setup like Google Analytics or Adobe Analytics.  
  • List manager (internal, optional): Use a list manager if you set up a test program newsletter to communicate to a large organization. 

Process 

Different roles are referenced below, but it’s common for some individuals to perform multiple roles. 
1. Develop, review, and approve the test charter
Driver: Strategist
Contributors: Analyst, designer, editor/writer, engineer/enablement specialist, business stakeholder
A CRO test charter includes the test objectives, summary, variations (these can eventually be mockups from design), expected conversions, dates, and (ultimately) results. 
Meet as a team to review the test in detail, or provide a high-level summary for the team and then follow up with individuals later. (For example, meet with the designer who will create the test web page comp.) The engineer or enablement specialist should ensure that the test is buildable, and an analyst should verify that there are no flaws in the test plan. 
2. Record the test charter
Driver: Strategist
Add the test charter to the test tracker, which records:
  • Test objectives and justification
  • A test charter number or unique ID for tracking purposes
  • Test type (i.e., content, layout, functionality, existence, or appearance)
  • Level of effort (i.e., low, medium, or high)
  • Potential revenue lift (i.e., low, medium, or high)
  • Stage (i.e., in review, in development, live, in analysis, or completed)
The level of effort should be determined by the engineer, and the potential revenue lift should be determined by the analyst. As a test moves through each phase, different contributors should update the status.
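If your tracker outgrows a spreadsheet, the same fields translate naturally into a small data structure. Below is a minimal Python sketch; the TestCharter and Stage names and all field values are illustrative, not a prescribed schema:

    from dataclasses import dataclass
    from enum import Enum

    class Stage(Enum):
        IN_REVIEW = "in review"
        IN_DEVELOPMENT = "in development"
        LIVE = "live"
        IN_ANALYSIS = "in analysis"
        COMPLETED = "completed"

    @dataclass
    class TestCharter:
        charter_id: str      # unique ID for tracking purposes
        objective: str       # what the test should prove
        justification: str   # why the test is worth running
        test_type: str       # content, layout, functionality, existence, or appearance
        effort: str          # low, medium, or high (set by the engineer)
        potential_lift: str  # low, medium, or high (set by the analyst)
        stage: Stage = Stage.IN_REVIEW  # update as the test moves through each phase

    # Illustrative record
    charter = TestCharter(
        charter_id="CRO-042",
        objective="A shorter headline increases add-to-cart rate",
        justification="Analytics show high bounce on the current hero",
        test_type="content",
        effort="low",
        potential_lift="medium",
    )
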
3. Develop assets
Driver: Project manager (or strategist)
Contributors: Designer, editor/writer
Prior to handing off the test for development, the project manager should ensure that all assets are ready. He or she will facilitate the creation of design elements or editorial content and get them approved for development.  
4. Develop the test
Driver: Engineer or enablement specialist
The engineer or enablement specialist should receive all assets and the final test charter for guidance in developing the test. 
5. Perform quality assurance
Driver: QA lead or strategist
The engineer should hand off to the QA lead, who ensures that all experiences display correctly. You can also opt to perform QA yourself by checking the display on different browsers, operating systems, and devices. You’ll need to clear your cookies and cache so the testing tool treats you as a new visitor each time; see the sketch below.
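If you’d rather script those checks, the sketch below uses Selenium purely as an example (any browser automation tool works). Each iteration starts a fresh browser session with a clean profile, so the testing tool should treat you as a new visitor; the URL is a placeholder:

    # Assumes: pip install selenium (Selenium 4+, which manages the browser
    # driver for you). The page URL below is hypothetical.
    from selenium import webdriver

    PAGE_URL = "https://www.example.com/landing-page"

    for i in range(10):
        driver = webdriver.Chrome()  # fresh profile, no cookies: a "new" visitor
        driver.get(PAGE_URL)
        driver.save_screenshot(f"challenger_{i}.png")  # review which experience was served
        driver.quit()
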
6. Get approval to execute the test
Driver: Strategist
If all challengers are displaying correctly, you’re almost ready to give the engineer approval to execute the test. However, there are a couple of things to keep in mind:
  • Don’t execute tests on Fridays. If there are issues with the test, it won’t be easy to find people to help you over the weekend.
  • The strategist should communicate to the larger organization that the test is launching. Otherwise, someone will undoubtedly notice the change to the usual experience and go into panic mode. See the Communication section below for best practices.
7. Execute the test
Driver: Engineer
Tests should run for at least two weeks (three is ideal) to ensure that no anomalies are introduced by one particular day or week. There should also be enough traffic to generate the desired number of conversions per challenger. On high-volume sites, 250 conversions per challenger is a common benchmark for reaching statistical significance; on lower-volume sites, 100 may be more realistic. If your site doesn’t get enough traffic to generate 100 conversions in three weeks, consider extending the test to four or five weeks. A rough way to estimate duration is sketched below.
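Assuming you know your traffic and baseline conversion rate, a back-of-the-envelope estimate looks like this (all numbers are illustrative):

    import math

    weekly_visits = 5_000      # visitors entering the test each week
    baseline_rate = 0.02       # current conversion rate
    num_challengers = 3        # control plus two variants
    target_conversions = 100   # desired conversions per challenger

    weekly_per_challenger = weekly_visits * baseline_rate / num_challengers
    weeks_needed = math.ceil(target_conversions / weekly_per_challenger)
    print(f"Plan for at least {weeks_needed} weeks")  # 3 weeks with these numbers
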
In your testing platform there’s a notion of “confidence level,” which indicates how likely it is that the observed difference reflects a real effect rather than chance. You can confidently move forward with results at a confidence level of 97% or higher. A confidence level of 90-97% is directional at best, and you probably shouldn’t act on results below 90%.
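Your platform computes this for you, but if you want a back-of-the-envelope check, a two-proportion z-test approximates the confidence figure for a control versus one challenger. A sketch using only the Python standard library (the numbers are made up; trust your platform’s statistics for the real decision):

    from math import sqrt
    from statistics import NormalDist

    visits_a, conversions_a = 10_000, 200  # control
    visits_b, conversions_b = 10_000, 240  # challenger

    p_a = conversions_a / visits_a
    p_b = conversions_b / visits_b
    p_pool = (conversions_a + conversions_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    confidence = 1 - 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

    lift = (p_b - p_a) / p_a
    print(f"Lift: {lift:.1%}, confidence: {confidence:.1%}")
    # Lift: 20.0%, confidence: 94.6% -- directional by the rule of thumb above
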
8. Monitor the test
Driver: Engineer and strategist
The engineer needs to ensure nothing goes wrong with the test, and the strategist should also keep an eye out for any issues that might require stopping the test. If all is well, the test can conclude at the scheduled time. 
9. End the test
Driver: Engineer and strategist
The engineer should confirm with the strategist that enough conversions have occurred or that there aren’t additional business reasons to continue to run the test. 
10. Conduct reporting/analysis
Driver: Analyst or strategist
Depending on the structure of the team, either the analyst or the strategist can conduct the final reporting of insights. The test driver should update the test tracker to communicate that the test has ended and analysis is underway. Once the analysis is complete, the driver should also record the lift, projected revenue, and any other KPIs in the tracker.
11. Communicate about the test
Driver: Strategist
It’s the strategist’s responsibility to aggregate the test charter, test results, any insights, and next steps prior to communicating more broadly. It’s helpful to append test results to the test charter so everything’s in one place. 
To make it easy to access test data in the future, you should keep all your charters in one place and use a wiki, blog, or database to summarize results and tag tests in ways that are meaningful to your organization.
While tests conducted a year or even six months ago may not be applicable today, it can still be valuable to reference the outcomes when planning future tests. 
12. Refine and repeat
Feed your learnings into your next series of tests. Your testing pipeline will be endless. As you increase the velocity of your program, you can refine your process to better fit your organization. 

Communication

Since CRO tests are ongoing, it’s important to establish how you’ll share information about them within your organization. Consider the size and culture of your organization as you review these suggestions:

  • Weekly newsletter: Cover current tests, upcoming tests, and tests that have recently concluded and are being analyzed. Link to the test charters so readers can find answers to any questions.
  • Test launch communication (as needed): It’s critical that stakeholders are aware of a test when it launches. You don’t want someone to see a test, assume that something’s wrong, and create a fire drill to fix it.
  • Monthly test results meeting: By inviting everyone to attend, you can tout the work your team has done and give people the opportunity to ask questions and even propose test ideas.
 
Summary
Embarking on a comprehensive CRO testing program can be daunting. We hope you found this guide helpful, and we’d appreciate any feedback or best practices you have to share. Get in touch at www.type26.com/feedback. We’d love to hear from you!