A step-by-step guide to conversion rate optimization (CRO)

Before you embark on a conversion rate optimization program, assess your current resources, infrastructure, and processes; only then can you execute, report, and iterate. This guide will help you get started.

This guide is organized in three parts: an audit of your current state, best practices, and implementation.

Part I: Audit


Goals 

  • What is the primary goal of your site? 
  • What is your site’s or app’s current conversion rate (purchases ÷ visits)? A quick sketch of the math follows below.
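For illustration, here’s a minimal sketch of that calculation in Python; the traffic and purchase numbers are hypothetical.

```python
# Conversion rate is purchases divided by visits; numbers are made up.
visits, purchases = 40_000, 1_200
conversion_rate = purchases / visits
print(f"{conversion_rate:.2%}")  # 3.00%
```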

Traffic sources and characteristics 

  • What drives traffic to your site or app? 
  • What is the volume of daily, weekly, or monthly traffic to your site?
  • Do you have the option of a logged-in state on your site?

eCommerce capabilities 

  • Is the mechanism to purchase outsourced or built in-house? 
  • Do you have the ability to make changes to the checkout process? 

Content sources

  • How often is content created? 
  • What type of content does the site support (featured content, blog, videos, tips, tutorials, data snippets or examples, case studies, interviews, audio files, etc.)?

Measurement and reporting tools 

  • How is the site set up to capture analytics? 
  • What technology is utilized for capturing analytics? 
  • Does your analytics platform capture transactional data (costs)?
  • How are additional marketing tags implemented? 
  • Is a tag management system in use?

Regional considerations 

  • Do you have multiple country sites? 

Part II: Best practices


Align strategies with business goals

It’s tempting to run off and test things that seem really cool, but stay focused and ensure you’re aligning with overall business goals, objectives, and strategies. 
It may seem obvious, but there’s often an impulse to just start testing everything everywhere. You’re enthusiastic and want to show ROI immediately, but you need a strategy. What are your company’s goals? Most often there are revenue goals you’re looking to meet, and there are business strategies you need to tap into to help get you there. 

Is there messaging you can experiment with to support the business strategies? Creative expression? Layout? Functionality? Answering these questions can help guide you. From here you can go from strategy to execution.

Develop a framework and process

There’s no question that if your company isn’t testing yet, it will take a little time to shift the mindset; it’s a culture change. How open teams are to change will determine how quickly you can implement a program. You’ll need:

  • Operating model alignment: You’ll have to get buy-in across your organization. It might mean asking other teams to support your efforts or asking permission to take on tasks like designing pages.
  • Process design: How you work together will depend on your organization; suggestions follow in Part III. 
  • Change impact and readiness: How does adding testing affect other teams?
  • Organizational design: Do you keep it all in one group or do you share responsibilities with other teams?
  • End user training: Does the team have the skills they need? If you’re doing it all yourself, you don’t have to worry about process as much. However, if you’re working across divisions or departments, it’s best to have roles, responsibilities, and processes established and agreed upon. The roles are: 
  • Strategist or Optimization manager: The primary driver for conceiving test ideas, running the ideas through a rigorous approval process and taking overall ownership. 
  • Campaign enablement: Sets up the tests in a testing tool like Optimizely or Adobe Target. May also be the Strategist or Optimization manager. 
  • Analytics: Ensures the test plan will yield a plausible, measurable outcome. Analyzes the test after it concludes and, if appropriate, makes recommendations.
  • Web production: Steps in when a test is too complicated to build within a testing tool and HTML skills are required. 
  • Project management: Ensures the process for test development, review, and execution is moving along and is accounted for. Often falls on the Strategist. 
  • Development/Engineering: If a test is more complicated, engineering resources may need to step in (e.g. functionality test).
  • Creative: Design or writing expertise may be needed to aid in executing tests.  
  • Customer experience expertise: Perspective on how a test might impact customer experience. 


Generate test ideas and execute on them

How do you know what to test first? In addition to starting with business goals alignment, look at what your customers or prospects are telling you through data and ask yourself why. Analyze which pages on your site yield the highest conversion, have the highest volume of traffic, or demonstrate the biggest opportunity. Understand what demand generation activities are in market and where that traffic is being driven. You might consider experimenting with targeted experiences for visitors arriving from those channels. 

Staying focused on one type of test at a time is important. If you test too many different elements or mix test types, you won’t be able to answer the question “why?” Keep tests focused on the following areas:

  1. Content: experimenting with different messaging, headlines, or types of copy. 
  2. Appearance: the creative expression (image, color, or font). 
  3. Layout: where elements fall on a page. 
  4. Functionality: whether you use a dropdown or a checkbox, for example. Functionality is tricky to test, but sometimes it’s necessary. 
  5. Existence: an interesting type; this test questions whether you need every element of content on the page at all. We often get bogged down with lots of content because we want to say so much, but sometimes less is more. 

Resist the temptation to mix and match. While it’s technically feasible to do so, I guarantee you won’t glean any insights because you won’t know why something performed better (if it does at all). 

Ideally, you’d want to find out first which elements make a difference (existence test). If you’ve determined you have the right elements, you can try moving the elements around (layout test) or treat them differently creatively (appearance test). Lastly, and certainly easiest, experiment with content. You can try different messaging, specific words to define a feature of your product, or maybe experiment with catchy headlines.
Functionality is by far the toughest thing to test, so you must have a pretty good reason to want to. Do you have access to usability studies or customer satisfaction reports for your site? These are good indicators of whether the user experience is amiss and you need to intervene.

Part III: Implementation


Infrastructure

You’ll need the following tools and templates: 

  • Test charter template: PowerPoint or some other means to capture your test charter, details, and ultimately the results of the test. 
  • Test tracker: Excel, Google Sheets, Trello, or another type of tracking system to record tests, the type of each test, the stage each test is in, and results. 
  • Repository: Dropbox, Box, Google Drive, or other repository to house all the test charters. 
  • Communication vehicle: Blog, wiki, or other means to share results and retrieve past tests.  
  • Testing tool: Optimizely, Apptimize (mobile apps), or Adobe Target to set up and measure your tests.
  • Analytics tool: Since you’re looking into testing, it’s assumed you already have web analytics set up, such as Google Analytics or Adobe Analytics.  
  • List manager (internal, optional): If you send out a newsletter, a list manager makes it much easier to run in a large organization. 

Process

For the sake of clarity, different roles will be referenced, but it’s perfectly fine for individuals to straddle some or many roles. 

1. Test charter is developed, reviewed, and approved
Driver: Strategist
Contributors: Analyst, Designer, Editor/writer, Engineering/enablement, Business stakeholder

The test charter includes the objectives, a summary of the test, the different variations (eventually with mockups from design), expected conversions, test dates, and ultimately the results. 

You can meet as a team to review the test in detail, or provide a high-level summary to the team and then follow up with individual groups (e.g., a designer to design a page) later. Engineering or “enablement” should ensure the test is buildable, and an analyst should provide checks and balances, ensuring there are no flaws in the test plan. 
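As a rough illustration, here’s a minimal sketch of a charter captured as a structured record; the field names are hypothetical but mirror the contents described above.

```python
# A hypothetical test charter record; adapt the fields to your template.
from dataclasses import dataclass

@dataclass
class TestCharter:
    charter_id: str            # unique ID used to track the test
    objective: str             # what you hope to learn or improve
    summary: str               # short description of the test
    variations: list[str]      # control plus each test recipe
    expected_conversions: int  # conversions expected per recipe
    start_date: str            # planned launch date
    results: str = ""          # filled in after analysis

charter = TestCharter(
    charter_id="CRO-042",
    objective="Lift checkout completion",
    summary="Test a shorter headline on the cart page",
    variations=["control", "short-headline"],
    expected_conversions=250,
    start_date="TBD",
)
```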

[ Image: Test charter executive summary ]

2. Record the test charter

Driver: Strategist

Add the test charter to the test tracker, which records the objectives, the justification for the test, a test charter number (or other unique ID to keep track of each test), the test type (content, layout, functionality, existence, or appearance), level of effort (low, medium, high), potential revenue lift (low, medium, high), and the stage the test is in (in review, in development, live, in analysis, or completed). 

Level of effort is determined by Engineering. Potential revenue lift is determined by Analyst. As a test moves through each phase, different contributors change the status.
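Here’s a minimal sketch of one way to score and prioritize tracker entries; the weights are an assumption, not a standard formula.

```python
# Hypothetical scoring: favor high potential lift and low effort.
SCORE = {"low": 1, "medium": 2, "high": 3}

def priority(lift: str, effort: str) -> int:
    """Higher is better."""
    return SCORE[lift] - SCORE[effort]

tracker = [
    {"id": "CRO-042", "type": "content", "lift": "high", "effort": "low", "stage": "in review"},
    {"id": "CRO-043", "type": "existence", "lift": "medium", "effort": "high", "stage": "in review"},
]
tracker.sort(key=lambda t: priority(t["lift"], t["effort"]), reverse=True)
print([t["id"] for t in tracker])  # ['CRO-042', 'CRO-043']
```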

[ Image: Test tracker scoring ]

3. Develop assets
Driver: Project Manager (or Strategist)
Contributors: Designer, Editor/Writer

Prior to handing off for development, all assets need to be ready. The project manager will facilitate the creation of design elements or editorial content. Upon completion and approval, the test is ready for development. 


4. Development
Driver: Engineering

Engineering/enablement will receive all assets and final test charter to use for guidance in developing the test. 

5. QA
Driver: QA (or Strategist) 

Once Engineering hands off to QA, QA ensures all experiences are displaying correctly. You can also do QA yourself by checking different browsers, operating systems, and platforms. You’ll need to clear your cookies and cache so the testing tool treats you as a new visitor.  
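If you want to automate part of this, here’s a minimal sketch using Selenium (one option among many); the URL and the .hero-v2 selector are hypothetical. A fresh browser session starts with no cookies or cache, so the testing tool sees a new visitor on each run.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Check that the variant renders in more than one browser; each new
# driver starts clean, so it looks like a first-time visitor.
for name, make_driver in (("Chrome", webdriver.Chrome), ("Firefox", webdriver.Firefox)):
    driver = make_driver()
    try:
        driver.get("https://www.example.com/landing")  # hypothetical URL
        variant = driver.find_elements(By.CSS_SELECTOR, ".hero-v2")
        print(name, "variant shown:", bool(variant))
    finally:
        driver.quit()
```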

6. Approval to execute test
Driver: Strategist

If all experiences (recipes) are displaying correctly, Engineering has the OK to execute the test. One caveat: don’t launch a test on a Friday. If there are issues, you typically can’t find people to help you over the weekend. The strategist also needs to communicate that the test is launching. If you fail to do this, I guarantee someone will see a strange experience and go into a tailspin. It’s best to communicate often. 

You can either use a one-off email template or, if you’re running lots of tests concurrently, send out a weekly newsletter that calls out tests launching, tests running, tests recently concluded, and archived tests. All of these should point to a repository for the details. 
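As an illustration, here’s a minimal sketch that assembles those newsletter sections from tracker stages; the entries and stage names are hypothetical.

```python
# Group tracker entries into the newsletter sections described above.
SECTIONS = {
    "in development": "Tests launching",
    "live": "Tests running",
    "in analysis": "Tests recently concluded",
    "completed": "Archived tests",
}

tests = [
    {"id": "CRO-042", "name": "Cart headline", "stage": "live"},
    {"id": "CRO-041", "name": "Hero existence", "stage": "in analysis"},
]

for stage, heading in SECTIONS.items():
    items = [t for t in tests if t["stage"] == stage]
    if items:
        print(heading)
        for t in items:
            print(f"  - {t['id']}: {t['name']}")
```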

7. Execute the test
Driver: Engineering

Tests should run at least 2 weeks (3 is better) to ensure results aren’t skewed by one anomalous day or week. There should be enough traffic to generate enough conversions per experience (or recipe). On high-volume sites, 250 conversions per recipe is a common threshold for reaching statistical significance; for others, 100 may be more reasonable. If your site doesn’t get enough traffic to generate 100 conversions in 3 weeks, consider increasing the test duration to 4 or 5 weeks. 

In your testing platform, there’s a notion of “confidence level”: roughly, the likelihood that if you repeated the test you would see the same outcome. Test results with greater than 97% confidence are results you can move forward with. 90-97% is directional at best, and below 90% you probably don’t want to consider the result at all. 
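For intuition, here’s a minimal sketch of one common way such a confidence number is computed, a two-proportion z-test (your tool’s exact method may differ); the traffic and conversion numbers are made up.

```python
from math import erf, sqrt

def confidence(conv_a, visits_a, conv_b, visits_b):
    """One-sided confidence that recipe B truly beats recipe A."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

# Control: 250 conversions / 10,000 visits (2.5%)
# Variant: 300 conversions / 10,000 visits (3.0%)
print(f"{confidence(250, 10_000, 300, 10_000):.1%}")  # ~98.5%
```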

8. Monitor tests
Driver: Engineering and Strategist

Engineering needs to ensure nothing is going wrong along the way. The Strategist should keep an eye out for anything seriously wrong that might warrant stopping the test. If all is well, the test can conclude at the scheduled time. 

9. Ending test
Driver: Engineering and Strategist

Engineering will confirm with the Strategist that enough conversions have been reached or that there aren’t other business reasons to continue running the test. 3 weeks is typically enough time, but sometimes 5 or 6 weeks is required. 

10. Reporting/Analysis
Driver: Analyst or Strategist

Depending on the structure of the team, either an Analyst or the Strategist will conduct the final reporting and insights. Having an Analyst rather than the Strategist do this is preferred, but sometimes there isn’t the bandwidth.

The driver updates the Test Tracker to communicate that the test has ended and is in analysis. Upon completion, include the lift and projected revenue, or whatever other KPIs your organization has set up. 
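Here’s a minimal sketch of the lift and projected-revenue arithmetic; the conversion rates, traffic, and order value are hypothetical.

```python
# Relative lift and a simple revenue projection; numbers are made up.
cr_control, cr_variant = 0.025, 0.030
lift = (cr_variant - cr_control) / cr_control        # 20% relative lift
monthly_visits, avg_order_value = 400_000, 60.00
extra_orders = monthly_visits * (cr_variant - cr_control)
projected_revenue = extra_orders * avg_order_value
print(f"Lift: {lift:.0%}; projected monthly revenue: ${projected_revenue:,.0f}")
# Lift: 20%; projected monthly revenue: $120,000
```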


11. Communication
Driver: Strategist

It’s the responsibility of the Strategist to aggregate the test charter, test results, any insights, and next steps prior to communicating more broadly. It’s helpful to append test results to your test charter so you can have everything in one place. 

For future access or retrieval of your tests, house all your test charters in one place and use a wiki, blog, or database (if you have the means) to summarize test results and tag them in ways that are meaningful to your organization.

While tests conducted a year or even 6 months ago may not be applicable today, it’s worth referencing the outcomes of past tests. 

12. Refine and repeat
Take what you’ve learned and feed those learnings into your next series of tests. Your testing pipeline will be endless. As you increase velocity on your testing program, you’ll refine the process to what works best with your organization. 

13. Ongoing communication
Since tests are ongoing, you need to establish modes of communication within your organization. Depending on the size and culture of your organization, here are some suggestions:

  • Newsletter (weekly): Communicate what tests are currently running, tests that are upcoming, and tests that have recently concluded (in analysis). Reference the test charters in your newsletter to mitigate questions about details. 

  • Test launch (as needed): When a test launches, it’s critical that stakeholders are aware. The worst case is someone seeing a test, thinking something is broken, and causing a fire drill to fix it. 

  • Test results (monthly): Holding monthly reviews that anybody can attend is a great way to tout the work the team has done and provide opportunities to answer questions.  


Summary

Embarking on a comprehensive program can be daunting; we hope you found this guide to conversion rate optimization (CRO) helpful. Do you have any best practices you want to share? Provide feedback in the comments.