

Creative Hypothesis / CXL Scholarship Week 1 Review

In this week’s course study with CXL Institute, Brian discussed what Conversion Rate Optimization is. Simply put, in my own words, CRO is the balance between knowing what and how much to test in each A/B test, knowing why we are testing those changes and what knowledge we will gain after running the test, and knowing how to form a fully functioning hypothesis.


It makes perfect sense how Nike’s conversion rate tanked after the rebrand shown in the course study. In that test, Nike made the mistake of changing dozens of variables at the same time in a single test. Not only did the messaging and tone of the copy change, but the whole brand identity shifted, the photography was replaced, and all of the above-the-fold content changed, which in my opinion is the most important slice of any website. By running a complete overhaul, testing the control (the old site) against the full rebrand (variation A), they found the test failed, and failed quickly. With a losing test like that, they had no idea which changes actually made it fail. They had to start at square one and test backward to learn anything further. To determine which changes had the biggest negative impact, they would have to break out the design, photography, UX placement, and copy changes separately.

Here is a further example of where this gets difficult: say Nike had a test bias (a data-backed hunch) that one small portion of that huge losing test would have produced a positive lift in conversion rate if pulled out separately. They would now like to break that single idea out into a new, separate test. After the original test lost, I’ve found it is much harder to get stakeholder buy-in for that specific idea because it was already part of such a large losing test. In CRO, the resources and time required to build out an experiment are things to take into consideration as well.

I do see the benefits of breaking experiments into smaller tests in order to determine which changes support or break your hypothesis. And small wins over time can develop into an overall greater experience, because you are able to iterate quickly and implement the winners. That said, I also see the strength of a full overhaul test at times, if there is enough data to back up that decision. Take UX as a further example: a consumer takes roughly 3 to 3.5 seconds on average to decide whether they trust a company, especially if this is the first touchpoint the customer has with the brand. One way I have seen a full overhaul test work is when we took a successful landing page that was designed desktop-first and fully optimized its mobile experience.

We removed unnecessary spacing, created visual hierarchy, removed friction, and even added friction in some slices to fully mobile-optimize the page design. No content was changed other than making it fit the device more strategically. This test ended up being a sizable win, and in my opinion we benefited most from choosing the full overhaul rather than breaking the spacing changes out into separate phases. In hindsight, we were able to prove a positive impact quickly, and we could have tested backward if need be. This is a great example of rules being set as guidelines that can be broken if we feel it serves the customer best.

I loved his explanation of how much UX ties into each test experiment, and how he referenced the importance of pulling out UX changes as a separate A/B test by themselves. UX matters. It’s huge. His emphasis on copy, and on how equally important a role messaging can play in testing, was refreshing as well.

I fully believe that even if a test is a loser, there is still an opportunity to pull knowledge from it. I loved the emphasis on how important it is to be strategic about how much we test at once, so we can determine which changes make the biggest impact and how.

To come up with a solid hypothesis, first clearly state the problem you are trying to solve, making sure it calls out the focus of the experiment. Then define the variables so that three things hold: there must be a way to prove the hypothesis true, there must be a way to prove the hypothesis false, and the results of the hypothesis must be reproducible. I love how the X of a hypothesis can be anything. User flow, user behavior, anything under the sun; there is no confinement. I understand that hypotheses are extremely scientific, although I would push to say that there is room, even in a world as data-driven as testing, for feeling.
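
To make the “provable true, provable false” part concrete, here is a minimal sketch (my own, not from the course) of how a finished test could be evaluated with a standard two-proportion z-test. All traffic and conversion numbers are hypothetical.

```python
# A minimal sketch: once the test runs, a two-proportion z-test tells us whether
# the variant's lift is statistically meaningful, which is what makes the
# hypothesis provable or disprovable. Numbers below are made up for illustration.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return z statistic and two-sided p-value for control (A) vs. variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))                      # two-sided test
    return z, p_value

# Hypothetical results: control converts 500/10,000; variant converts 570/10,000.
z, p = two_proportion_z_test(500, 10_000, 570, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 would support the hypothesis
```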

To know the data so well, to understand the customer’s problem, to empathize and understand why they are having the problem they are having, and to know that they are coming to you specifically to solve that same problem… that’s the sweet spot. Not for manipulation. Not for cost. It’s our duty to serve them well because we fully believe we have the best product on the market. And that should be the case for any salesman or saleswoman in any type of industry.

Managing ideas is one of my favorite lessons, and I’m excited to dig in further over the next 12 weeks. Creating lists is one of the most important aspects of managing ideas and a great way to validate ideas from stakeholders. By creating a roadmap, you can easily show your team and stakeholders the reasoning behind choosing to test one idea over another. Audience size and the location where your test is running will be big factors in how long it takes for a test to be pronounced a winner or a loser.
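
As a rough illustration of turning an idea list into a roadmap, below is a small sketch that ranks a few invented test ideas with an ICE-style score (Impact, Confidence, Ease). The course may teach a different prioritization framework; the ideas and scores here are made up.

```python
# A small sketch of ranking test ideas into a roadmap order using an ICE-style
# score (Impact, Confidence, Ease, each 1-10). Ideas and scores are invented.

ideas = [
    {"idea": "Rewrite above-the-fold headline", "impact": 8, "confidence": 6, "ease": 9},
    {"idea": "Mobile-optimize landing page spacing", "impact": 7, "confidence": 7, "ease": 5},
    {"idea": "Full rebrand of product page", "impact": 9, "confidence": 3, "ease": 2},
]

# Average the three scores; the highest average goes to the top of the roadmap.
for item in ideas:
    item["score"] = (item["impact"] + item["confidence"] + item["ease"]) / 3

for item in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f'{item["score"]:.1f}  {item["idea"]}')
```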

The placement of where the test will live is also a critical aspect to consider. The larger the audience on the page where the test lives, the faster your test will be able to reach significance. Along with audience size, the source of that traffic is another factor to take into consideration. Customers finding your test through a paid funnel versus an organic funnel can have different intents, which can skew a test result.
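
To show why audience size drives how long a test takes, here is a back-of-the-envelope sketch using the standard sample-size formula for comparing two proportions. The baseline conversion rate, target lift, and daily traffic are all hypothetical.

```python
# A back-of-the-envelope sketch: how many visitors per variant are needed to
# detect a given lift, and how long that takes at a given daily traffic level.
# Baseline rate, lift, and traffic numbers are hypothetical.
from math import sqrt, ceil
from scipy.stats import norm

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a relative lift on a baseline rate."""
    p_var = p_base * (1 + lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    p_bar = (p_base + p_var) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_base * (1 - p_base) + p_var * (1 - p_var))) ** 2
    return ceil(numerator / (p_var - p_base) ** 2)

n = sample_size_per_variant(p_base=0.05, lift=0.10)   # 5% baseline, 10% relative lift
daily_visitors_on_page = 4_000                        # hypothetical page traffic
days = ceil(2 * n / daily_visitors_on_page)           # two variants split the traffic
print(f"{n} visitors per variant, roughly {days} days at {daily_visitors_on_page}/day")
```

Halve the page traffic in this sketch and the runtime roughly doubles, which is why a low-traffic placement can stall a test for weeks.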

Further in the course, Brian discusses how important it is, as a team, not to get too caught up in thinking each test you run has to be a big win. If you are choosing beneficial tests that support your main goal and hypothesis, small and quick wins and losses can add up and have a strong impact over time.
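
A quick bit of arithmetic shows why that compounding works; the lift numbers below are purely illustrative.

```python
# Purely illustrative: ten tests that each win a modest 2% lift compound to
# roughly a 22% overall improvement, because each win multiplies the last.
lift_per_win = 0.02
wins = 10
total = (1 + lift_per_win) ** wins - 1
print(f"{wins} wins of {lift_per_win:.0%} each ≈ {total:.1%} overall lift")
```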

Very excited for next week and to keep digging in. I loved the Nike test experiment; it gave us a great example to learn from.

Morgan