CXL Scholarship — 9th Week Review

Mohammad Sammak
10 min read · Nov 1, 2020


This is the ninth week that I have been through this great program at CXL, and God knows how many great things I have learned so far. It is a once-in-a-lifetime chance that has been given to me, and it is awesome, especially during the Covid-19 pandemic. Let's see what has been planned for me to learn this week.

18- A/B Testing Mastery by Ton Wesseling

  • I'm eager for more practical knowledge. Will this lesson give it to me? Let's see.
  • The tutor seems very knowledgeable and experienced in the topic. He is a subject-matter expert, and I hope to have a good experience in this course.
  • He talked about the periods over which A/B testing has evolved: from the beginning of the internet and the early web browsers to the introduction of cookies, Google Optimize, the launch of VWO and Optimizely, and all of the other advances in this field. It was really good to learn the background of the field that I am in love with.
  • Then Ton spoke about the migration from client-side tagging to server-side tagging in the near future.
  • He has more than a decade of experience in conversion optimization and A/B testing in particular. Who else would be a better option to learn this subject from? At first sight, he seems as valuable as the great Peep Laja.
  • A/B testing helps you make better, more trustworthy decisions. The tutor talked about the way the health industry works and showed me a pyramid of evidence with different levels of certainty.
  • In the health industry, there is a thing called an RCT (Randomized Controlled Trial), which is quite similar to A/B testing. The main goal of an RCT is to verify that a hypothesis actually produces results and that those results are repeatable.
  • When do we need to use A/B testing? Actually, we need it in countless situations: sometimes just for the sake of testing, to find out which elements are neutral to the user and which elements are the key value drivers of our web pages.
  • Wow, the guy was wearing a blue blazer and he just took it off! He is now just himself and we are going to see a better story being told.
  • Ton introduced a framework that he has personally invented. It is named ROAR and has four main stages; the first two are Risk and Optimization.
  • In the first stage, you need at least 1000 conversions per month before you can start running A/B tests at all, and roughly 10000 conversions per month before you can really move into the optimization stage.
  • You know what? The same things that Giorgi was trying to teach me in the previous lesson, now I am learning from Ton.
  • Now I know what a false positive and a false negative are. A false positive is a variant that isn't actually a winner in the real world, but your measurements announce it as one.
  • A false negative is a variant that actually is the winner in the real world, but your measurements tell you it is a loser. Are you getting what I am getting too? Great. (I put a tiny simulation of false positives right after this list.)
  • It is getting interesting. The guy just talked about the things you can and should measure in your A/B tests. He strongly advised against measuring clicks and instead told us to look at changes in behavior and transactions.
  • He also talked about something called an OEC (Overall Evaluation Criterion) that I didn't quite get. The point is that you should not measure just for the sake of measurement itself. You should aim to measure the impact, the real impact, and nothing else.
  • The next one was a big one. It mainly covered a model called the 6V model. As I understood it, this model is designed to guide you in optimizing your tests.
  • It was mostly theoretical, but it had sections like value, versus, verify, view, validate, and voice. It was good-to-know stuff, but I don't know what kind of applications someone could find for it.
  • And again, we heard the names of B.J. Fogg and Daniel Kahneman. Looking forward to seeing more useful lessons.
  • The next part was about making a hypothesis. It is good for you and your team to be aligned on something. You have to know where you want to go and a hypothesis will give you a clue.
  • A hypothesis is mainly based on self-efficacy, or at least that is what I think Ton said. Self-efficacy is your belief that you can actually do something; once you believe you can do it, you are much more likely to actually do it.
  • You know you want to make things better after the A/B test is done. You need to have some hypotheses and run your test based on them.
  • He also gave a formula for making a hypothesis. You can simply point out the current situation, the outcome, the means, and one other thing that I can’t recall now.
  • So again, I went through a bunch of prioritization terms and methods. You have to know about PIE, ICE, and a new one that Ton says he and his team have made, named PIPE. They all focus on similar things.
  • Ton says these can all make an impact if the hypothesis, the place, and the chance of making an effect are relevant and considerable.
  • He used a new spreadsheet file and shared it with the audience. I am really sick of all of these spreadsheets teachers share with their students. Suppose you don’t have them. What will you do then?
  • Now is the time to start designing and developing. Ton says you have to make sure not to run more than one variant (challenger) against the control version. He says that kind of practice messes everything up!
  • He then talked about development techniques. It was said that you should never use a WYSIWYG editor. I don't know exactly why, but that is what he said.
  • He then talked a little bit about server-side vs client-side tagging and the pros and cons of each one. It seems that the trend is server-side.
  • And after all of these small and big steps, you need to be sure about the quality of your tests. He specifically talked about QA. Is it feasible to check the A/B test on every single device? Surely not; nobody has that much time and those resources.
  • The next part was mainly focused on Google Optimize which I really like. It was a simple implementation course on how to use this great tool.
  • Ton said (as expected) that you should never modify a test while it is being measured and processed. It will skew the data and is a really bad practice.
  • Pre-test and post-test were great things that I didn't know about. One covers users who don't yet have your cookie on their devices, and the other addresses users who have already visited your website (and therefore have the cookie).
  • When are you allowed to stop a test? It is once again related to all of those p-value and statistical power terms that we ran through and that I never entirely got.
  • And once you think your test is done and the results can be used, don't try to be more suspicious. It is done, and you shouldn't let it run any longer.
  • Ton also talked about CUPED and Bayesian methods to shorten the test period. Those methods looked very mathematical and I didn't understand them!
  • Now you are running your experiments. You have to constantly check if everything is going on as planned. Is the page working as you want? Are you having an impact on income? Are you losing or gaining money? Has anyone accidentally changed the page or pages you were testing on? And a lot of other things you should be wary about.
  • When you are measuring the results, make sure to measure users, not sessions or page views.
  • You need to separate the users who have paid you from the ones who haven't paid anything.
  • Sampling the data must be avoided. This data needs to be looked at holistically and sampling might ruin the whole experiment.
  • Then he talked about how much data from these experiments you should publish. Who should be aware of what in your organization, and what kind of access should be granted to specific people?
  • And you know what? This course, too, is becoming boring, because it is also covering stuff that I personally don't like.
  • Another lesson on measuring new things that I couldn't quite grasp, with new terms like FDR, TDR, and Type M errors. You know what, I seriously want a teacher who speaks English without a distinctive accent. I am not being mean at all; I just want to say that focusing on the accent distracts my attention.
  • There were one or two formulas that needed to be understood, but Ton pointed to an online calculator instead. (I sketched a similar calculation in R right after this list.)
  • I liked the lesson in which Ton spoke about scaling up experimentation. He said that conversion rate optimization shouldn't be just one part of a company; instead, every team and every member of the company should have this spirit of experimenting and acting based on data. How good would that future be?
  • You can either work more effectively or more efficiently. With one of them you increase the quantity, and with the other, quality is what you are after.
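
Since Ton only pointed to an online calculator for those sample-size formulas, I tried a minimal sketch of the same kind of calculation in base R. The numbers (a 5% baseline conversion rate and a hoped-for 10% relative uplift) are purely my own illustration, not figures from the course; power.prop.test ships with R's built-in stats package.

```r
# Minimal sample-size sketch using base R's power.prop.test.
# Assumed numbers (purely illustrative):
#   baseline conversion rate: 5%
#   minimum detectable relative uplift: 10% (i.e. 5.0% -> 5.5%)
#   significance level 0.05, statistical power 0.80
baseline <- 0.05
uplift   <- 0.10

result <- power.prop.test(
  p1        = baseline,
  p2        = baseline * (1 + uplift),
  sig.level = 0.05,
  power     = 0.80
)

# Required number of visitors *per variant*
ceiling(result$n)
```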
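
And because false positives confused me at first, here is a tiny simulation I wrote for myself (my own illustration, nothing from the course): it runs many A/A tests in which both versions truly convert at the same rate, and roughly 5% of them still come out "significant" at the 0.05 level. Every one of those is a false positive.

```r
# A/A simulation: both "variants" share the same true conversion rate,
# so every significant result is by definition a false positive.
set.seed(42)

true_rate  <- 0.05   # assumed identical conversion rate for A and A'
n_visitors <- 10000  # visitors per variant in each simulated test
n_tests    <- 1000   # number of simulated A/A tests

p_values <- replicate(n_tests, {
  conv_a <- rbinom(1, n_visitors, true_rate)
  conv_b <- rbinom(1, n_visitors, true_rate)
  prop.test(c(conv_a, conv_b), c(n_visitors, n_visitors))$p.value
})

# Share of "significant" results; expected to hover around 5%.
mean(p_values < 0.05)
```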

19- Advanced Experimentation Analysis by Chad Sanderson

  • I simply can't believe what I am hearing. The new teacher is talking about the R programming language right off the bat! What is going on? What has happened to the world?
  • Am I going to be forced to learn this programming language just for the sake of the course?
  • I will tell you something and you might not believe me: it really is a course focused on the R programming language. I have now downloaded both R and RStudio to practice all of the experiments. Who knows? Maybe it is actually beneficial!
  • Chad started by explaining what he means when he says we need to work independently of the tools. He literally started building the metrics he needed in RStudio.
  • He also introduced a concept called the metric hierarchy. The most important metric in the hierarchy is called the north star. It represents the money-making metric for the organization; for example, he pointed to Revenue Per Visitor (RPV).
  • The tier 2 metrics are the ones that are less important, but still valuable to the company. For example, Average Order Value (AOV) and Conversion Rate (CR) are these kinds of metrics.
  • Finally, we have the tier 3 metrics, which are almost always misunderstood and falsely taken as important. I am referring to things like page views or the number of clicks. (I sketched the north-star and tier 2 metrics in R after this list.)
  • This guy is a nerd. He uses RStudio like he is drinking water! I just saw piles of t.test and prop.test calls in R. It is obvious that Chad doesn't want to teach us the R language itself, but rather show us how this language can replace the tools you feel dependent on.
  • You know what? This p-value isn't going to leave me alone! Wherever I go, it comes with me! I have to deal with it anyway!
  • But it is actually great to see how our beloved tools calculate their numbers. RStudio shows us how everything is being calculated in real time.
  • I liked this ggplot2 thing. It is very fun to mess with. (There is a tiny plotting example at the end of this list.)
  • I now know that A/B testing was popularized from the Randomized Controlled Trials (RCTs) that scientists have run for decades. It is heavily based on randomization.
  • This randomization thing is going to be vital in the whole A/B testing and I know it because Chad has dedicated a specific lesson to it.
  • Every time you run a test and get a significant result, you have to think about two separate hypotheses. You are either making a false positive error or the result is actually that significant. Chad inspected all of these things in R very masterfully.
  • I can't talk about any functions of R, because I am simply just getting started with it. This is just an introduction and everybody knows it.
  • One great thing: R can easily read and use CSV, XLS, JSON, and TXT files, just like a real visualization tool. Wouldn't you think it was originally built with visualization as its main feature in mind?
  • The next lesson was about the p-value and statistical significance. Chad talked a lot about it and about how to measure it in RStudio. Good for him that he can work with this tool so well. (A minimal prop.test sketch appears at the end of this list.)
  • I am not entirely getting what is being done, but I can at least try to understand the key points being discussed. That is what I can do.
  • Lesson 6 felt much like the other lessons to me. I'm just trying to make sense of the lessons and understand what is actually going on.
  • It was full of regressions and statistical measures as usual. There was a term called the F-test, which focuses on variance.
  • Chad is teaching his lessons and I’m thinking about the things that I can do for the company that I am working for. Not that these new materials aren’t useful and interesting to me, but I think I need to use the insights that I am getting from here in real-life scenarios.
  • I know that correlation and causation are two different things, but I should know it to the extent that I don’t make mistakes of this kind in the future.
  • Chad closed the course with some reiterations. He talked about almost every function and every trick he used during this course and tried to make them stick in my mind.
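
To make the metric hierarchy more concrete for myself, here is a minimal sketch in R that computes the north-star and tier 2 metrics Chad mentioned (RPV, AOV, and conversion rate) from a tiny made-up visitor table. The data frame and its column names are my own assumptions, not anything from the course.

```r
# A made-up table: one row per visitor, with total revenue (0 = no purchase).
visitors <- data.frame(
  visitor_id = 1:8,
  revenue    = c(0, 49.90, 0, 0, 120.00, 0, 35.50, 0)
)

converted <- visitors$revenue > 0

rpv <- mean(visitors$revenue)             # north star: Revenue Per Visitor
aov <- mean(visitors$revenue[converted])  # tier 2: Average Order Value
cr  <- mean(converted)                    # tier 2: Conversion Rate

c(RPV = rpv, AOV = aov, ConversionRate = cr)
```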
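
This is roughly the kind of prop.test call I kept seeing on Chad's screen, reconstructed as a minimal sketch with made-up visitor and conversion counts (the numbers, and the idea of loading them from a CSV file, are my assumptions, not Chad's actual data).

```r
# Hypothetical results of a single A/B test:
# the control saw 10,000 visitors and 520 conversions,
# the variant saw 10,000 visitors and 585 conversions.
visitors    <- c(control = 10000, variant = 10000)
conversions <- c(control = 520,   variant = 585)

test <- prop.test(conversions, visitors)

test$p.value   # the p-value everyone keeps talking about
test$estimate  # the two observed conversion rates

# The same counts could just as well be loaded from a file, e.g.:
# results <- read.csv("experiment_results.csv")
```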
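
And because ggplot2 really is fun to mess with, here is a tiny sketch that plots the observed conversion rates from the made-up test above. The numbers are the same illustrative ones; ggplot2 needs to be installed separately with install.packages("ggplot2").

```r
library(ggplot2)

rates <- data.frame(
  variant         = c("control", "variant"),
  conversion_rate = c(520 / 10000, 585 / 10000)
)

# Simple bar chart of the observed conversion rate per variant.
ggplot(rates, aes(x = variant, y = conversion_rate)) +
  geom_col() +
  labs(
    title = "Observed conversion rate per variant",
    x     = NULL,
    y     = "Conversion rate"
  )
```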

Closing Thoughts

I think this was a great week, especially because of the final course I’ve been through. It was a topic that was entirely new to me and gave me motivation for thinking about data more seriously. Of course other parts of this week were also great. I am so thankful for having the chance to learn about them.
