Results according to the Bayesian method

What the Bayesian method promises you

By using the Bayesian reporting method, you rely on a deductive approach to data analysis. Results are generated faster than with the classical method and are just as safe, which is ideal for users with low traffic or urgent tests who need quick answers to their A/B comparisons.

This method combines the actual data generated by the test with a priori knowledge coming from previous studies or expert opinion. It provides a posteriori information, such as an estimation of future values for the conversion rate and the improvement rate. This anticipation of trends is called « Forecast » on the reporting page.
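To give an idea of what this combination looks like in practice, here is a minimal sketch of Bayesian updating for a conversion rate, assuming a simple Beta-Binomial model. The prior parameters and figures below are purely illustrative assumptions, not the values used by the reporting engine.

```python
# Minimal sketch of Bayesian updating for a conversion rate, assuming a
# Beta-Binomial model (illustrative only; not the tool's actual implementation).
from scipy import stats

# A priori knowledge, e.g. previous studies suggesting a conversion rate around 3%.
# Beta(3, 97) is one way to encode that belief; these numbers are assumptions.
prior_alpha, prior_beta = 3, 97

# Actual data generated by the test (hypothetical figures).
visitors = 1000
conversions = 38

# A posteriori distribution of the conversion rate.
posterior = stats.beta(prior_alpha + conversions,
                       prior_beta + visitors - conversions)

print("Estimated conversion rate:", round(posterior.mean(), 4))
print("95% credible interval:", [round(x, 4) for x in posterior.interval(0.95)])
```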

Accessing your Bayesian results

When you click on « SEE RESULTS » from the Dashboard, you access the classical reporting page by default.

 

The « b » button is located among the other tools at the top of your results page. It gives you access to the results generated by Bayesian statistics.

When the background of the button is white, it means that you are on the classical reporting page.

 

When the button appears pressed, it means that you are on the Bayesian reporting page.

 

To switch from one view to the other, simply click the button.

The color of the « b » button corresponds to the level of reliability assigned to the Bayesian results. Before even opening the page, you can tell whether the results of your test are ready to be used.

Warning! It is not possible to access the Bayesian reporting page in the following cases:

  • 100% of your traffic is allocated to the original
  • The number of visitors in your test is equal to 0

How the reporting page works

The structure of the Bayesian reporting page is quite similar to that of the classical reporting page.

However, some elements differ.

New indicators appear, such as the probability to beat the original, the reliability of results according to Bayes, and the forecast.

Several graphs disappear on this page, and only the conversion rate is displayed.

A few definitions ... 

Probability to beat the original: the probability that a variation beats the original page by achieving a higher conversion rate on a given objective.

If the traffic allocated to the original is 0%, the variations, which then share 100% of the traffic, no longer compete with the original. In that case, we talk about the « Probability to be the winning variation ».
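To illustrate how such a probability can be estimated, here is a short sketch assuming Beta posteriors for both conversion rates and a Monte Carlo comparison. The figures, priors, and function name are hypothetical and do not describe the tool's exact computation.

```python
# Illustrative sketch: estimating the probability that a variation beats the
# original, assuming Beta posteriors for both conversion rates and a simple
# Monte Carlo comparison (not necessarily the tool's exact method).
import numpy as np

rng = np.random.default_rng(0)

def prob_to_beat_original(conv_orig, visitors_orig, conv_var, visitors_var,
                          n_samples=100_000):
    """Estimate P(conversion rate of the variation > conversion rate of the original)."""
    # Uniform Beta(1, 1) priors updated with the observed conversions.
    orig = rng.beta(1 + conv_orig, 1 + visitors_orig - conv_orig, n_samples)
    var = rng.beta(1 + conv_var, 1 + visitors_var - conv_var, n_samples)
    return (var > orig).mean()

# Hypothetical figures: 30/1000 conversions on the original, 42/1000 on the variation.
print(prob_to_beat_original(30, 1000, 42, 1000))  # roughly 0.92 with these numbers
```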

Reliability of results according to Bayes: this corresponds to the trust rate. It is calculated on a 5-level scale, which is easy to interpret thanks to the guiding captions displayed right under the graduations. Results are completely reliable once all 5 levels have been reached. Please note that, to guard against a change in the trend, you should not act on your results before reaching a reliability level of at least 4 out of 5.

Note that the reliability of results according to Bayes is also the indicator used by the « b » button to show whether the results can be used.

The green color means that level 5 has been reached by at least one variation.

The orange color means that the highest level reached by any variation is level 4.

The red color indicates that your results are not ready to be used yet.

Forecast

The forecast is the anticipated value towards which the results will converge. This data is available only for the conversion rate and the improvement rate.

Forecasting a future value requires collecting a certain amount of data, so this indicator may not appear during the first days after the launch of the test.
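As a rough illustration of this convergence idea, the sketch below shows how a Bayesian estimate of the conversion rate (here, under an assumed Beta-Binomial model with a uniform prior and hypothetical figures) stabilises as more data is collected; the forecast can be thought of as the value this estimate settles on.

```python
# Rough illustration of the convergence idea behind the forecast: as data
# accumulates, the posterior estimate of the conversion rate stabilises.
# (Assumed Beta-Binomial model with a uniform prior; hypothetical figures.)
from scipy import stats

# (conversions, visitors) observed at successive points in time
observed = [(5, 200), (18, 600), (41, 1500), (83, 3000)]

for conversions, visitors in observed:
    posterior = stats.beta(1 + conversions, 1 + visitors - conversions)
    low, high = posterior.interval(0.95)
    print(f"{visitors:>5} visitors: estimate {posterior.mean():.4f}, "
          f"95% interval [{low:.4f}, {high:.4f}]")
```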

The forecasts appear directly in the table, right under the current values of the conversion rate and the improvement rate.

The forecast value also appears in the conversion rate graph, in the grey area on the right. It is represented by a circle in the color of the variation it corresponds to.

 

Use the dotted horizontal lines to compare the current evolution of your conversion rate with its forecast value.

My results are very different, is this normal?

Both statistical methods lead to equivalent results, but they do not guarantee a perfect match between the two. It is therefore normal to observe differences between some rates.

In some cases, two different variations can be declared winners of the same test.

Make sure that the trust rate is at its maximum for both methods before comparing the two sets of data. If it is, and doubt remains, we advise you to rely on the results of the classical method.
