# Measuring Confidence

{% hint style="info" %}
Confidence is the likelihood that a random sample of your test audience will show a difference from the baseline mean as large as, or larger than, the one observed.
{% endhint %}

When you set a baseline in your report, AdLibertas calculates the statistical significance *(p-value)* of your comparison audiences. The confidence percentage returned (1 – *p*) is the likelihood that a random sample will show a difference from the baseline mean as large as, or larger than, the one observed.

### **How to include confidence in AdLibertas reporting**

When creating a [<mark style="color:blue;">report</mark>](https://docs.adlibertas.com/the-platform/user-level-audience-reporting/creating-reports/creating-a-new-user-report) with more than one [<mark style="color:blue;">audience</mark>](https://docs.adlibertas.com/the-platform/user-level-audience-reporting/creating-reports/advanced-audience-builder), you’ll be able to select a baseline. That baseline is the basis of your comparison: all other audiences or variants are compared against it.

![](https://336314087-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaO8orL32xNzJW4MhnNef%2Fuploads%2F5zwFy2ZiAsW7tmOh5JDw%2FScreen%20Shot%202022-12-28%20at%2011.40.46%20AM.png?alt=media\&token=3dbc9e5d-324e-4704-b623-47dc43f2f045)

### **How confidence is calculated**

[<mark style="color:blue;">Statistical significance</mark>](https://www.investopedia.com/terms/s/statistically_significant.asp) is determined by calculating the [<mark style="color:blue;">p-value</mark>](https://www.investopedia.com/terms/p/p-value.asp) of a [<mark style="color:blue;">one-tailed test</mark>](https://www.investopedia.com/terms/o/one-tailed-test.asp). [<mark style="color:blue;">Confidence</mark>](https://www.investopedia.com/terms/c/confidenceinterval.asp) is calculated as (1 – *p*-value).
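As an illustration of the calculation, here is a minimal sketch that derives confidence as (1 – *p*) from a one-tailed test on the difference of two audience means. This uses a z-test with unequal variances for simplicity; it is an assumption for illustration, not AdLibertas's exact implementation.

```python
import math
from statistics import mean, variance

def one_tailed_confidence(baseline, variant):
    """Confidence (1 - p) that `variant` outperforms `baseline`,
    via a one-tailed z-test on the difference of means.
    Illustrative sketch only -- not the platform's exact method."""
    n_a, n_b = len(baseline), len(variant)
    # Standard error of the difference of means (unequal variances)
    se = math.sqrt(variance(baseline) / n_a + variance(variant) / n_b)
    z = (mean(variant) - mean(baseline)) / se
    # One-tailed p-value: probability of seeing a difference this large
    # or larger if there were truly no difference between the audiences
    p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 1 - p

# Hypothetical revenue-per-user values for two audiences
baseline = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1]
variant  = [1.4, 1.6, 1.3, 1.5, 1.7, 1.2, 1.5, 1.6]
print(f"Confidence: {one_tailed_confidence(baseline, variant):.1%}")
```

Because the variant's values sit well above the baseline's here, the computed confidence is close to 100%; overlapping audiences would return a much lower figure.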

### **Understanding the outcome**

Confidence is the likelihood that a randomly selected pair of users, one from A and one from B, will show a difference as large as, or larger than, the displayed difference in means.

![](https://336314087-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaO8orL32xNzJW4MhnNef%2Fuploads%2FUZPZXH3LUWiuRooGNC4a%2FScreen%20Shot%202022-12-27%20at%2010.43.28%20AM.png?alt=media\&token=216f4256-feaa-48b4-b6c8-2551158a412d)

### **Best practices for using confidence in your reporting**

Scientists and statisticians typically strive for [<mark style="color:blue;">95%+ confidence levels</mark>](https://www.investopedia.com/terms/c/confidenceinterval.asp) in experiments, but you’ll find a range that works for your purposes. An AB test with <50% confidence isn’t necessarily a failure; it simply means fewer than half of the users in the experiment will fall closer together than the mean value.

Also, as with mean values, a small number of users will distort your confidence levels, so keep in mind the number of users in each audience at the end of the experiment.
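A toy example (with hypothetical revenue-per-user values) shows why small audiences are risky: a single outlier "whale" user can dominate a small audience's mean, while the same user barely moves a large audience's mean.

```python
from statistics import mean

# Hypothetical revenue-per-user values; 25.00 is a single "whale" user
small_audience = [0.10, 0.12, 0.08, 0.11, 25.00]
large_audience = [0.10, 0.12, 0.08, 0.11] * 250 + [25.00]

# In the small audience the whale dominates the mean;
# in the large audience the mean stays close to a typical user
print(f"Small-audience mean: {mean(small_audience):.2f}")
print(f"Large-audience mean: {mean(large_audience):.2f}")
```

The same distortion feeds into the confidence calculation, which is why results from small audiences should be treated cautiously.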

Looking for more tips? Read our article on [<mark style="color:blue;">setting up an effective framework for AB tests</mark>](https://www.adlibertas.com/the-correct-framework-for-ab-testing-your-mobile-app/).
