
Statistics Module

Tools to help you with analyzing and interpreting the uncertainty and variation in your reports.



Why is the statistics module important?

When dealing with large datasets, you'll often want to understand the certainty and variation in your reports: that is, how confident you can be that your measurements accurately represent the entire population, and how repeatable your findings will be. We aim to provide high-level statistical outcomes while remaining approachable for non-statisticians.

Audience Representation

Every report has a section in the table labeled Audience Representation, which shows the percentage of measured users compared to the total audience size. In the table summary view, this is returned against the average daily active users; when the data table is expanded, you will see daily audience representation.

For example, if your audience has 100,000 users and the audience representation on a given day is 1%, you had 1,000 active users that day. The purpose of this measurement is to ensure you're not unknowingly drawing conclusions about the performance of a large user group from a very small number of users.
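The arithmetic behind the example above is simple enough to sketch. The helper name and values here are illustrative, not part of the AdLibertas platform:

```python
def audience_representation(daily_active_users: int, audience_size: int) -> float:
    """Return daily active users as a percentage of the total audience."""
    return 100.0 * daily_active_users / audience_size

# An audience of 100,000 users with 1,000 active users on a given day
# is 1% represented, as in the example above.
print(audience_representation(1_000, 100_000))  # 1.0
```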

Measuring Confidence

1st and 2nd Standard Cohort Deviation (SD)

1st std dev shading incorporates 68% of users; 2nd std dev shading incorporates 95% of users in an audience.

The 1st standard cohort deviation shows the dispersion of the 68% of individual cohort LTVs closest to the mean. The 2nd standard cohort deviation shows the dispersion of the 95% of individual cohort LTVs closest to the mean. In other words, the shaded area represents the 68% or 95% of users, respectively, that fall closest to the mean.
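A rough sketch of how those shaded bands can be computed from a set of cohort LTVs. This is an illustration, not the platform's code, and the 68%/95% coverage holds exactly only for roughly normally distributed values; the sample LTVs are made up:

```python
import statistics

def sd_bands(cohort_ltvs):
    """Return (lower, upper) bounds of the 1st and 2nd standard-deviation
    bands around the mean of a set of cohort LTVs."""
    mean = statistics.mean(cohort_ltvs)
    sd = statistics.stdev(cohort_ltvs)  # sample standard deviation
    return {
        "1st": (mean - sd, mean + sd),          # covers ~68% if roughly normal
        "2nd": (mean - 2 * sd, mean + 2 * sd),  # covers ~95% if roughly normal
    }

bands = sd_bands([0.80, 0.95, 1.00, 1.05, 1.20])
```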

Standard Cohort Error (SE)

The standard cohort error of the mean is a measure of the variability of daily cohort means around the population (displayed) mean.
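The standard error described above can be sketched as the standard deviation of the daily cohort means divided by the square root of the number of cohorts. This is the textbook formula, shown for illustration with made-up values, and not necessarily the platform's exact computation:

```python
import statistics

def standard_cohort_error(daily_cohort_means):
    """Standard error of the mean: the expected variability of daily cohort
    means around the overall (population) mean."""
    sd = statistics.stdev(daily_cohort_means)
    return sd / len(daily_cohort_means) ** 0.5

se = standard_cohort_error([1.10, 0.92, 1.05, 0.98, 1.00])
```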

What's the difference between the SD & SE?

The standard deviation (SD) measures the amount of variability from the individual cohorts to the mean, while the standard error of the mean (SE) measures how far a sample mean of the data is likely to be from the population mean. The SE is always smaller than the SD.

Statistics on Forecasted Values

We also allow you to apply and visualize statistics on forecasted LTVs. We apply the statistics calculation to the population mean, then use the projection model chosen in the Forecasting Module. The idea is to show the projected dispersion of user values.

Confidence measures the percentage chance that a random sample of your audience will show as great a difference as, or a greater difference than, the mean of the set baseline. You can read more about how to set and use confidence in the Measuring Confidence section.

Deviations are generally used to measure the performance variability of your users compared to the mean. High deviations mean audience values are generally far from the mean, while low deviations mean user values are clustered close to the mean. You can read more about how deviations can help with analysis here.

Since each user cohort's average (by day of install) can differ from the overall mean, the standard error shows how much the cohort means differ from the population mean. The standard cohort error of the mean is a way of knowing how close the average of cohort samples is to the average of the whole group, and how precise the overall LTV average is in relation to an individual cohort's LTV. The smaller the standard error, the more representative a random day's LTV will be of the overall population. Conversely, a large standard error indicates an individual cohort's LTV is less representative of the population mean. For more information on standard error, please see this article.
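As a wrap-up, the statistics discussed in this section (deviations, standard error, and confidence) can be sketched together for a toy set of daily cohort LTVs. The bootstrap-style confidence estimate, the baseline value, and the sample LTVs below are all assumptions for illustration, not AdLibertas's exact methodology:

```python
import random
import statistics

cohort_ltvs = [1.10, 0.92, 1.05, 0.98, 1.00, 1.15, 0.90]  # toy daily cohort LTVs
baseline_mean = 0.95  # assumed baseline audience mean

mean = statistics.mean(cohort_ltvs)
sd = statistics.stdev(cohort_ltvs)   # spread of individual cohorts around the mean
se = sd / len(cohort_ltvs) ** 0.5    # expected spread of a sample mean

# Bootstrap-style confidence sketch: the share of resampled cohort means
# that meet or beat the baseline mean.
rng = random.Random(0)
n_samples = 2000
hits = sum(
    statistics.mean(rng.choices(cohort_ltvs, k=len(cohort_ltvs))) >= baseline_mean
    for _ in range(n_samples)
)
confidence = 100.0 * hits / n_samples

# As noted above, the SE is always smaller than the SD.
assert se < sd
```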