balance Quickstart: Analyzing and adjusting the bias on a simulated toy dataset¶

'balance' is a Python package that is maintained and released by the Core Data Science Tel-Aviv team at Meta. 'balance' performs and evaluates bias reduction through weighting, for a broad set of experimental and observational use cases.

Although balance is written in Python, you don't need a deep understanding of Python to use it. In fact, you can simply use this notebook: load your data, change a few variables, re-run the notebook, and produce your own weights!

This quickstart demonstrates re-weighting a specific simulated dataset. If you have a different use case, or want more in-depth documentation, check out the comprehensive balance tutorial.

Analysis¶

There are four main steps to analysis with balance:

  • load data
  • check diagnostics before adjustment
  • perform adjustment + check diagnostics
  • output results

Let's dive right in!
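
For orientation, here is a bird's-eye sketch of the whole workflow; every call in it appears in the cells below:

from balance import Sample, load_data

# 1. load data (the toy data shipped with balance)
target_df, sample_df = load_data()
sample = Sample.from_frame(sample_df, outcome_columns=["happiness"])
target = Sample.from_frame(target_df, outcome_columns=["happiness"])
sample_with_target = sample.set_target(target)

# 2. check diagnostics before adjustment
sample_with_target.covars().plot()

# 3. perform adjustment + check diagnostics
adjusted = sample_with_target.adjust()
print(adjusted.summary())

# 4. output results
adjusted.to_download()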

Example dataset¶

The following is a toy simulated dataset.

In [1]:
import warnings
warnings.filterwarnings("ignore")

from balance import load_data
INFO (2025-05-23 17:44:01,893) [__init__/<module> (line 54)]: Using balance version 0.10.0
In [2]:
target_df, sample_df = load_data()

print("target_df: \n", target_df.head())
print("sample_df: \n", sample_df.head())
target_df: 
        id gender age_group     income  happiness
0  100000   Male       45+  10.183951  61.706333
1  100001   Male       45+   6.036858  79.123670
2  100002   Male     35-44   5.226629  44.206949
3  100003    NaN       45+   5.752147  83.985716
4  100004    NaN     25-34   4.837484  49.339713
sample_df: 
   id  gender age_group     income  happiness
0  0    Male     25-34   6.428659  26.043029
1  1  Female     18-24   9.940280  66.885485
2  2    Male     18-24   2.673623  37.091922
3  3     NaN     18-24  10.550308  49.394050
4  4     NaN     18-24   2.689994  72.304208
In [3]:
target_df.head().round(2).to_dict()
# sample_df.shape
Out[3]:
{'id': {0: '100000', 1: '100001', 2: '100002', 3: '100003', 4: '100004'},
 'gender': {0: 'Male', 1: 'Male', 2: 'Male', 3: nan, 4: nan},
 'age_group': {0: '45+', 1: '45+', 2: '35-44', 3: '45+', 4: '25-34'},
 'income': {0: 10.18, 1: 6.04, 2: 5.23, 3: 5.75, 4: 4.84},
 'happiness': {0: 61.71, 1: 79.12, 2: 44.21, 3: 83.99, 4: 49.34}}

In practice, one can use a pandas loading function (such as read_csv()) to import data into the DataFrame objects sample_df and target_df.
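
For example, a minimal sketch (the file names here are illustrative, not part of this tutorial's data):

import pandas as pd

# Hypothetical file names - replace with your own data files:
sample_df = pd.read_csv("my_sample.csv")
target_df = pd.read_csv("my_target.csv")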

Load data into a Sample object¶

The first thing to do is to import the Sample class from balance. All of the data we're going to be working with, sample or population, will be stored in objects of the Sample class.

In [4]:
from balance import Sample

Using the Sample class, we create one object holding the "sample" we want to adjust, and another holding the "target" we want to adjust it towards.

We turn the two input pandas DataFrame objects we created (or loaded) into balance.Sample objects by using the .from_frame() method.

In [5]:
sample = Sample.from_frame(sample_df, outcome_columns=["happiness"])
# Oftentimes we don't have the outcome for the target. In this case we've added it just to validate later that the weights indeed help us reduce the bias
target = Sample.from_frame(target_df, outcome_columns=["happiness"])
WARNING (2025-05-23 17:44:02,207) [util/guess_id_column (line 113)]: Guessed id column name id for the data
WARNING (2025-05-23 17:44:02,216) [sample_class/from_frame (line 261)]: No weights passed. Adding a 'weight' column and setting all values to 1
WARNING (2025-05-23 17:44:02,227) [util/guess_id_column (line 113)]: Guessed id column name id for the data
WARNING (2025-05-23 17:44:02,242) [sample_class/from_frame (line 261)]: No weights passed. Adding a 'weight' column and setting all values to 1
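
The warnings above can be avoided by being explicit. A minimal sketch, assuming the id_column argument of Sample.from_frame is the one the guessing warning refers to (since sample_df has no weight column, balance will still add one for us):

# Naming the id column explicitly (id_column is assumed here to be
# the argument behind the "Guessed id column" warning):
sample = Sample.from_frame(
    sample_df,
    id_column="id",
    outcome_columns=["happiness"],
)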

Using the .df property, we can see the DataFrame stored in sample. Note the new weight column that was added (with all values set to 1) when the DataFrames were imported into balance.Sample objects.

In [6]:
sample.df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 6 columns):
 #   Column     Non-Null Count  Dtype  
---  ------     --------------  -----  
 0   id         1000 non-null   object 
 1   gender     912 non-null    object 
 2   age_group  1000 non-null   object 
 3   income     1000 non-null   float64
 4   happiness  1000 non-null   float64
 5   weight     1000 non-null   int64  
dtypes: float64(2), int64(1), object(3)
memory usage: 47.0+ KB

We can get a quick text overview of each Sample object by just calling it.

Let's take a look at what this produces:

In [7]:
sample
Out[7]:
(balance.sample_class.Sample)

        balance Sample object
        1000 observations x 3 variables: gender,age_group,income
        id_column: id, weight_column: weight,
        outcome_columns: happiness
        
In [8]:
target
Out[8]:
(balance.sample_class.Sample)

        balance Sample object
        10000 observations x 3 variables: gender,age_group,income
        id_column: id, weight_column: weight,
        outcome_columns: happiness
        

Next, we combine the sample object with the target object. This is what will allow us to adjust the sample to the target.

In [9]:
sample_with_target = sample.set_target(target)

Looking at sample_with_target now, it has the target attached:

In [10]:
sample_with_target
Out[10]:
(balance.sample_class.Sample)

        balance Sample object with target set
        1000 observations x 3 variables: gender,age_group,income
        id_column: id, weight_column: weight,
        outcome_columns: happiness
        
            target:
                 
	        balance Sample object
	        10000 observations x 3 variables: gender,age_group,income
	        id_column: id, weight_column: weight,
	        outcome_columns: happiness
	        
            3 common variables: gender,age_group,income
            

Pre-Adjustment Diagnostics¶

We can use .covars() and then follow up with .mean() and .plot() (barplots and kde density plots) to get some basic diagnostics on the covariates.

We can see how:

  • The proportion of missing values in gender is similar in sample and target.
  • We have younger people in the sample as compared to the target.
  • We have more males than females in the sample (64% vs 27%), compared to a roughly 50-50 split in the (non-NA) target.
  • Income is more right-skewed in the target as compared to the sample.
In [11]:
print(sample_with_target.covars().mean().T)
source                     self     target
_is_na_gender[T.True]  0.088000   0.089800
age_group[T.25-34]     0.300000   0.297400
age_group[T.35-44]     0.156000   0.299200
age_group[T.45+]       0.053000   0.206300
gender[Female]         0.268000   0.455100
gender[Male]           0.644000   0.455100
gender[_NA]            0.088000   0.089800
income                 6.297302  12.737608
In [12]:
print(sample_with_target.covars().asmd().T)
source                  self
age_group[T.25-34]  0.005688
age_group[T.35-44]  0.312711
age_group[T.45+]    0.378828
gender[Female]      0.375699
gender[Male]        0.379314
gender[_NA]         0.006296
income              0.494217
mean(asmd)          0.326799
In [13]:
print(sample_with_target.covars().asmd(aggregate_by_main_covar = True).T)
source          self
age_group   0.232409
gender      0.253769
income      0.494217
mean(asmd)  0.326799
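
For intuition: the ASMD of a covariate is the absolute difference between the sample mean and the target mean, scaled by a standard deviation. A minimal sketch for income, assuming the convention of scaling by the target's standard deviation (balance's exact implementation may differ in details such as weighting and NA handling):

# ASMD sketch for the income covariate (scaling convention assumed):
asmd_income = abs(
    sample_df["income"].mean() - target_df["income"].mean()
) / target_df["income"].std()
print(round(asmd_income, 3))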
In [14]:
sample_with_target.covars().plot()

Adjusting Sample to Population¶

Next, we adjust the sample to the target. The default method is 'ipw' (inverse probability/propensity weighting, fitted via logistic regression with LASSO regularization).
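
Other methods can be selected via the method argument. A hedged sketch (method names taken from the balance documentation; availability may vary by version):

# Alternatives to the default ipw (names per the balance docs):
adjusted_cbps = sample_with_target.adjust(method="cbps")  # covariate balancing propensity score
adjusted_rake = sample_with_target.adjust(method="rake")  # iterative raking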

In [15]:
# Using ipw to fit survey weights
adjusted = sample_with_target.adjust()
INFO (2025-05-23 17:44:03,123) [ipw/ipw (line 399)]: Starting ipw function
INFO (2025-05-23 17:44:03,127) [adjustment/apply_transformations (line 305)]: Adding the variables: []
INFO (2025-05-23 17:44:03,128) [adjustment/apply_transformations (line 306)]: Transforming the variables: ['gender', 'age_group', 'income']
INFO (2025-05-23 17:44:03,142) [adjustment/apply_transformations (line 343)]: Final variables in output: ['gender', 'age_group', 'income']
INFO (2025-05-23 17:44:03,153) [ipw/ipw (line 433)]: Building model matrix
INFO (2025-05-23 17:44:03,252) [ipw/ipw (line 455)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2025-05-23 17:44:03,253) [ipw/ipw (line 458)]: The number of columns in the model matrix: 16
INFO (2025-05-23 17:44:03,253) [ipw/ipw (line 459)]: The number of rows in the model matrix: 11000
INFO (2025-05-23 17:44:19,814) [ipw/ipw (line 578)]: Done with sklearn
INFO (2025-05-23 17:44:19,814) [ipw/ipw (line 584)]: max_de: None
INFO (2025-05-23 17:44:19,815) [ipw/ipw (line 605)]: Starting model selection
INFO (2025-05-23 17:44:19,818) [ipw/ipw (line 638)]: Chosen lambda: 0.041158338186664825
INFO (2025-05-23 17:44:19,819) [ipw/ipw (line 654)]: Proportion null deviance explained 0.17265121909892267
In [16]:
print(adjusted)
        Adjusted balance Sample object with target set using ipw
        1000 observations x 3 variables: gender,age_group,income
        id_column: id, weight_column: weight,
        outcome_columns: happiness
        
            target:
                 
	        balance Sample object
	        10000 observations x 3 variables: gender,age_group,income
	        id_column: id, weight_column: weight,
	        outcome_columns: happiness
	        
            3 common variables: gender,age_group,income
            

Evaluation of the Results¶

We can get a basic summary of the results:

In [17]:
print(adjusted.summary())
Covar ASMD reduction: 63.4%, design effect: 1.880
Covar ASMD (7 variables): 0.327 -> 0.119
Model performance: Model proportion deviance explained: 0.173
In [18]:
print(adjusted.covars().mean().T)
source                      self     target  unadjusted
_is_na_gender[T.True]   0.086866   0.089800    0.088000
age_group[T.25-34]      0.307309   0.297400    0.300000
age_group[T.35-44]      0.273676   0.299200    0.156000
age_group[T.45+]        0.137604   0.206300    0.053000
gender[Female]          0.406342   0.455100    0.268000
gender[Male]            0.506792   0.455100    0.644000
gender[_NA]             0.086866   0.089800    0.088000
income                 10.060502  12.737608    6.297302

We see an improvement in the average ASMD. We can look at the detailed list of ASMD values per variable using the following call.

In [19]:
print(adjusted.covars().asmd().T)
source                  self  unadjusted  unadjusted - self
age_group[T.25-34]  0.021676    0.005688          -0.015988
age_group[T.35-44]  0.055738    0.312711           0.256973
age_group[T.45+]    0.169759    0.378828           0.209069
gender[Female]      0.097907    0.375699           0.277792
gender[Male]        0.103798    0.379314           0.275516
gender[_NA]         0.010260    0.006296          -0.003965
income              0.205436    0.494217           0.288781
mean(asmd)          0.119494    0.326799           0.207304

It's easiest to learn about the remaining biases by simply running .covars().plot() on our adjusted object.

In [20]:
adjusted.covars().plot()  # you could change sizes using something like .plot(width = 1500, height = 700)

We can also produce different plots using the seaborn library, for example with the "kde" dist_type.

In [21]:
# This shows how we could use seaborn to plot a kernel density estimation
adjusted.covars().plot(library = "seaborn", dist_type = "kde")

Understanding the weights¶

We can look at the distribution of weights using the following call.

In [22]:
adjusted.weights().plot()

And get many summary statistics, including the design effect, effective sample size (ESS), various quantiles, and more, using:

In [23]:
# adjusted.weights().design_effect()
print(adjusted.weights().summary().round(2))
                                var       val
0                     design_effect      1.88
1       effective_sample_proportion      0.53
2             effective_sample_size    531.79
3                               sum  10000.00
4                    describe_count   1000.00
5                     describe_mean      1.00
6                      describe_std      0.94
7                      describe_min      0.30
8                      describe_25%      0.45
9                      describe_50%      0.65
10                     describe_75%      1.17
11                     describe_max     11.36
12                    prop(w < 0.1)      0.00
13                    prop(w < 0.2)      0.00
14                  prop(w < 0.333)      0.11
15                    prop(w < 0.5)      0.32
16                      prop(w < 1)      0.67
17                     prop(w >= 1)      0.33
18                     prop(w >= 2)      0.10
19                     prop(w >= 3)      0.03
20                     prop(w >= 5)      0.01
21                    prop(w >= 10)      0.00
22               nonparametric_skew      0.37
23  weighted_median_breakdown_point      0.21
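
For reference, the design effect above is Kish's formula, n * sum(w^2) / (sum(w))^2, which equals 1 + CV(w)^2 (consistent with the weight mean of 1.00 and std of 0.94 in the table). A minimal sketch, assuming the .df accessor on the weights object exposes the weight column:

import numpy as np

# Kish's design effect computed directly from the weights:
w = adjusted.weights().df["weight"]
deff = len(w) * np.sum(w**2) / np.sum(w) ** 2
print(round(deff, 2))  # should be close to the design_effect above (~1.88)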

Outcome analysis¶

In [24]:
# As we can see, the unadjusted estimate is almost 8 points below the true value, and its CI is far from covering it.
# After adjustment, the estimate is around 3 points away - much closer, even though some bias remains.
print(adjusted.outcomes().summary())
1 outcomes: ['happiness']
Mean outcomes (with 95% confidence intervals):
source       self  target  unadjusted           self_ci         target_ci     unadjusted_ci
happiness  53.297  56.278      48.559  (52.097, 54.496)  (55.961, 56.595)  (47.669, 49.449)

Response rates (relative to number of respondents in sample):
   happiness
n     1000.0
%      100.0
Response rates (relative to notnull rows in the target):
    happiness
n     1000.0
%       10.0
Response rates (in the target):
    happiness
n    10000.0
%      100.0
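
The adjusted mean above is simply the weighted average of the outcome. A minimal sketch reproducing it, using the .df accessor shown earlier (which includes the fitted weight column):

import numpy as np

# Weighted mean of happiness under the fitted weights:
df = adjusted.df
print(round(np.average(df["happiness"], weights=df["weight"]), 3))  # ~53.297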

The estimated mean happiness according to our sample is around 48.6 without any adjustment and around 53.3 with adjustment. The following shows the distribution of happiness:

In [25]:
adjusted.outcomes().plot()

Downloading data¶

Finally, we can prepare the data to be downloaded for future analyses.

In [26]:
adjusted.to_download()
Out[26]:
Click here to download: /tmp/tmp_balance_out_0e641fa7-7850-42de-8302-a63b89244db4.csv
In [27]:
# We can prepare the data to be exported as CSV - showing the first 500 characters for simplicity:
adjusted.to_csv()[0:500]
Out[27]:
'id,gender,age_group,income,happiness,weight\n0,Male,25-34,6.428659499046228,26.043028759747298,6.52832077256206\n1,Female,18-24,9.940280228116047,66.88548460632677,9.615962486362896\n2,Male,18-24,2.6736231547518043,37.091921916683006,3.5613441674585165\n3,,18-24,10.550307519418066,49.39405003271002,6.958765976140972\n4,,18-24,2.689993854299385,72.30420755038209,5.1335477016020254\n5,,35-44,5.995497722733131,57.28281646341816,16.44496201550232\n6,,18-24,12.63469573898972,31.663293445944596,8.19816512783'
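
To write directly to disk instead, a minimal sketch (assuming to_csv mirrors the pandas API in accepting a file path; the file name is illustrative):

# Hypothetical output path:
adjusted.to_csv("adjusted_sample.csv")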
In [28]:
# Sessions info
import session_info
session_info.show(html=False, dependencies=True)
-----
balance             0.10.0
pandas              2.0.3
session_info        v1.0.1
-----
PIL                         11.2.1
anyio                       NA
arrow                       1.3.0
asttokens                   NA
attr                        25.3.0
attrs                       25.3.0
babel                       2.17.0
certifi                     2025.04.26
charset_normalizer          3.4.2
comm                        0.2.2
cycler                      0.12.1
cython_runtime              NA
dateutil                    2.9.0.post0
debugpy                     1.8.14
decorator                   5.2.1
defusedxml                  0.7.1
exceptiongroup              1.3.0
executing                   2.2.0
fastjsonschema              NA
fqdn                        NA
idna                        3.10
importlib_metadata          NA
importlib_resources         NA
ipfn                        NA
ipykernel                   6.29.5
isoduration                 NA
jedi                        0.19.2
jinja2                      3.1.6
joblib                      1.5.1
json5                       0.12.0
jsonpointer                 3.0.0
jsonschema                  4.23.0
jsonschema_specifications   NA
jupyter_events              0.12.0
jupyter_server              2.16.0
jupyterlab_server           2.27.3
kiwisolver                  1.4.7
markupsafe                  3.0.2
matplotlib                  3.9.4
matplotlib_inline           0.1.7
mpl_toolkits                NA
narwhals                    1.40.0
nbformat                    5.10.4
numpy                       1.26.4
overrides                   NA
packaging                   25.0
parso                       0.8.4
patsy                       1.0.1
pexpect                     4.9.0
platformdirs                4.3.8
plotly                      6.1.1
prometheus_client           NA
prompt_toolkit              3.0.51
psutil                      7.0.0
ptyprocess                  0.7.0
pure_eval                   0.2.3
pydev_ipython               NA
pydevconsole                NA
pydevd                      3.2.3
pydevd_file_utils           NA
pydevd_plugins              NA
pydevd_tracing              NA
pygments                    2.19.1
pyparsing                   3.2.3
pythonjsonlogger            NA
pytz                        2025.2
referencing                 NA
requests                    2.32.3
rfc3339_validator           0.1.4
rfc3986_validator           0.1.1
rpds                        NA
scipy                       1.10.1
seaborn                     0.13.2
send2trash                  NA
six                         1.17.0
sklearn                     1.2.2
sniffio                     1.3.1
sphinxcontrib               NA
stack_data                  0.6.3
statsmodels                 0.14.4
threadpoolctl               3.6.0
tornado                     6.5.1
traitlets                   5.14.3
typing_extensions           NA
uri_template                NA
urllib3                     2.4.0
wcwidth                     0.2.13
webcolors                   NA
websocket                   1.8.0
yaml                        6.0.2
zipp                        NA
zmq                         26.4.0
zoneinfo                    NA
-----
IPython             8.18.1
jupyter_client      8.6.3
jupyter_core        5.7.2
jupyterlab          4.4.2
notebook            7.4.2
-----
Python 3.9.22 (main, Apr  8 2025, 21:45:32) [GCC 13.3.0]
Linux-6.11.0-1014-azure-x86_64-with-glibc2.39
-----
Session information updated at 2025-05-23 17:44