balance Quickstart: Analyzing and adjusting the bias on a simulated toy dataset¶

'balance' is a Python package maintained and released by the Core Data Science Tel-Aviv team at Meta. 'balance' performs and evaluates bias reduction through weighting, for a broad set of experimental and observational use cases.

Although balance is written in Python, you don't need a deep understanding of Python to use it. In fact, you can take this notebook, load your own data, change a few variables, and re-run it to produce your own weights!

This quickstart demonstrates re-weighting a specific simulated dataset. If you have a different use case or want more comprehensive documentation, check out the full balance tutorial.

Analysis¶

There are four main steps to analysis with balance:

  • load data
  • check diagnostics before adjustment
  • perform adjustment + check diagnostics
  • output results

Let's dive right in!

Example dataset¶

The following is a toy simulated dataset.

In [1]:
%matplotlib inline

import plotly.offline as offline
offline.init_notebook_mode()

import warnings
warnings.filterwarnings("ignore")

from balance import load_data
INFO (2025-12-25 09:18:13,437) [__init__/<module> (line 72)]: Using balance version 0.14.0
balance (Version 0.14.0) loaded:
    📖 Documentation: https://import-balance.org/
    🛠️ Help / Issues: https://github.com/facebookresearch/balance/issues/
    📄 Citation:
        Sarig, T., Galili, T., & Eilat, R. (2023).
        balance - a Python package for balancing biased data samples.
        https://arxiv.org/abs/2307.06024

    Tip: You can view this message anytime with balance.help()

In [2]:
target_df, sample_df = load_data()

print("target_df: \n", target_df.head())
print("sample_df: \n", sample_df.head())
target_df: 
        id gender age_group     income  happiness
0  100000   Male       45+  10.183951  61.706333
1  100001   Male       45+   6.036858  79.123670
2  100002   Male     35-44   5.226629  44.206949
3  100003    NaN       45+   5.752147  83.985716
4  100004    NaN     25-34   4.837484  49.339713
sample_df: 
   id  gender age_group     income  happiness
0  0    Male     25-34   6.428659  26.043029
1  1  Female     18-24   9.940280  66.885485
2  2    Male     18-24   2.673623  37.091922
3  3     NaN     18-24  10.550308  49.394050
4  4     NaN     18-24   2.689994  72.304208
In [3]:
target_df.head().round(2).to_dict()
# sample_df.shape
Out[3]:
{'id': {0: '100000', 1: '100001', 2: '100002', 3: '100003', 4: '100004'},
 'gender': {0: 'Male', 1: 'Male', 2: 'Male', 3: nan, 4: nan},
 'age_group': {0: '45+', 1: '45+', 2: '35-44', 3: '45+', 4: '25-34'},
 'income': {0: 10.18, 1: 6.04, 2: 5.23, 3: 5.75, 4: 4.84},
 'happiness': {0: 61.71, 1: 79.12, 2: 44.21, 3: 83.99, 4: 49.34}}

In practice, one can use a pandas loading function (such as read_csv()) to import data into the DataFrame objects sample_df and target_df.
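
For example, a minimal sketch of loading your own data (the file names below are hypothetical placeholders, not files shipped with balance):

import pandas as pd

# Hypothetical input files - replace with the paths to your own data
sample_df = pd.read_csv("my_respondents.csv")         # the (possibly biased) sample
target_df = pd.read_csv("my_target_population.csv")   # the target population / reference data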

Load data into a Sample object¶

The first thing to do is to import the Sample class from balance. All of the data we're going to be working with, sample or population, will be stored in objects of the Sample class.

In [4]:
from balance import Sample

Using the Sample class, we can store both the "sample" we want to adjust and the "target" we want to adjust towards.

We turn the two input pandas DataFrame objects we created (or loaded) into balance.Sample objects by using the .from_frame() method.

In [5]:
sample = Sample.from_frame(sample_df, outcome_columns=["happiness"])
# Often we don't have the outcome for the target. In this case we've added it just to validate later that the weights indeed help us reduce the bias
target = Sample.from_frame(target_df, outcome_columns=["happiness"])
WARNING (2025-12-25 09:18:13,698) [util/guess_id_column (line 346)]: Guessed id column name id for the data
WARNING (2025-12-25 09:18:13,706) [sample_class/from_frame (line 508)]: No weights passed. Adding a 'weight' column and setting all values to 1
WARNING (2025-12-25 09:18:13,714) [util/guess_id_column (line 346)]: Guessed id column name id for the data
WARNING (2025-12-25 09:18:13,728) [sample_class/from_frame (line 508)]: No weights passed. Adding a 'weight' column and setting all values to 1
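
The warnings above tell us that the id column was guessed from the data and that, since no weights were passed, a weight column of 1s was added automatically. If you prefer to be explicit about the id column, from_frame() also accepts (as far as we understand the API) an id_column argument; a minimal sketch:

# Hedged sketch: explicitly naming the id column (assuming from_frame accepts id_column)
sample_explicit = Sample.from_frame(
    sample_df,
    id_column="id",
    outcome_columns=["happiness"],
)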

If we use the .df property, we can see the DataFrame stored in sample. Note the new weight column (all 1s for now) that was added when the DataFrame was imported into a balance.Sample object.

In [6]:
sample.df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 6 columns):
 #   Column     Non-Null Count  Dtype  
---  ------     --------------  -----  
 0   id         1000 non-null   object 
 1   gender     912 non-null    object 
 2   age_group  1000 non-null   object 
 3   income     1000 non-null   float64
 4   happiness  1000 non-null   float64
 5   weight     1000 non-null   float64
dtypes: float64(3), object(3)
memory usage: 47.0+ KB

We can get a quick text overview of each Sample object by just calling it.

Let's take a look at what this produces:

In [7]:
sample
Out[7]:
(balance.sample_class.Sample)

        balance Sample object
        1000 observations x 3 variables: gender,age_group,income
        id_column: id, weight_column: weight,
        outcome_columns: happiness
        
In [8]:
target
Out[8]:
(balance.sample_class.Sample)

        balance Sample object
        10000 observations x 3 variables: gender,age_group,income
        id_column: id, weight_column: weight,
        outcome_columns: happiness
        

Next, we combine the sample object with the target object. This is what will allow us to adjust the sample to the target.

In [9]:
sample_with_target = sample.set_target(target)

Looking at sample_with_target now, we can see that it has the target attached:

In [10]:
sample_with_target
Out[10]:
(balance.sample_class.Sample)

        balance Sample object with target set
        1000 observations x 3 variables: gender,age_group,income
        id_column: id, weight_column: weight,
        outcome_columns: happiness
        
            target:
                 
	        balance Sample object
	        10000 observations x 3 variables: gender,age_group,income
	        id_column: id, weight_column: weight,
	        outcome_columns: happiness
	        
            3 common variables: gender,age_group,income
            

Pre-Adjustment Diagnostics¶

We can use .covars() and then follow up with .mean() and .plot() (barplots and kde density plots) to get some basic diagnostics on what we have.

We can see how:

  • The proportion of missing values in gender is similar in sample and target.
  • We have younger people in the sample as compared to the target.
  • We have more males than females in the sample, as compared to a roughly 50-50 split in the (non-NA) target.
  • Income is more right-skewed in the target as compared to the sample.
In [11]:
print(sample_with_target.covars().mean().T)
source                     self     target
_is_na_gender[T.True]  0.088000   0.089800
age_group[T.25-34]     0.300000   0.297400
age_group[T.35-44]     0.156000   0.299200
age_group[T.45+]       0.053000   0.206300
gender[Female]         0.268000   0.455100
gender[Male]           0.644000   0.455100
gender[_NA]            0.088000   0.089800
income                 6.297302  12.737608
In [12]:
print(sample_with_target.covars().asmd().T)
source                  self
age_group[T.25-34]  0.005688
age_group[T.35-44]  0.312711
age_group[T.45+]    0.378828
gender[Female]      0.375699
gender[Male]        0.379314
gender[_NA]         0.006296
income              0.494217
mean(asmd)          0.326799

In [13]:
print(sample_with_target.covars().asmd(aggregate_by_main_covar = True).T)
source          self
age_group   0.232409
gender      0.253769
income      0.494217
mean(asmd)  0.326799
In [14]:
sample_with_target.covars().plot()

Adjusting Sample to Population¶

Next, we adjust the sample to the target. The default method is 'ipw', which fits inverse probability (propensity) weights using logistic regression with LASSO regularization.
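
Before running it, note that adjust() also accepts a method argument for other weighting approaches; the sketch below is illustrative and assumes the method names 'cbps' and 'rake' are supported by your balance version:

# Hedged sketch: alternative adjustment methods via the method argument
# adjusted_cbps = sample_with_target.adjust(method="cbps")  # Covariate Balancing Propensity Score
# adjusted_rake = sample_with_target.adjust(method="rake")  # raking (iterative proportional fitting)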

In [15]:
# Using ipw to fit survey weights
adjusted = sample_with_target.adjust()
INFO (2025-12-25 09:18:14,419) [ipw/ipw (line 622)]: Starting ipw function
INFO (2025-12-25 09:18:14,422) [adjustment/apply_transformations (line 469)]: Adding the variables: []
INFO (2025-12-25 09:18:14,423) [adjustment/apply_transformations (line 470)]: Transforming the variables: ['gender', 'age_group', 'income']
INFO (2025-12-25 09:18:14,438) [adjustment/apply_transformations (line 507)]: Final variables in output: ['gender', 'age_group', 'income']
INFO (2025-12-25 09:18:14,449) [ipw/ipw (line 656)]: Building model matrix
INFO (2025-12-25 09:18:14,545) [ipw/ipw (line 678)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2025-12-25 09:18:14,545) [ipw/ipw (line 681)]: The number of columns in the model matrix: 16
INFO (2025-12-25 09:18:14,546) [ipw/ipw (line 682)]: The number of rows in the model matrix: 11000
INFO (2025-12-25 09:18:31,291) [ipw/ipw (line 843)]: Done with sklearn
INFO (2025-12-25 09:18:31,292) [ipw/ipw (line 845)]: max_de: None
INFO (2025-12-25 09:18:31,293) [ipw/ipw (line 867)]: Starting model selection
INFO (2025-12-25 09:18:31,296) [ipw/ipw (line 900)]: Chosen lambda: 0.041158338186664825
INFO (2025-12-25 09:18:31,297) [ipw/ipw (line 918)]: Proportion null deviance explained 0.17265121909892267
In [16]:
print(adjusted)
        Adjusted balance Sample object with target set using ipw
        1000 observations x 3 variables: gender,age_group,income
        id_column: id, weight_column: weight,
        outcome_columns: happiness
        
        adjustment details:
            method: ipw
            weight trimming mean ratio: 20
            design effect (Deff): 1.880
            effective sample size proportion (ESSP): 0.532
            effective sample size (ESS): 531.8
                
            target:
                 
	        balance Sample object
	        10000 observations x 3 variables: gender,age_group,income
	        id_column: id, weight_column: weight,
	        outcome_columns: happiness
	        
            3 common variables: gender,age_group,income
            

Evaluation of the Results¶

We can get a basic summary of the results:

In [17]:
print(adjusted.summary())
Adjustment details:
    method: ipw
    weight trimming mean ratio: 20
Covariate diagnostics:
    Covar ASMD reduction: 63.4%
    Covar ASMD (7 variables): 0.327 -> 0.119
    Covar mean KLD reduction: 95.3%
    Covar mean KLD (3 variables): 0.071 -> 0.003
Weight diagnostics:
    design effect (Deff): 1.880
    effective sample size proportion (ESSP): 0.532
    effective sample size (ESS): 531.8
Outcome weighted means:
            happiness
source               
self           53.297
target         56.278
unadjusted     48.559
Model performance: Model proportion deviance explained: 0.173
In [18]:
print(adjusted.covars().mean().T)
source                      self     target  unadjusted
_is_na_gender[T.True]   0.086866   0.089800    0.088000
age_group[T.25-34]      0.307309   0.297400    0.300000
age_group[T.35-44]      0.273676   0.299200    0.156000
age_group[T.45+]        0.137604   0.206300    0.053000
gender[Female]          0.406342   0.455100    0.268000
gender[Male]            0.506792   0.455100    0.644000
gender[_NA]             0.086866   0.089800    0.088000
income                 10.060502  12.737608    6.297302

We see an improvement in the average ASMD. We can look at a detailed list of ASMD values per variable using the following call.

In [19]:
print(adjusted.covars().asmd().T)
source                  self  unadjusted  unadjusted - self
age_group[T.25-34]  0.021676    0.005688          -0.015988
age_group[T.35-44]  0.055738    0.312711           0.256973
age_group[T.45+]    0.169759    0.378828           0.209069
gender[Female]      0.097907    0.375699           0.277792
gender[Male]        0.103798    0.379314           0.275516
gender[_NA]         0.010260    0.006296          -0.003965
income              0.205436    0.494217           0.288781
mean(asmd)          0.119494    0.326799           0.207304
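
As a rough intuition for these numbers: ASMD compares the mean of each covariate in the sample to its mean in the target, scaled by a standard deviation (balance, to our understanding, scales by the target's standard deviation). A minimal sketch for the unadjusted income column, under that assumption:

# Hedged sketch: hand-computed standardized mean difference for income
# (unadjusted sample vs. target), assuming the target's std is the scale
diff = abs(sample_df["income"].mean() - target_df["income"].mean())
print(round(diff / target_df["income"].std(), 3))  # compare with the unadjusted 'income' row above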
In [20]:
print(adjusted.covars().kld().T)
source                  self  unadjusted  unadjusted - self
age_group[T.25-34]  0.000233    0.000016          -0.000217
age_group[T.35-44]  0.001580    0.055329           0.053749
age_group[T.45+]    0.015864    0.095205           0.079341
gender[Female]      0.004830    0.074156           0.069327
gender[Male]        0.005364    0.072046           0.066682
gender[_NA]         0.000053    0.000020          -0.000033
income              0.000773    0.114895           0.114122
mean(kld)           0.003360    0.071273           0.067913

We can also use KL divergence to summarize how far the sample covariates are from the target distribution, across both numeric and categorical variables. The call below aggregates over the one-hot encoded categories (via aggregate_by_main_covar=True) and compares the adjusted sample to the original unadjusted sample.

In [21]:
print(adjusted.covars().kld(aggregate_by_main_covar=True).T)
source         self  unadjusted  unadjusted - self
age_group  0.005893    0.050183           0.044291
gender     0.003416    0.048741           0.045325
income     0.000773    0.114895           0.114122
mean(kld)  0.003360    0.071273           0.067913

It's easier to see the remaining biases by simply running .covars().plot() on our adjusted object.

In [22]:
adjusted.covars().plot()  # you could change sizes using something like .plot(width = 1500, height = 700)

We can also produce different plots using the seaborn library, for example with the "kde" dist_type.

In [23]:
# This shows how we could use seaborn to plot a kernel density estimation
adjusted.covars().plot(library = "seaborn", dist_type = "kde")

Understanding the weights¶

We can look at the distribution of weights using the following call.

In [24]:
adjusted.weights().plot()

We can also get many summary statistics, including the design effect, effective sample size (ESS), various quantiles, and more, using:

In [25]:
# adjusted.weights().design_effect()
print(adjusted.weights().summary().round(2))
                                var       val
0                     design_effect      1.88
1       effective_sample_proportion      0.53
2             effective_sample_size    531.79
3                               sum  10000.00
4                    describe_count   1000.00
5                     describe_mean      1.00
6                      describe_std      0.94
7                      describe_min      0.30
8                      describe_25%      0.45
9                      describe_50%      0.65
10                     describe_75%      1.17
11                     describe_max     11.36
12                    prop(w < 0.1)      0.00
13                    prop(w < 0.2)      0.00
14                  prop(w < 0.333)      0.11
15                    prop(w < 0.5)      0.32
16                      prop(w < 1)      0.67
17                     prop(w >= 1)      0.33
18                     prop(w >= 2)      0.10
19                     prop(w >= 3)      0.03
20                     prop(w >= 5)      0.01
21                    prop(w >= 10)      0.00
22               nonparametric_skew      0.37
23  weighted_median_breakdown_point      0.21
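
To connect the design effect and ESS numbers to the weights themselves, here is a minimal sketch that computes Kish's approximate design effect and effective sample size directly with numpy (assuming, as we saw for sample.df earlier, that the fitted weights live in the weight column of adjusted.df):

import numpy as np

w = adjusted.df["weight"].to_numpy()  # assumption: fitted weights are stored in the 'weight' column
n = len(w)

deff = n * np.sum(w**2) / np.sum(w) ** 2  # Kish's design effect: 1 + CV^2 of the weights
ess = np.sum(w) ** 2 / np.sum(w**2)       # effective sample size = n / deff

print(round(deff, 3), round(ess, 1))  # should be close to the design_effect and effective_sample_size above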

Outcome analysis¶

In [26]:
# Comparing against the true (target) mean of ~56.3: the unadjusted estimate (~48.6) is off by roughly 7.7 points, while the adjusted estimate (~53.3) is off by only about 3 points.
# Note that in this run neither 95% CI covers the target mean, but the adjustment substantially reduces the bias.
print(adjusted.outcomes().summary())
1 outcomes: ['happiness']
Mean outcomes (with 95% confidence intervals):
source       self  target  unadjusted           self_ci         target_ci     unadjusted_ci
happiness  53.297  56.278      48.559  (52.097, 54.496)  (55.961, 56.595)  (47.669, 49.449)

Response rates (relative to number of respondents in sample):
   happiness
n     1000.0
%      100.0
Response rates (relative to notnull rows in the target):
    happiness
n     1000.0
%       10.0
Response rates (in the target):
    happiness
n    10000.0
%      100.0
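
To see what these weighted means are doing under the hood, here is a minimal sketch that reproduces the unadjusted and adjusted happiness means with numpy (again assuming the fitted weights are in the weight column of adjusted.df):

import numpy as np

happiness = adjusted.df["happiness"].to_numpy()
w = adjusted.df["weight"].to_numpy()  # assumption: fitted weights in the 'weight' column

print(round(happiness.mean(), 3))                  # unadjusted mean, ~48.6
print(round(np.average(happiness, weights=w), 3))  # weighted (adjusted) mean, ~53.3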

The estimated mean happiness according to our sample is around 48.6 without any adjustment and around 53.3 with adjustment (compared to 56.3 in the target). The following plot shows the distribution of happiness:

In [27]:
adjusted.outcomes().plot()

Comparing Adjustment Methods¶

This section demonstrates how to compare different adjustment methods using balance. We'll compare the default logistic regression method with a Random Forest classifier to see how they affect covariate balance.

Both methods aim to reduce bias by creating weights, but they may perform differently depending on your data and use case.

In [28]:
from sklearn.ensemble import RandomForestClassifier

Adjust with Default Method (Logistic Regression)¶

First, let's adjust using the default IPW method with logistic regression:

In [29]:
# Adjust using default method (IPW with logistic regression)
adjusted_default = sample_with_target.adjust()
print(adjusted_default.summary())
INFO (2025-12-25 09:18:35,124) [ipw/ipw (line 622)]: Starting ipw function
INFO (2025-12-25 09:18:35,127) [adjustment/apply_transformations (line 469)]: Adding the variables: []
INFO (2025-12-25 09:18:35,128) [adjustment/apply_transformations (line 470)]: Transforming the variables: ['gender', 'age_group', 'income']
INFO (2025-12-25 09:18:35,137) [adjustment/apply_transformations (line 507)]: Final variables in output: ['gender', 'age_group', 'income']
INFO (2025-12-25 09:18:35,145) [ipw/ipw (line 656)]: Building model matrix
INFO (2025-12-25 09:18:35,241) [ipw/ipw (line 678)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2025-12-25 09:18:35,242) [ipw/ipw (line 681)]: The number of columns in the model matrix: 16
INFO (2025-12-25 09:18:35,242) [ipw/ipw (line 682)]: The number of rows in the model matrix: 11000
INFO (2025-12-25 09:18:51,907) [ipw/ipw (line 843)]: Done with sklearn
INFO (2025-12-25 09:18:51,908) [ipw/ipw (line 845)]: max_de: None
INFO (2025-12-25 09:18:51,908) [ipw/ipw (line 867)]: Starting model selection
INFO (2025-12-25 09:18:51,912) [ipw/ipw (line 900)]: Chosen lambda: 0.041158338186664825
INFO (2025-12-25 09:18:51,912) [ipw/ipw (line 918)]: Proportion null deviance explained 0.17265121909892267
Adjustment details:
    method: ipw
    weight trimming mean ratio: 20
Covariate diagnostics:
    Covar ASMD reduction: 63.4%
    Covar ASMD (7 variables): 0.327 -> 0.119
    Covar mean KLD reduction: 95.3%
    Covar mean KLD (3 variables): 0.071 -> 0.003
Weight diagnostics:
    design effect (Deff): 1.880
    effective sample size proportion (ESSP): 0.532
    effective sample size (ESS): 531.8
Outcome weighted means:
            happiness
source               
self           53.297
target         56.278
unadjusted     48.559
Model performance: Model proportion deviance explained: 0.173

Adjust with Random Forest Classifier¶

Now let's adjust using a Random Forest classifier, which can capture non-linear relationships:

In [30]:
# Adjust using Random Forest classifier
rf = RandomForestClassifier(n_estimators=200, random_state=0)
adjusted_rf = sample_with_target.adjust(model=rf)
print(adjusted_rf.summary())
INFO (2025-12-25 09:18:52,702) [ipw/ipw (line 622)]: Starting ipw function
INFO (2025-12-25 09:18:52,704) [adjustment/apply_transformations (line 469)]: Adding the variables: []
INFO (2025-12-25 09:18:52,705) [adjustment/apply_transformations (line 470)]: Transforming the variables: ['gender', 'age_group', 'income']
INFO (2025-12-25 09:18:52,714) [adjustment/apply_transformations (line 507)]: Final variables in output: ['gender', 'age_group', 'income']
INFO (2025-12-25 09:18:52,723) [ipw/ipw (line 656)]: Building model matrix
INFO (2025-12-25 09:18:52,819) [ipw/ipw (line 678)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2025-12-25 09:18:52,819) [ipw/ipw (line 681)]: The number of columns in the model matrix: 16
INFO (2025-12-25 09:18:52,820) [ipw/ipw (line 682)]: The number of rows in the model matrix: 11000
INFO (2025-12-25 09:18:53,690) [ipw/ipw (line 843)]: Done with sklearn
INFO (2025-12-25 09:18:53,691) [ipw/ipw (line 845)]: max_de: None
INFO (2025-12-25 09:18:53,694) [ipw/ipw (line 900)]: Chosen lambda: nan
INFO (2025-12-25 09:18:53,694) [ipw/ipw (line 918)]: Proportion null deviance explained 0.2083703232850005
Adjustment details:
    method: ipw
    weight trimming mean ratio: 20
Covariate diagnostics:
    Covar ASMD reduction: 72.1%
    Covar ASMD (7 variables): 0.327 -> 0.091
    Covar mean KLD reduction: 97.1%
    Covar mean KLD (3 variables): 0.071 -> 0.002
Weight diagnostics:
    design effect (Deff): 2.980
    effective sample size proportion (ESSP): 0.336
    effective sample size (ESS): 335.5
Outcome weighted means:
            happiness
source               
self           54.378
target         56.278
unadjusted     48.559
Model performance: Model proportion deviance explained: 0.208

Compare ASMD (Absolute Standardized Mean Difference)¶

Let's compare the covariate balance achieved by each method using ASMD tables:

In [31]:
print("\n=== Default Method ASMD ===")
print(adjusted_default.covars().asmd().T)

print("\n=== Random Forest ASMD ===")
print(adjusted_rf.covars().asmd().T)
=== Default Method ASMD ===
source                  self  unadjusted  unadjusted - self
age_group[T.25-34]  0.021676    0.005688          -0.015988
age_group[T.35-44]  0.055738    0.312711           0.256973
age_group[T.45+]    0.169759    0.378828           0.209069
gender[Female]      0.097907    0.375699           0.277792
gender[Male]        0.103798    0.379314           0.275516
gender[_NA]         0.010260    0.006296          -0.003965
income              0.205436    0.494217           0.288781
mean(asmd)          0.119494    0.326799           0.207304

=== Random Forest ASMD ===
source                  self  unadjusted  unadjusted - self
age_group[T.25-34]  0.074491    0.005688          -0.068804
age_group[T.35-44]  0.022383    0.312711           0.290328
age_group[T.45+]    0.145628    0.378828           0.233201
gender[Female]      0.037700    0.375699           0.337999
gender[Male]        0.067392    0.379314           0.311922
gender[_NA]         0.051718    0.006296          -0.045422
income              0.140655    0.494217           0.353562
mean(asmd)          0.091253    0.326799           0.235546

Interpreting the Results¶

Both methods produce adjusted weights that reduce bias. You can compare:

  • mean(asmd): Lower values indicate better overall covariate balance
  • Individual covariates: Check which method better balances specific variables
  • Design effect: Reported in the summary output; a higher design effect means a smaller effective sample size

In this example, the Random Forest achieves a lower mean ASMD (0.091 vs. 0.119) but a higher design effect (2.98 vs. 1.88), i.e., better covariate balance at the cost of a smaller effective sample size. The choice between methods depends on your data characteristics and whether non-linear relationships matter for the propensity model.
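
For example, a minimal sketch of putting the key diagnostics side by side (design_effect() is the same call that appears, commented out, in the weights cell earlier; the table layout itself is just illustrative):

import pandas as pd

# Hedged sketch: tabulate the balance-vs-variance trade-off for the two fits
comparison = pd.DataFrame(
    {
        "default_ipw": {
            "mean_asmd": adjusted_default.covars().asmd().T.loc["mean(asmd)", "self"],
            "design_effect": adjusted_default.weights().design_effect(),
        },
        "random_forest": {
            "mean_asmd": adjusted_rf.covars().asmd().T.loc["mean(asmd)", "self"],
            "design_effect": adjusted_rf.weights().design_effect(),
        },
    }
)
print(comparison.round(3))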

Downloading data¶

Finally, we can prepare the data to be downloaded for future analyses.

In [32]:
adjusted.to_download()
Out[32]:
Click here to download: /tmp/tmp_balance_out_f4ee7bab-52cb-4eff-944d-ab01a2b0e430.csv
In [33]:
# We can also export the data as a CSV string - showing the first 500 characters for simplicity:
adjusted.to_csv()[0:500]
Out[33]:
'id,gender,age_group,income,happiness,weight\n0,Male,25-34,6.428659499046228,26.043028759747298,6.52832077256206\n1,Female,18-24,9.940280228116047,66.88548460632677,9.615962486362896\n2,Male,18-24,2.6736231547518043,37.091921916683006,3.5613441674585165\n3,,18-24,10.550307519418066,49.39405003271002,6.958765976140972\n4,,18-24,2.689993854299385,72.30420755038209,5.1335477016020254\n5,,35-44,5.995497722733131,57.28281646341816,16.44496201550232\n6,,18-24,12.63469573898972,31.663293445944596,8.19816512783'
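
If you want the adjusted data as a local file rather than a download link, a minimal sketch is to write the string returned by to_csv() yourself (the file name is just a placeholder):

# Hedged sketch: save the adjusted sample, including its weights, to a local CSV file
with open("adjusted_sample.csv", "w") as f:
    f.write(adjusted.to_csv())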
In [34]:
# Session info
import session_info
session_info.show(html=False, dependencies=True)
-----
balance             0.14.0
pandas              2.3.3
plotly              6.5.0
session_info        v1.0.1
sklearn             1.3.2
-----
PIL                         11.3.0
anyio                       NA
arrow                       1.4.0
asttokens                   NA
attr                        25.4.0
attrs                       25.4.0
babel                       2.17.0
certifi                     2025.11.12
charset_normalizer          3.4.4
comm                        0.2.3
cycler                      0.12.1
cython_runtime              NA
dateutil                    2.9.0.post0
debugpy                     1.8.19
decorator                   5.2.1
defusedxml                  0.7.1
exceptiongroup              1.3.1
executing                   2.2.1
fastjsonschema              NA
fqdn                        NA
idna                        3.11
importlib_metadata          NA
importlib_resources         NA
ipykernel                   6.31.0
isoduration                 NA
jedi                        0.19.2
jinja2                      3.1.6
joblib                      1.5.3
json5                       0.12.1
jsonpointer                 3.0.0
jsonschema                  4.25.1
jsonschema_specifications   NA
jupyter_events              0.12.0
jupyter_server              2.17.0
jupyterlab_server           2.28.0
kiwisolver                  1.4.7
lark                        1.3.1
markupsafe                  3.0.3
matplotlib                  3.9.4
matplotlib_inline           0.2.1
mpl_toolkits                NA
narwhals                    2.14.0
nbformat                    5.10.4
numpy                       1.26.4
overrides                   NA
packaging                   25.0
parso                       0.8.5
patsy                       1.0.2
pexpect                     4.9.0
platformdirs                4.4.0
prometheus_client           NA
prompt_toolkit              3.0.52
psutil                      7.2.0
ptyprocess                  0.7.0
pure_eval                   0.2.3
pydev_ipython               NA
pydevconsole                NA
pydevd                      3.2.3
pydevd_file_utils           NA
pydevd_plugins              NA
pydevd_tracing              NA
pygments                    2.19.2
pyparsing                   3.3.1
pythonjsonlogger            NA
pytz                        2025.2
referencing                 NA
requests                    2.32.5
rfc3339_validator           0.1.4
rfc3986_validator           0.1.1
rfc3987_syntax              NA
rpds                        NA
scipy                       1.13.1
seaborn                     0.13.2
send2trash                  NA
six                         1.17.0
sphinxcontrib               NA
stack_data                  0.6.3
statsmodels                 0.14.6
threadpoolctl               3.6.0
tornado                     6.5.4
traitlets                   5.14.3
typing_extensions           NA
uri_template                NA
urllib3                     2.6.2
wcwidth                     0.2.14
webcolors                   NA
websocket                   1.9.0
yaml                        6.0.3
zipp                        NA
zmq                         27.1.0
zoneinfo                    NA
-----
IPython             8.18.1
jupyter_client      8.6.3
jupyter_core        5.8.1
jupyterlab          4.5.1
notebook            7.5.1
-----
Python 3.9.25 (main, Nov  3 2025, 15:16:36) [GCC 13.3.0]
Linux-6.11.0-1018-azure-x86_64-with-glibc2.39
-----
Session information updated at 2025-12-25 09:18