balance: transformations and formulas
This tutorial focuses on the ways transformations, formulas, and penalties can be included in the pre-processing of the covariates before adjusting for them.
Example dataset - preparing the objects
The following is a toy simulated dataset.
For a more basic walkthrough of the elements in the next code block, please take a look at the tutorial: balance Quickstart: Analyzing and adjusting the bias on a simulated toy dataset
from balance import load_data
target_df, sample_df = load_data()
from balance import Sample
sample = Sample.from_frame(sample_df, outcome_columns=["happiness"])
target = Sample.from_frame(target_df, outcome_columns=["happiness"])
sample_with_target = sample.set_target(target)
sample_with_target
INFO (2024-12-06 18:43:19,586) [__init__/<module> (line 54)]: Using balance version 0.9.1
WARNING (2024-12-06 18:43:19,725) [util/guess_id_column (line 114)]: Guessed id column name id for the data
WARNING (2024-12-06 18:43:19,732) [sample_class/from_frame (line 261)]: No weights passed. Adding a 'weight' column and setting all values to 1
WARNING (2024-12-06 18:43:19,741) [util/guess_id_column (line 114)]: Guessed id column name id for the data
WARNING (2024-12-06 18:43:19,752) [sample_class/from_frame (line 261)]: No weights passed. Adding a 'weight' column and setting all values to 1
(balance.sample_class.Sample)
balance Sample object with target set
1000 observations x 3 variables: gender,age_group,income
id_column: id, weight_column: weight,
outcome_columns: happiness
target:
	balance Sample object
	10000 observations x 3 variables: gender,age_group,income
	id_column: id, weight_column: weight,
	outcome_columns: happiness
3 common variables: gender,age_group,income
When trying to understand what an adjustment does, we can look at the model_coef items collected from the diagnostics method.
adjusted = sample_with_target.adjust(
# method="ipw", # default method
# transformations=None,
# formula=None,
# penalty_factor=None, # all 1s
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:19,768) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:19,771) [adjustment/apply_transformations (line 306)]: Adding the variables: []
INFO (2024-12-06 18:43:19,771) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['gender', 'age_group', 'income']
INFO (2024-12-06 18:43:19,781) [adjustment/apply_transformations (line 347)]: Final variables in output: ['gender', 'age_group', 'income']
INFO (2024-12-06 18:43:19,788) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:19,866) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2024-12-06 18:43:19,867) [ipw/ipw (line 482)]: The number of columns in the model matrix: 16
INFO (2024-12-06 18:43:19,867) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:19,874) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:21,135) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:21,139) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.0131066]
INFO (2024-12-06 18:43:21,140) [ipw/ipw (line 593)]: Proportion null deviance explained [0.17168419]
INFO (2024-12-06 18:43:21,145) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:21,414) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | 0.44902 | intercept |
40 | model_coef | 0.083208 | _is_na_gender[T.True] |
41 | model_coef | -0.530633 | age_group[T.25-34] |
42 | model_coef | -1.157103 | age_group[T.35-44] |
43 | model_coef | -1.805472 | age_group[T.45+] |
44 | model_coef | 0.663907 | gender[T.Male] |
45 | model_coef | 0.038028 | gender[T._NA] |
46 | model_coef | 0.192791 | income[Interval(-0.0009997440000000001, 0.44, ... |
47 | model_coef | 0.17287 | income[Interval(0.44, 1.664, closed='right')] |
48 | model_coef | 0.008734 | income[Interval(1.664, 3.472, closed='right')] |
49 | model_coef | -0.184156 | income[Interval(11.312, 15.139, closed='right')] |
50 | model_coef | -0.640679 | income[Interval(15.139, 20.567, closed='right')] |
51 | model_coef | -0.966821 | income[Interval(20.567, 29.504, closed='right')] |
52 | model_coef | -1.806676 | income[Interval(29.504, 128.536, closed='right')] |
53 | model_coef | 0.0 | income[Interval(3.472, 5.663, closed='right')] |
54 | model_coef | 0.0 | income[Interval(5.663, 8.211, closed='right')] |
55 | model_coef | -0.009001 | income[Interval(8.211, 11.312, closed='right')] |
As we can see from the GLM coefficients, the categorical variables got extra NA columns (e.g., gender[T._NA] and the _is_na_gender indicator), and the income variable was bucketed into 10 buckets.
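This default bucketing is quantile-based, much like pandas' qcut. Here is a minimal sketch of the idea, using pd.qcut directly as a stand-in for balance.util.quantize (so the exact interval edges are an assumption):

```python
import pandas as pd

# A toy numeric variable, standing in for "income" above.
income = pd.Series(range(100), dtype="float64")

# Quantile-based bucketing into 10 roughly equal-sized buckets,
# similar to the default treatment of numeric variables.
buckets = pd.qcut(income, q=10)

print(buckets.nunique())  # 10 distinct buckets
print(buckets.value_counts().min())  # each bucket holds 10 values here
```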
We can change these defaults by deciding on the specific transformation we want.
Let's start with NO transformations.
The transformations argument accepts either a dict or None; None indicates no transformations.
adjusted = sample_with_target.adjust(
# method="ipw",
transformations=None,
# formula=formula,
# penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:21,427) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:21,429) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:21,483) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2024-12-06 18:43:21,484) [ipw/ipw (line 482)]: The number of columns in the model matrix: 8
INFO (2024-12-06 18:43:21,485) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:21,491) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:22,949) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:22,953) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.01683977]
INFO (2024-12-06 18:43:22,955) [ipw/ipw (line 593)]: Proportion null deviance explained [0.17253226]
INFO (2024-12-06 18:43:22,959) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:23,219) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | 1.025147 | intercept |
40 | model_coef | 0.0 | _is_na_gender[T.True] |
41 | model_coef | -0.429053 | age_group[T.25-34] |
42 | model_coef | -1.049726 | age_group[T.35-44] |
43 | model_coef | -1.67491 | age_group[T.45+] |
44 | model_coef | -0.35982 | gender[Female] |
45 | model_coef | 0.312036 | gender[Male] |
46 | model_coef | 0.0 | gender[_NA] |
47 | model_coef | -0.054544 | income |
In this setting, income was treated as a numeric variable, with no transformations (e.g., bucketing) applied to it. Regardless of the transformations, the model matrix still turned gender and age_group into dummy variables (including a column for NA).
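The dummy coding (including the NA column) is similar in spirit to pandas' get_dummies with dummy_na=True; this is a stand-in sketch, not patsy's exact coding:

```python
import numpy as np
import pandas as pd

gender = pd.Series(["Male", "Female", np.nan, "Male"])

# One indicator column per level, plus one for missing values.
dummies = pd.get_dummies(gender, dummy_na=True)

print(list(dummies.columns))  # ['Female', 'Male', nan]
```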
Next we can fit a simple transformation.
Let's say we wanted to lump together age_group levels that contain less than 25% of the data, and use a different bucketing for income. Here is how we'd do it:
from balance.util import fct_lump, quantize
transformations = {
"age_group": lambda x: fct_lump(x, 0.25),
"gender": lambda x: x,
"income": lambda x: quantize(x.fillna(x.mean()), q=3),
}
adjusted = sample_with_target.adjust(
# method="ipw",
transformations=transformations,
# formula=formula,
# penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:23,233) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:23,236) [adjustment/apply_transformations (line 306)]: Adding the variables: []
INFO (2024-12-06 18:43:23,236) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:23,243) [adjustment/apply_transformations (line 347)]: Final variables in output: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:23,250) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:23,327) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2024-12-06 18:43:23,328) [ipw/ipw (line 482)]: The number of columns in the model matrix: 8
INFO (2024-12-06 18:43:23,328) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:23,334) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:24,342) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:24,346) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.02499574]
INFO (2024-12-06 18:43:24,347) [ipw/ipw (line 593)]: Proportion null deviance explained [0.09302873]
WARNING (2024-12-06 18:43:24,350) [ipw/ipw (line 610)]: The propensity model has low fraction null deviance explained ([0.09302873]). Results may not be accurate
INFO (2024-12-06 18:43:24,355) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:24,610) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | -0.122845 | intercept |
40 | model_coef | 0.0 | _is_na_gender[T.True] |
41 | model_coef | -0.479657 | age_group[T.35-44] |
42 | model_coef | 0.102311 | age_group[T._lumped_other] |
43 | model_coef | 0.510107 | gender[T.Male] |
44 | model_coef | 0.0 | gender[T._NA] |
45 | model_coef | 0.250216 | income[Interval(-0.0009997440000000001, 4.194,... |
46 | model_coef | -0.871911 | income[Interval(13.693, 128.536, closed='right')] |
47 | model_coef | 0.0 | income[Interval(4.194, 13.693, closed='right')] |
As we can see, we changed income to have only 3 buckets, and lumped age_group into two groups (collapsing the "small" levels into the _lumped_other bucket).
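balance's fct_lump is similar in spirit to R's forcats::fct_lump. To illustrate the lumping idea on its own, here is a hand-rolled pandas sketch (not the actual implementation):

```python
import pandas as pd

def lump_rare(s: pd.Series, prop: float) -> pd.Series:
    # Collapse levels whose relative frequency is below `prop`
    # into a single "_lumped_other" level.
    freq = s.value_counts(normalize=True)
    rare = freq[freq < prop].index
    return s.where(~s.isin(rare), "_lumped_other")

ages = pd.Series(["18-24"] * 10 + ["25-34"] * 60 + ["35-44"] * 30)
lumped = lump_rare(ages, 0.25)
print(lumped.value_counts())
# "18-24" (10% of the data) falls below the 0.25 threshold and gets
# lumped; the other levels are kept as-is.
```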
Lastly, notice that if we omit a variable from transformations, it will not be available for model construction (this behavior might change in the future).
transformations = {
# "age_group": lambda x: fct_lump(x, 0.25),
"gender": lambda x: x,
# "income": lambda x: quantize(x.fillna(x.mean()), q=3),
}
adjusted = sample_with_target.adjust(
# method="ipw",
transformations=transformations,
# formula=formula,
# penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:24,624) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:24,626) [adjustment/apply_transformations (line 306)]: Adding the variables: []
INFO (2024-12-06 18:43:24,626) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['gender']
WARNING (2024-12-06 18:43:24,627) [adjustment/apply_transformations (line 343)]: Dropping the variables: ['age_group', 'income']
INFO (2024-12-06 18:43:24,628) [adjustment/apply_transformations (line 347)]: Final variables in output: ['gender']
INFO (2024-12-06 18:43:24,630) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:24,662) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['gender + _is_na_gender']
INFO (2024-12-06 18:43:24,662) [ipw/ipw (line 482)]: The number of columns in the model matrix: 4
INFO (2024-12-06 18:43:24,663) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:24,669) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:25,251) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:25,254) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.03209508]
INFO (2024-12-06 18:43:25,255) [ipw/ipw (line 593)]: Proportion null deviance explained [0.02608629]
WARNING (2024-12-06 18:43:25,257) [ipw/ipw (line 610)]: The propensity model has low fraction null deviance explained ([0.02608629]). Results may not be accurate
INFO (2024-12-06 18:43:25,261) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:25,518) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | 0.005059 | intercept |
40 | model_coef | 0.0 | _is_na_gender[T.True] |
41 | model_coef | -0.355656 | gender[Female] |
42 | model_coef | 0.223436 | gender[Male] |
43 | model_coef | 0.0 | gender[_NA] |
As we can see, only gender was included in the model.
# TODO: add more examples about how add_na works
# TODO: add more examples about rare values in categorical variables and how they are grouped together.
Creating new variables
In the next example we will create several new transformations of income.
The INFO log messages report which variables were added, which were transformed, and which variables end up in the final output.
The x in the lambda function can have one of two meanings:
- When a key in the dict matches the exact name of a variable in the DataFrame (e.g., "income"), the lambda function treats x as the pandas.Series of that variable.
- When the key does NOT exist in the DataFrame (e.g., "income_squared"), x becomes the DataFrame of the full data.
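To make the two cases concrete, here is a minimal pandas sketch of how such a dict could be applied (a hand-rolled illustration of the rule above, not balance's own apply_transformations):

```python
import pandas as pd

df = pd.DataFrame({"income": [1.0, 2.0, 3.0]})

transformations = {
    # Key matches an existing column: the lambda receives that Series.
    "income": lambda x: x,
    # Key is new: the lambda receives the whole DataFrame.
    "income_squared": lambda x: x.income ** 2,
}

out = pd.DataFrame({
    name: (f(df[name]) if name in df.columns else f(df))
    for name, f in transformations.items()
})
print(out["income_squared"].tolist())  # [1.0, 4.0, 9.0]
```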
from balance.util import fct_lump, quantize
transformations = {
"age_group": lambda x: x,
"gender": lambda x: x,
"income": lambda x: x,
"income_squared": lambda x: x.income**2,
"income_buckets": lambda x: quantize(x.income.fillna(x.income.mean()), q=3),
}
adjusted = sample_with_target.adjust(
# method="ipw",
transformations=transformations,
# formula=formula,
# penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:25,535) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:25,538) [adjustment/apply_transformations (line 306)]: Adding the variables: ['income_squared', 'income_buckets']
INFO (2024-12-06 18:43:25,538) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:25,544) [adjustment/apply_transformations (line 347)]: Final variables in output: ['income_squared', 'income_buckets', 'age_group', 'gender', 'income']
INFO (2024-12-06 18:43:25,554) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:25,635) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['income_squared + income_buckets + income + gender + age_group + _is_na_gender']
INFO (2024-12-06 18:43:25,635) [ipw/ipw (line 482)]: The number of columns in the model matrix: 11
INFO (2024-12-06 18:43:25,637) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:25,644) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:27,194) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:27,199) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.01593088]
INFO (2024-12-06 18:43:27,200) [ipw/ipw (line 593)]: Proportion null deviance explained [0.17207485]
INFO (2024-12-06 18:43:27,205) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:27,466) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | 0.738486 | intercept |
40 | model_coef | 0.034546 | _is_na_gender[T.True] |
41 | model_coef | -0.4542 | age_group[T.25-34] |
42 | model_coef | -1.080758 | age_group[T.35-44] |
43 | model_coef | -1.710418 | age_group[T.45+] |
44 | model_coef | 0.616218 | gender[T.Male] |
45 | model_coef | 0.017563 | gender[T._NA] |
46 | model_coef | -0.051628 | income |
47 | model_coef | 0.0 | income_buckets[Interval(-0.0009997440000000001... |
48 | model_coef | -0.094372 | income_buckets[Interval(13.693, 128.536, close... |
49 | model_coef | 0.0 | income_buckets[Interval(4.194, 13.693, closed=... |
50 | model_coef | 0.0 | income_squared |
Formula
The formula argument accepts a list of strings indicating how to combine the transformed variables. It follows the formula notation from patsy.
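To get a feel for what an interaction like age_group * gender expands to, here is a pandas-only sketch; patsy performs the equivalent expansion when building the model matrix (with reference levels dropped, which this simplified version does not do):

```python
import pandas as pd

df = pd.DataFrame({
    "age_group": ["18-24", "25-34", "18-24", "25-34"],
    "gender": ["Male", "Male", "Female", "Female"],
})

# Main effects: one dummy column per level of each variable.
main = pd.get_dummies(df[["age_group", "gender"]])

# Interaction: one dummy column per combination of levels, which is
# what "age_group * gender" adds on top of the main effects.
interaction = pd.get_dummies(df["age_group"] + ":" + df["gender"])

print(list(interaction.columns))
# ['18-24:Female', '18-24:Male', '25-34:Female', '25-34:Male']
```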
For example, we can have an interaction between age_group and gender:
from balance.util import fct_lump_by, quantize
transformations = {
"age_group": lambda x: x,
"gender": lambda x: x,
"income": lambda x: quantize(x.fillna(x.mean()), q=20),
}
formula = ["age_group * gender"]
# the penalty is per element in the formula list:
# penalty_factor = [0.1, 0.1, 0.1]
adjusted = sample_with_target.adjust(
method="ipw",
transformations=transformations,
formula=formula,
# penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:27,479) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:27,482) [adjustment/apply_transformations (line 306)]: Adding the variables: []
INFO (2024-12-06 18:43:27,482) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:27,487) [adjustment/apply_transformations (line 347)]: Final variables in output: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:27,493) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:27,546) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['age_group * gender']
INFO (2024-12-06 18:43:27,547) [ipw/ipw (line 482)]: The number of columns in the model matrix: 12
INFO (2024-12-06 18:43:27,548) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:27,554) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:28,695) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:28,698) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.02478636]
INFO (2024-12-06 18:43:28,700) [ipw/ipw (line 593)]: Proportion null deviance explained [0.11329577]
INFO (2024-12-06 18:43:28,704) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:28,968) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | -0.372754 | intercept |
40 | model_coef | 0.817785 | age_group[18-24] |
41 | model_coef | 0.0 | age_group[25-34] |
42 | model_coef | -0.357283 | age_group[35-44] |
43 | model_coef | -0.922517 | age_group[45+] |
44 | model_coef | 0.0 | age_group[T.25-34]:gender[T.Male] |
45 | model_coef | 0.0 | age_group[T.25-34]:gender[T._NA] |
46 | model_coef | 0.0 | age_group[T.35-44]:gender[T.Male] |
47 | model_coef | 0.0 | age_group[T.35-44]:gender[T._NA] |
48 | model_coef | 0.0 | age_group[T.45+]:gender[T.Male] |
49 | model_coef | 0.0 | age_group[T.45+]:gender[T._NA] |
50 | model_coef | 0.530983 | gender[T.Male] |
51 | model_coef | 0.0 | gender[T._NA] |
As we can see, the formula gives us combinations of age_group and gender, as well as main effects for each. Since income was not in the formula, it is not included in the model.
Formula and penalty_factor
The formula can be provided as several strings, and the penalty factor then indicates how strongly the model should adjust for each element of the formula. A larger penalty factor means that element will be corrected less.
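Conceptually, each string in the formula list expands into a group of model-matrix columns, and the matching entry of penalty_factor applies to every column in that group. Here is a minimal sketch of that bookkeeping (the column names are hypothetical, and this is not balance's internal code):

```python
# Each formula element expands into several model-matrix columns;
# the matching penalty factor is repeated for each of them.
formula = ["age_group + gender", "income"]
columns_per_element = [
    ["age_group[25-34]", "age_group[35-44]", "gender[Male]"],  # element 0
    ["income"],                                                # element 1
]
penalty_factor = [10, 0.1]

per_column_penalty = {
    col: penalty_factor[i]
    for i, cols in enumerate(columns_per_element)
    for col in cols
}
print(per_column_penalty["income"])  # 0.1: income is corrected the most
```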
The next two examples show how, in one case, we focus on correcting for income and, in the other, on correcting for age and gender.
transformations = {
"age_group": lambda x: x,
"gender": lambda x: x,
"income": lambda x: x,
}
formula = ["age_group + gender", "income"]
# the penalty is per element in the formula list:
penalty_factor = [10, 0.1]
adjusted = sample_with_target.adjust(
method="ipw",
transformations=transformations,
formula=formula,
penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:28,982) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:28,985) [adjustment/apply_transformations (line 306)]: Adding the variables: []
INFO (2024-12-06 18:43:28,985) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:28,987) [adjustment/apply_transformations (line 347)]: Final variables in output: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:28,992) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:29,046) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['age_group + gender', 'income']
INFO (2024-12-06 18:43:29,047) [ipw/ipw (line 482)]: The number of columns in the model matrix: 7
INFO (2024-12-06 18:43:29,047) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:29,054) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:29,715) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:29,718) [ipw/ipw (line 585)]: Chosen lambda for cv: [2.81364473]
INFO (2024-12-06 18:43:29,719) [ipw/ipw (line 593)]: Proportion null deviance explained [0.06543669]
WARNING (2024-12-06 18:43:29,721) [ipw/ipw (line 610)]: The propensity model has low fraction null deviance explained ([0.06543669]). Results may not be accurate
INFO (2024-12-06 18:43:29,725) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:29,979) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | 0.440079 | intercept |
40 | model_coef | 0.0 | age_group[18-24] |
41 | model_coef | 0.0 | age_group[25-34] |
42 | model_coef | 0.0 | age_group[35-44] |
43 | model_coef | 0.0 | age_group[45+] |
44 | model_coef | 0.0 | gender[T.Male] |
45 | model_coef | 0.0 | gender[T._NA] |
46 | model_coef | -0.048207 | income |
The example above corrected mostly for income. As we can see, age and gender got no correction at all (since their penalty was so high). Let's now flip this and correct mostly for age and gender:
transformations = {
"age_group": lambda x: x,
"gender": lambda x: x,
"income": lambda x: x,
}
formula = ["age_group + gender", "income"]
# the penalty is per element in the formula list:
penalty_factor = [0.1, 10] # this is flipped
adjusted = sample_with_target.adjust(
method="ipw",
transformations=transformations,
formula=formula,
penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:29,993) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:29,995) [adjustment/apply_transformations (line 306)]: Adding the variables: []
INFO (2024-12-06 18:43:29,996) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:29,997) [adjustment/apply_transformations (line 347)]: Final variables in output: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:30,003) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:30,055) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['age_group + gender', 'income']
INFO (2024-12-06 18:43:30,056) [ipw/ipw (line 482)]: The number of columns in the model matrix: 7
INFO (2024-12-06 18:43:30,057) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:30,064) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:30,971) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:30,974) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.33591375]
INFO (2024-12-06 18:43:30,976) [ipw/ipw (line 593)]: Proportion null deviance explained [0.11475535]
INFO (2024-12-06 18:43:30,982) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:31,238) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | -0.375686 | intercept |
40 | model_coef | 0.824489 | age_group[18-24] |
41 | model_coef | 0.0 | age_group[25-34] |
42 | model_coef | -0.387591 | age_group[35-44] |
43 | model_coef | -0.968518 | age_group[45+] |
44 | model_coef | 0.554579 | gender[T.Male] |
45 | model_coef | 0.0 | gender[T._NA] |
46 | model_coef | 0.0 | income |
In the above case, income got essentially no correction.
We can add two versions of income and give each of them a higher penalty than age and gender:
from balance.util import fct_lump_by, quantize
transformations = {
"age_group": lambda x: x,
"gender": lambda x: x,
"income": lambda x: x,
"income_buckets": lambda x: quantize(x.income.fillna(x.income.mean()), q=4),
}
formula = ["age_group + gender", "income", "income_buckets"]
# the penalty is per element in the formula list:
penalty_factor = [1, 2, 2]
adjusted = sample_with_target.adjust(
method="ipw",
transformations=transformations,
formula=formula,
penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:31,253) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:31,256) [adjustment/apply_transformations (line 306)]: Adding the variables: ['income_buckets']
INFO (2024-12-06 18:43:31,256) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:31,272) [adjustment/apply_transformations (line 347)]: Final variables in output: ['income_buckets', 'age_group', 'gender', 'income']
INFO (2024-12-06 18:43:31,281) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:31,362) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['age_group + gender', 'income', 'income_buckets']
INFO (2024-12-06 18:43:31,363) [ipw/ipw (line 482)]: The number of columns in the model matrix: 11
INFO (2024-12-06 18:43:31,363) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:31,370) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:32,770) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:32,774) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.02312964]
INFO (2024-12-06 18:43:32,775) [ipw/ipw (line 593)]: Proportion null deviance explained [0.17210643]
INFO (2024-12-06 18:43:32,779) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:33,044) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | 0.019333 | intercept |
40 | model_coef | 0.798821 | age_group[18-24] |
41 | model_coef | 0.0 | age_group[25-34] |
42 | model_coef | -0.450906 | age_group[35-44] |
43 | model_coef | -1.075619 | age_group[45+] |
44 | model_coef | 0.609681 | gender[T.Male] |
45 | model_coef | 0.039677 | gender[T._NA] |
46 | model_coef | -0.040336 | income |
47 | model_coef | 0.0 | income_buckets[Interval(-0.0009997440000000001... |
48 | model_coef | -0.156494 | income_buckets[Interval(17.694, 128.536, close... |
49 | model_coef | 0.0 | income_buckets[Interval(2.53, 8.211, closed='r... |
50 | model_coef | 0.0 | income_buckets[Interval(8.211, 17.694, closed=... |
Another way is to create a formula with several variations of each variable, and give each element a penalty of 1. For example:
from balance.util import fct_lump_by, quantize
transformations = {
"age_group": lambda x: x,
"gender": lambda x: x,
"income": lambda x: x,
"income_buckets": lambda x: quantize(x.income.fillna(x.income.mean()), q=4),
}
formula = ["age_group", "gender", "income + income_buckets"]
# the penalty is per element in the formula list:
penalty_factor = [1, 1, 1]
adjusted = sample_with_target.adjust(
method="ipw",
transformations=transformations,
formula=formula,
penalty_factor=penalty_factor,
# max_de=None,
)
adj_diag = adjusted.diagnostics()
adj_diag.query("metric == 'model_coef'")
INFO (2024-12-06 18:43:33,058) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:33,061) [adjustment/apply_transformations (line 306)]: Adding the variables: ['income_buckets']
INFO (2024-12-06 18:43:33,061) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:33,077) [adjustment/apply_transformations (line 347)]: Final variables in output: ['income_buckets', 'age_group', 'gender', 'income']
INFO (2024-12-06 18:43:33,086) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:33,168) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['age_group', 'gender', 'income + income_buckets']
INFO (2024-12-06 18:43:33,169) [ipw/ipw (line 482)]: The number of columns in the model matrix: 12
INFO (2024-12-06 18:43:33,170) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:33,176) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:34,628) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:34,631) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.02344857]
INFO (2024-12-06 18:43:34,633) [ipw/ipw (line 593)]: Proportion null deviance explained [0.17211675]
INFO (2024-12-06 18:43:34,637) [sample_class/diagnostics (line 839)]: Starting computation of diagnostics of the fitting
INFO (2024-12-06 18:43:34,901) [sample_class/diagnostics (line 1039)]: Done computing diagnostics
 | metric | val | var |
---|---|---|---|
39 | model_coef | 0.329009 | intercept |
40 | model_coef | 0.773776 | age_group[18-24] |
41 | model_coef | 0.0 | age_group[25-34] |
42 | model_coef | -0.35631 | age_group[35-44] |
43 | model_coef | -0.934014 | age_group[45+] |
44 | model_coef | -0.318703 | gender[Female] |
45 | model_coef | 0.274773 | gender[Male] |
46 | model_coef | 0.0 | gender[_NA] |
47 | model_coef | -0.044009 | income |
48 | model_coef | 0.0 | income_buckets[Interval(-0.0009997440000000001... |
49 | model_coef | -0.197619 | income_buckets[Interval(17.694, 128.536, close... |
50 | model_coef | 0.0 | income_buckets[Interval(2.53, 8.211, closed='r... |
51 | model_coef | 0.0 | income_buckets[Interval(8.211, 17.694, closed=... |
# Defaults from the package
adjusted = sample_with_target.adjust(
# max_de=None,
)
print(adjusted.summary())
print(adjusted.outcomes().summary())
adjusted.covars().plot(library = "seaborn", dist_type = "kde")
INFO (2024-12-06 18:43:34,914) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:34,917) [adjustment/apply_transformations (line 306)]: Adding the variables: []
INFO (2024-12-06 18:43:34,917) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['gender', 'age_group', 'income']
INFO (2024-12-06 18:43:34,926) [adjustment/apply_transformations (line 347)]: Final variables in output: ['gender', 'age_group', 'income']
INFO (2024-12-06 18:43:34,932) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:35,009) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2024-12-06 18:43:35,010) [ipw/ipw (line 482)]: The number of columns in the model matrix: 16
INFO (2024-12-06 18:43:35,010) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:35,017) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:36,259) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:36,262) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.0131066]
INFO (2024-12-06 18:43:36,264) [ipw/ipw (line 593)]: Proportion null deviance explained [0.17168419]
Covar ASMD reduction: 59.7%, design effect: 1.897
Covar ASMD (7 variables): 0.327 -> 0.132
Model performance: Model proportion deviance explained: 0.172

1 outcomes: ['happiness']
Mean outcomes (with 95% confidence intervals):
source       self  target  unadjusted           self_ci         target_ci     unadjusted_ci
happiness  53.389  56.278      48.559  (52.183, 54.595)  (55.961, 56.595)  (47.669, 49.449)

Response rates (relative to number of respondents in sample):
   happiness
n     1000.0
%      100.0
Response rates (relative to notnull rows in the target):
   happiness
n     1000.0
%       10.0
Response rates (in the target):
   happiness
n    10000.0
%      100.0
# No transformations at all
# transformations = None is just like using:
# transformations = {
# "age_group": lambda x: x,
# "gender": lambda x: x,
# "income": lambda x: x,
# }
adjusted = sample_with_target.adjust(
method="ipw",
transformations=None,
# formula=formula,
# penalty_factor=penalty_factor,
# max_de=None,
)
print(adjusted.summary())
print(adjusted.outcomes().summary())
adjusted.covars().plot(library = "seaborn", dist_type = "kde")
# Compared to the defaults: a larger design effect (2.196 vs 1.897), but a better ASMD reduction (68.1% vs 59.7%).
INFO (2024-12-06 18:43:37,199) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:37,201) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:37,255) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2024-12-06 18:43:37,256) [ipw/ipw (line 482)]: The number of columns in the model matrix: 8
INFO (2024-12-06 18:43:37,256) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:37,263) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:38,718) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:38,722) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.01683977]
INFO (2024-12-06 18:43:38,724) [ipw/ipw (line 593)]: Proportion null deviance explained [0.17253226]
Covar ASMD reduction: 68.1%, design effect: 2.196
Covar ASMD (7 variables): 0.327 -> 0.104
Model performance: Model proportion deviance explained: 0.173

1 outcomes: ['happiness']
Mean outcomes (with 95% confidence intervals):
source      self  target  unadjusted           self_ci         target_ci     unadjusted_ci
happiness 53.416  56.278      48.559  (52.184, 54.648)  (55.961, 56.595)  (47.669, 49.449)

Response rates (relative to number of respondents in sample):
   happiness
n     1000.0
%      100.0
Response rates (relative to notnull rows in the target):
   happiness
n     1000.0
%       10.0
Response rates (in the target):
   happiness
n    10000.0
%      100.0
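The "Covar ASMD" numbers above measure the standardized gap between sample and target per covariate. A sketch of the absolute standardized mean difference for a single numeric covariate — the exact convention balance uses (which standard deviation to standardize by, and how categoricals are handled) is not shown here, so treat this as illustrative:

```python
from statistics import mean, stdev

def asmd(sample_vals, target_vals):
    """Absolute standardized mean difference for one numeric covariate.

    Standardizes the gap between sample and target means by the
    target's standard deviation (one common convention).
    """
    return abs(mean(sample_vals) - mean(target_vals)) / stdev(target_vals)

print(asmd([2.0, 3.0, 4.0], [2.0, 3.0, 4.0]))  # identical samples -> 0.0
print(asmd([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # one-sd mean gap -> 1.0
```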
# No transformations at all
transformations = None
# But passing a squared term of income to the formula:
formula = ["age_group + gender + income + income**2"]
# the penalty is per element in the formula list:
# penalty_factor = [1]
adjusted = sample_with_target.adjust(
method="ipw",
transformations=transformations,
formula=formula,
# penalty_factor=penalty_factor,
# max_de=None,
)
print(adjusted.summary())
print(adjusted.outcomes().summary())
adjusted.covars().plot(library = "seaborn", dist_type = "kde")
# Adding income**2 to the formula led to a lower Deff, but also a lower ASMD reduction.
INFO (2024-12-06 18:43:39,631) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:39,632) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:39,686) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['age_group + gender + income + income**2']
INFO (2024-12-06 18:43:39,686) [ipw/ipw (line 482)]: The number of columns in the model matrix: 7
INFO (2024-12-06 18:43:39,687) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:39,694) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:40,921) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:40,924) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.02098571]
INFO (2024-12-06 18:43:40,926) [ipw/ipw (line 593)]: Proportion null deviance explained [0.17160363]
Covar ASMD reduction: 56.9%, design effect: 2.005
Covar ASMD (7 variables): 0.327 -> 0.141
Model performance: Model proportion deviance explained: 0.172

1 outcomes: ['happiness']
Mean outcomes (with 95% confidence intervals):
source      self  target  unadjusted           self_ci         target_ci     unadjusted_ci
happiness 52.692  56.278      48.559  (51.508, 53.877)  (55.961, 56.595)  (47.669, 49.449)

Response rates (relative to number of respondents in sample):
   happiness
n     1000.0
%      100.0
Response rates (relative to notnull rows in the target):
   happiness
n     1000.0
%       10.0
Response rates (in the target):
   happiness
n    10000.0
%      100.0
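The next cell buckets income into quantiles with a `quantize` helper. Assuming `quantize` behaves roughly like mean-imputation followed by pandas' `qcut` (an assumption about the helper's behavior, not its actual implementation), the idea looks like this:

```python
import pandas as pd

# a small income-like series with a missing value
income = pd.Series([10.0, 20.0, None, 40.0, 50.0, 60.0, 70.0, 80.0])

# mean-impute the missing value, then cut into equal-sized quantile buckets
filled = income.fillna(income.mean())
buckets = pd.qcut(filled, q=4)

print(buckets.value_counts().sort_index())  # 4 buckets of roughly equal size
```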
from balance.util import quantize  # needed for the bucketing transformation below

transformations = {
"age_group": lambda x: x,
"gender": lambda x: x,
"income": lambda x: x,
"income_buckets": lambda x: quantize(x.income.fillna(x.income.mean()), q=20),
}
formula = ["age_group + gender", "income_buckets"]
# the penalty is per element in the formula list:
penalty_factor = [1, 0.1]
adjusted = sample_with_target.adjust(
method="ipw",
transformations=transformations,
formula=formula,
penalty_factor=penalty_factor,
# max_de=None,
)
print(adjusted.summary())
print(adjusted.outcomes().summary())
adjusted.covars().plot(library = "seaborn", dist_type = "kde")
# By adding income_buckets (in place of raw income), and by giving it a smaller penalty factor (i.e., more weight in the fit),
# we managed to correct income quite well, but at the expense of age and gender.
INFO (2024-12-06 18:43:41,906) [ipw/ipw (line 421)]: Starting ipw function
INFO (2024-12-06 18:43:41,909) [adjustment/apply_transformations (line 306)]: Adding the variables: ['income_buckets']
INFO (2024-12-06 18:43:41,910) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender', 'income']
INFO (2024-12-06 18:43:41,916) [adjustment/apply_transformations (line 347)]: Final variables in output: ['income_buckets', 'age_group', 'gender', 'income']
INFO (2024-12-06 18:43:41,925) [ipw/ipw (line 455)]: Building model matrix
INFO (2024-12-06 18:43:42,006) [ipw/ipw (line 479)]: The formula used to build the model matrix: ['age_group + gender', 'income_buckets']
INFO (2024-12-06 18:43:42,007) [ipw/ipw (line 482)]: The number of columns in the model matrix: 26
INFO (2024-12-06 18:43:42,007) [ipw/ipw (line 483)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:42,014) [ipw/ipw (line 514)]: Fitting logistic model
INFO (2024-12-06 18:43:43,468) [ipw/ipw (line 555)]: max_de: None
INFO (2024-12-06 18:43:43,472) [ipw/ipw (line 585)]: Chosen lambda for cv: [0.0074004]
INFO (2024-12-06 18:43:43,473) [ipw/ipw (line 593)]: Proportion null deviance explained [0.1759225]
Covar ASMD reduction: 60.2%, design effect: 2.174
Covar ASMD (7 variables): 0.327 -> 0.130
Model performance: Model proportion deviance explained: 0.176

1 outcomes: ['happiness']
Mean outcomes (with 95% confidence intervals):
source      self  target  unadjusted           self_ci         target_ci     unadjusted_ci
happiness 52.211  56.278      48.559  (50.998, 53.425)  (55.961, 56.595)  (47.669, 49.449)

Response rates (relative to number of respondents in sample):
   happiness
n     1000.0
%      100.0
Response rates (relative to notnull rows in the target):
   happiness
n     1000.0
%       10.0
Response rates (in the target):
   happiness
n    10000.0
%      100.0
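Note that `penalty_factor` has one entry per element of the formula list, not per model-matrix column. A plausible sketch of how per-element penalties could expand into per-column penalties — this is an assumption about the internals based on the logs above; the 6/20 column split below is hypothetical, only the 26-column total comes from the log:

```python
formula = ["age_group + gender", "income_buckets"]
penalty_factor = [1.0, 0.1]

# hypothetical dummy-coded column counts per formula element
# (illustrative; only the 26-column total is taken from the log output)
cols_per_element = [6, 20]

# repeat each element's penalty for every column that element generates,
# so income_buckets columns are penalized 10x less than the demographics
column_penalties = []
for pf, n_cols in zip(penalty_factor, cols_per_element):
    column_penalties.extend([pf] * n_cols)

print(len(column_penalties))  # 26, one penalty per model-matrix column
```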
CBPSĀ¶
Let's see if we can improve on CBPS a bit.
# Defaults from the package
adjusted = sample_with_target.adjust(
method = "cbps",
# max_de=None,
)
print(adjusted.summary())
print(adjusted.outcomes().summary())
adjusted.covars().plot(library = "seaborn", dist_type = "kde")
# CBPS already corrects a lot. Let's see if we can make it correct a tiny bit more.
INFO (2024-12-06 18:43:44,384) [cbps/cbps (line 411)]: Starting cbps function
INFO (2024-12-06 18:43:44,387) [adjustment/apply_transformations (line 306)]: Adding the variables: []
INFO (2024-12-06 18:43:44,387) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['gender', 'age_group', 'income']
INFO (2024-12-06 18:43:44,397) [adjustment/apply_transformations (line 347)]: Final variables in output: ['gender', 'age_group', 'income']
INFO (2024-12-06 18:43:44,481) [cbps/cbps (line 462)]: The formula used to build the model matrix: ['income + gender + age_group + _is_na_gender']
INFO (2024-12-06 18:43:44,482) [cbps/cbps (line 474)]: The number of columns in the model matrix: 16
INFO (2024-12-06 18:43:44,483) [cbps/cbps (line 475)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:44,496) [cbps/cbps (line 537)]: Finding initial estimator for GMM optimization
INFO (2024-12-06 18:43:44,600) [cbps/cbps (line 564)]: Finding initial estimator for GMM optimization that minimizes the balance loss
WARNING (2024-12-06 18:43:44,992) [cbps/cbps (line 581)]: Convergence of bal_loss function has failed due to 'Maximum number of function evaluations has been exceeded.'
INFO (2024-12-06 18:43:44,993) [cbps/cbps (line 599)]: Running GMM optimization
WARNING (2024-12-06 18:43:45,522) [cbps/cbps (line 614)]: Convergence of gmm_loss function with gmm_init start point has failed due to 'Maximum number of function evaluations has been exceeded.'
WARNING (2024-12-06 18:43:46,057) [cbps/cbps (line 632)]: Convergence of gmm_loss function with beta_balance start point has failed due to 'Maximum number of function evaluations has been exceeded.'
INFO (2024-12-06 18:43:46,064) [cbps/cbps (line 728)]: Done cbps function
Covar ASMD reduction: 77.6%, design effect: 2.782
Covar ASMD (7 variables): 0.327 -> 0.073

1 outcomes: ['happiness']
Mean outcomes (with 95% confidence intervals):
source      self  target  unadjusted          self_ci         target_ci     unadjusted_ci
happiness 54.389  56.278      48.559  (53.02, 55.757)  (55.961, 56.595)  (47.669, 49.449)

Response rates (relative to number of respondents in sample):
   happiness
n     1000.0
%      100.0
Response rates (relative to notnull rows in the target):
   happiness
n     1000.0
%       10.0
Response rates (in the target):
   happiness
n    10000.0
%      100.0
import numpy as np
from balance.util import quantize  # needed for the bucketing transformation below

# Custom transformations: replace raw income with log-income and quantile buckets
transformations = {
"age_group": lambda x: x,
"gender": lambda x: x,
# "income": lambda x: x,
"income_log": lambda x: np.log(x.income.fillna(x.income.mean())),
"income_buckets": lambda x: quantize(x.income.fillna(x.income.mean()), q=5),
}
formula = ["age_group + gender + income_log * income_buckets"]
adjusted = sample_with_target.adjust(
method="cbps",
transformations=transformations,
formula=formula,
# penalty_factor=penalty_factor, # CBPS seems to ignore the penalty factor.
# max_de=None,
)
print(adjusted.summary())
print(adjusted.outcomes().summary())
adjusted.covars().plot(library="seaborn", dist_type="kde")
# Trying various transformations gives slightly different results (some effect on the outcome, Deff and ASMD) - but nothing too major here.
INFO (2024-12-06 18:43:46,973) [cbps/cbps (line 411)]: Starting cbps function
INFO (2024-12-06 18:43:46,975) [adjustment/apply_transformations (line 306)]: Adding the variables: ['income_log', 'income_buckets']
INFO (2024-12-06 18:43:46,976) [adjustment/apply_transformations (line 307)]: Transforming the variables: ['age_group', 'gender']
WARNING (2024-12-06 18:43:46,982) [adjustment/apply_transformations (line 343)]: Dropping the variables: ['income']
INFO (2024-12-06 18:43:46,983) [adjustment/apply_transformations (line 347)]: Final variables in output: ['income_log', 'income_buckets', 'age_group', 'gender']
INFO (2024-12-06 18:43:47,071) [cbps/cbps (line 462)]: The formula used to build the model matrix: ['age_group + gender + income_log * income_buckets']
INFO (2024-12-06 18:43:47,073) [cbps/cbps (line 474)]: The number of columns in the model matrix: 15
INFO (2024-12-06 18:43:47,073) [cbps/cbps (line 475)]: The number of rows in the model matrix: 11000
INFO (2024-12-06 18:43:47,094) [cbps/cbps (line 537)]: Finding initial estimator for GMM optimization
INFO (2024-12-06 18:43:47,213) [cbps/cbps (line 564)]: Finding initial estimator for GMM optimization that minimizes the balance loss
INFO (2024-12-06 18:43:47,577) [cbps/cbps (line 599)]: Running GMM optimization
WARNING (2024-12-06 18:43:48,104) [cbps/cbps (line 614)]: Convergence of gmm_loss function with gmm_init start point has failed due to 'Maximum number of function evaluations has been exceeded.'
WARNING (2024-12-06 18:43:48,617) [cbps/cbps (line 632)]: Convergence of gmm_loss function with beta_balance start point has failed due to 'Maximum number of function evaluations has been exceeded.'
INFO (2024-12-06 18:43:48,624) [cbps/cbps (line 728)]: Done cbps function
Covar ASMD reduction: 82.2%, design effect: 3.042
Covar ASMD (7 variables): 0.327 -> 0.058

1 outcomes: ['happiness']
Mean outcomes (with 95% confidence intervals):
source     self  target  unadjusted         self_ci         target_ci     unadjusted_ci
happiness 54.42  56.278      48.559  (53.03, 55.81)  (55.961, 56.595)  (47.669, 49.449)

Response rates (relative to number of respondents in sample):
   happiness
n     1000.0
%      100.0
Response rates (relative to notnull rows in the target):
   happiness
n     1000.0
%       10.0
Response rates (in the target):
   happiness
n    10000.0
%      100.0
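The `income_log` transformation above mean-imputes missing income values before taking the log, so no NaNs (or logs of NaN) reach the model matrix. A standalone sketch of that pattern:

```python
import numpy as np
import pandas as pd

# an income-like series with a missing value
income = pd.Series([1000.0, 2000.0, np.nan, 8000.0])

# impute missing income with the observed mean, then take the log;
# Series.mean() skips NaN by default, so the imputed value is the mean
# of the three observed incomes
income_log = np.log(income.fillna(income.mean()))

print(income_log.isna().any())  # False: the NaN was imputed before the log
```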
# Session info
import session_info
session_info.show(html=False, dependencies=True)
-----
balance             0.9.1
numpy               1.24.4
pandas              1.4.3
session_info        1.0.0
-----
PIL 11.0.0 anyio NA apport_python_hook NA argcomplete NA arrow 1.3.0 asttokens NA attr 24.2.0 attrs 24.2.0 babel 2.16.0 beta_ufunc NA binom_ufunc NA certifi 2020.06.20 chardet 4.0.0 charset_normalizer 3.4.0 colorama 0.4.4 comm 0.2.2 coxnet NA cvcompute NA cvelnet NA cvfishnet NA cvglmnet NA cvglmnetCoef NA cvglmnetPredict NA cvlognet NA cvmrelnet NA cvmultnet NA cycler 0.12.1 cython_runtime NA dateutil 2.9.0.post0 debugpy 1.8.9 decorator 5.1.1 defusedxml 0.7.1 elnet NA exceptiongroup 1.2.2 executing 2.1.0 fastjsonschema NA fishnet NA fqdn NA gi 3.42.1 gio NA glib NA glmnet NA glmnetCoef NA glmnetControl NA glmnetPredict NA glmnetSet NA glmnet_python NA gobject NA gtk NA hypergeom_ufunc NA idna 3.3 ipfn NA ipykernel 6.29.5 isoduration NA jedi 0.19.2 jinja2 3.1.4 joblib 1.4.2 json5 0.10.0 jsonpointer 2.0 jsonschema 4.23.0 jsonschema_specifications NA jupyter_events 0.10.0 jupyter_server 2.14.2 jupyterlab_server 2.27.3 kiwisolver 1.4.7 loadGlmLib NA lognet NA markupsafe 2.0.1 matplotlib 3.9.3 matplotlib_inline 0.1.7 mpl_toolkits NA mrelnet NA nbformat 5.10.4 nbinom_ufunc NA ncf_ufunc NA overrides NA packaging 24.2 parso 0.8.4 patsy 1.0.1 platformdirs 4.3.6 plotly 5.24.1 prometheus_client NA prompt_toolkit 3.0.48 psutil 6.1.0 pure_eval 0.2.3 pydev_ipython NA pydevconsole NA pydevd 3.2.3 pydevd_file_utils NA pydevd_plugins NA pydevd_tracing NA pygments 2.18.0 pyparsing 2.4.7 pythonjsonlogger NA pytz 2022.1 referencing NA requests 2.32.3 rfc3339_validator 0.1.4 rfc3986_validator 0.1.1 rpds NA scipy 1.9.1 seaborn 0.13.0 send2trash NA sitecustomize NA six 1.16.0 sklearn 1.5.2 sniffio 1.3.1 sphinxcontrib NA stack_data 0.6.3 statsmodels 0.14.4 threadpoolctl 3.5.0 tornado 6.4.2 traitlets 5.14.3 typing_extensions NA uri_template NA urllib3 1.26.5 wcwidth 0.2.13 webcolors NA websocket 1.8.0 wtmean NA yaml 5.4.1 zmq 26.2.0 zoneinfo NA zope NA
-----
IPython             8.30.0
jupyter_client      8.6.3
jupyter_core        5.7.2
jupyterlab          4.3.2
notebook            7.3.1
-----
Python 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Linux-6.5.0-1025-azure-x86_64-with-glibc2.35
-----
Session information updated at 2024-12-06 18:43