The effect size isn’t hard-coded; the prior is a mixture of a “no effect” distribution, a “small effect” distribution, and a “significant effect” distribution. The latter two are continuous over possible effect sizes (and overlap). Here’s the graph of the prior, from the last post:
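To make the mixture structure concrete, here’s a minimal sampling sketch of a three-part prior like the one described above. The weights and Beta parameters are made-up placeholders (not the post’s actual PRIORS values), and “no effect” is treated as a point mass at zero for simplicity:

```python
import random

# Placeholder component weights: P(no effect), P(small), P(significant).
P_EFFECT_SIZES = [0.5, 0.3, 0.2]

def sample_prior_effect_size(rng=random):
    """Draw one effect size from the illustrative mixture prior."""
    which = rng.choices([0, 1, 2], weights=P_EFFECT_SIZES)[0]
    if which == 0:
        return 0.0                          # "no effect": point mass at zero
    if which == 1:
        return rng.betavariate(1.5, 15.0)   # "small effect": mass near zero
    return rng.betavariate(3.0, 5.0)        # "significant effect": broader, larger

samples = [sample_prior_effect_size() for _ in range(10_000)]
```

Note that the two Beta components overlap, as in the real prior: a draw around 0.1 could plausibly come from either.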

The parameters of the prior are fixed, but the posterior parameters (part of the variational distribution, or “guide”) are adjusted to find the best fit. I hadn’t planned to include the guide here since it was in the last post, but here it is FWIW:

def guide(patients):
    n_trials = patients.size()[2]
    p_effect_sizes = pyro.param(
        "p_effect_sizes", common.make_tensor(PRIORS['p_effect_sizes'], n_trials),
        constraint=constraints.simplex)
    mode_effect_size = pyro.param(
        "mode_effect_size", common.make_tensor(PRIORS['mode_effect_size'], n_trials),
        constraint=constraints.interval(0, 1))
    effect_k = pyro.param(
        "effect_k", common.make_tensor(PRIORS['effect_k'], n_trials),
        constraint=constraints.greater_than_eq(1.0))
    small_effect_k = pyro.param(
        "small_effect_k", common.make_tensor(PRIORS['small_effect_k'], n_trials),
        constraint=constraints.greater_than_eq(1.0))
    distributions = make_distributions(mode_effect_size, effect_k, small_effect_k)

    with pyro.plate("world"):
        which_world = pyro.sample("which_world", dist.Categorical(p_effect_sizes))
        effect_size_raw = pyro.sample(
            "effect_size", common.mixture_n(which_world, distributions))
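The `common.mixture_n` helper isn’t shown here; my reading is that it draws each element from the component distribution selected by the corresponding `which_world` index. A minimal pure-Python analogue of that per-index selection (the helper’s name, signature, and the component samplers are all assumptions for illustration, not the post’s actual code):

```python
import random

def mixture_n(which, samplers):
    """Per-index mixture: draw element i from samplers[which[i]].

    `which` is a list of component indices (one per trial) and
    `samplers` a list of zero-argument sampling functions.
    """
    return [samplers[w]() for w in which]

# Usage: three components mirroring the guide's three "worlds",
# with placeholder parameters.
samplers = [
    lambda: 0.0,                             # no effect
    lambda: random.betavariate(1.5, 15.0),   # small effect
    lambda: random.betavariate(3.0, 5.0),    # significant effect
]
which_world = [random.choices([0, 1, 2], weights=[0.5, 0.3, 0.2])[0]
               for _ in range(8)]
effect_size_raw = mixture_n(which_world, samplers)
```

In the real guide this selection happens per trial inside the `pyro.plate("world")`, vectorized over tensors rather than looping in Python.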