
I am getting a -inf or nan value from my PyMC model and would like some help debugging it and figuring out where the problem is. Here is my model:

# ---------------------------------------------------------------
## PyMC Model
# ---------------------------------------------------------------

# ---------------------------------------------------------------
# Define the time points for the simulated data
t_span = [0, 0.5]
t_eval = np.linspace(t_span[0], t_span[1], 100)
# Initial Conditions of the ODE
Ca0 = 50
Y0 = [Ca0]
# ---------------------------------------------------------------
# Define the ODE system to solve using PyMC
pymc_nuts_ode = DifferentialEquation(
    func=NS.rhs_swapped             ,  # The ODE system to solve
    times=noisy_data.time.values    ,  # Array of time points to solve the ODE at
    n_states=1                      ,  # Dimension of the ODE (number of dependent variables)
    n_theta=2                       ,  # Dimension of theta (number of parameters)
    t0=noisy_data.time.values[0]    ,  # The initial time
)
# ---------------------------------------------------------------
# Prior means (informative priors - true values)
k = 0.18
alpha = 2
theta = [k, alpha]
# ---------------------------------------------------------------
# PyMC Model
with pm.Model() as model:
    # Priors
    # theta - parameters of the ODE
    k = pm.TruncatedNormal('k', mu=theta[0], sigma=0.01, lower=0, initval=theta[0])
    alpha = pm.TruncatedNormal('alpha', mu=theta[1], sigma=0.1, lower=0, initval=theta[1])
    theta = [k, alpha]
    # y0 - initial conditions
    Ca0 = pm.TruncatedNormal('Ca0', mu=Y0[0], sigma=0.1, lower=0, initval=Y0[0])
    y0 = [Ca0]
    # Ca0 = pm.ConstantData('Ca0', value=Y0[0])

    # ODE solution
    ode_solution = pymc_nuts_ode(theta=theta, y0=y0)
    # ode_solution = pymc_nuts_ode(theta=[k, alpha], y0=[Ca0])

    # Likelihood
    # sigma - standard deviation for the likelihood
    sigma = pm.HalfNormal('sigma', 1)
    # Likelihood of the ODE solution
    likelihood = pm.Normal("Y_obs", mu=ode_solution, sigma=sigma, observed=noisy_data[["Ca"]].values)
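To sanity-check the ODE outside PyMC, here is a minimal standalone sketch I can run (the helper names `rhs_swapped`/`rhs_original` here are my own local copies, evaluated at the prior means) that integrates both the version currently in the model and the commented-out variant with plain `scipy.integrate.odeint`:

```python
# Sanity check of the ODE at the prior means, outside PyMC.
# The two rhs functions mirror the version in the model and the
# commented-out alternative; names here are illustrative.
import numpy as np
from scipy.integrate import odeint

def rhs_swapped(Y, t, theta):
    # version currently in the model: pow(-k*Ca, alpha)
    return [pow((-theta[0] * Y[0]), theta[1])]

def rhs_original(Y, t, theta):
    # commented-out version: -k * Ca**alpha
    return [-theta[0] * Y[0] ** theta[1]]

theta = (0.18, 2.0)          # prior means [k, alpha]
t = np.linspace(0, 0.5, 100)
Ca0 = 50.0

swapped = odeint(rhs_swapped, [Ca0], t, args=(theta,))
original = odeint(rhs_original, [Ca0], t, args=(theta,))

print(swapped[-1, 0])   # grows: dCa/dt = (k*Ca)**2 > 0
print(original[-1, 0])  # decays: dCa/dt = -k*Ca**2 < 0
```

Even at the prior means (where alpha = 2.0 exactly, so no nan appears), the swapped form grows while the commented-out form decays toward zero.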

Sampling with either NUTS or Slice gives me the same problem with -inf or nan for my Y_obs variable:

# ----------------------------------------
## Slice Sampler
# ----------------------------------------

# Variable list to give to the sample step parameter
vars_list = list(model.values_to_rvs.keys())[:-1]

# Specify the sampler
sampler = 'Slice Sampler'
tune = draws = 1000

if __name__ == '__main__':
    model.debug()
    trace_slice = NS.run_model_slice(vars_list=vars_list, model=model, tune=tune, draws=draws, chains=4, cores=4)
    # Plot
    az.summary(trace_slice)
    az.plot_trace(trace_slice, kind="rank_bars")
    plt.suptitle(f"Trace Plot {sampler}")
    plt.tight_layout()

fig, ax = plt.subplots(figsize=(12, 4))
NS.plot_inference(data=noisy_data, ax=ax, trace=trace_slice, title=f"Posterior Inference {sampler}")
The variable Y_obs has the following parameters:
0: DifferentialEquation{func=<function rhs_swapped at 0x000001DFD3A4E8E0>, times=(0.0, 0.005050505050505051, 0.010101010101010102, 0.015151515151515152, 0.020202020202020204, 0.025252525252525256, 0.030303030303030304, 0.03535353535353536, 0.04040404040404041, 0.045454545454545456, 0.05050505050505051, 0.05555555555555556, 0.06060606060606061, 0.06565656565656566, 0.07070707070707072, 0.07575757575757576, 0.08080808080808081, 0.08585858585858587, 0.09090909090909091, 0.09595959595959597, 0.10101010101010102, 0.10606060606060606, 0.11111111111111112, 0.11616161616161617, 0.12121212121212122, 0.12626262626262627, 0.13131313131313133, 0.13636363636363638, 0.14141414141414144, 0.14646464646464646, 0.15151515151515152, 0.15656565656565657, 0.16161616161616163, 0.16666666666666669, 0.17171717171717174, 0.1767676767676768, 0.18181818181818182, 0.18686868686868688, 0.19191919191919193, 0.196969696969697, 0.20202020202020204, 0.2070707070707071, 0.21212121212121213, 0.21717171717171718, 0.22222222222222224, 0.2272727272727273, 0.23232323232323235, 0.2373737373737374, 0.24242424242424243, 0.2474747474747475, 0.25252525252525254, 0.2575757575757576, 0.26262626262626265, 0.2676767676767677, 0.27272727272727276, 0.2777777777777778, 0.2828282828282829, 0.2878787878787879, 0.29292929292929293, 0.297979797979798, 0.30303030303030304, 0.3080808080808081, 0.31313131313131315, 0.31818181818181823, 0.32323232323232326, 0.3282828282828283, 0.33333333333333337, 0.3383838383838384, 0.3434343434343435, 0.3484848484848485, 0.3535353535353536, 0.3585858585858586, 0.36363636363636365, 0.36868686868686873, 0.37373737373737376, 0.37878787878787884, 0.38383838383838387, 0.3888888888888889, 0.393939393939394, 0.398989898989899, 0.4040404040404041, 0.4090909090909091, 0.4141414141414142, 0.4191919191919192, 0.42424242424242425, 0.42929292929292934, 0.43434343434343436, 0.43939393939393945, 0.4444444444444445, 0.44949494949494956, 0.4545454545454546, 0.4595959595959596, 0.4646464646464647, 
0.4696969696969697, 0.4747474747474748, 0.47979797979797983, 0.48484848484848486, 0.48989898989898994, 0.494949494949495, 0.5), n_states=1, n_theta=2, t0=0.0}.0 [id A] <Matrix(float64, shape=(?, ?))>
 ├─ MakeVector{dtype='float64'} [id B] <Vector(float64, shape=(1,))>
 │  └─ Exp [id C] <Scalar(float64, shape=())> 'Ca0'
 │     └─ Ca0_interval__ [id D] <Scalar(float64, shape=())>
 └─ MakeVector{dtype='float64'} [id E] <Vector(float64, shape=(2,))>
    ├─ Exp [id F] <Scalar(float64, shape=())> 'k'
    │  └─ k_interval__ [id G] <Scalar(float64, shape=())>
    └─ Exp [id H] <Scalar(float64, shape=())> 'alpha'
       └─ alpha_interval__ [id I] <Scalar(float64, shape=())>
1: Exp [id J] <Scalar(float64, shape=())> 'sigma'
 └─ sigma_log__ [id K] <Scalar(float64, shape=())>
c:\Users\---\anaconda3\envs\pymc_env\Lib\site-packages\pymc\ode\ode.py:130: ODEintWarning: Excess accuracy requested (tolerances too small). Run with full_output = 1 to get quantitative information.
  sol = scipy.integrate.odeint(
The parameters evaluate to:
0: [[5.00000000e+001]
 [5.04159234e+001]
 [5.04159234e+001]
 [1.39736850e-076]
 [9.16442737e-072]
 [3.24249365e-086]
 [7.86207943e-067]
 [1.05177001e-153]
 [1.74349530e-076]
 [1.05206420e-153]
 [2.21420214e-052]
 [6.98347617e-077]
 [7.27131731e-043]
 [3.24249370e-086]
 [2.52742153e-052]
 [1.30354289e-076]
 [1.48440629e-076]
 [9.72377416e-072]
 [1.17767151e-047]
 [1.24475475e-047]
 [4.18108097e-062]
 [2.74073548e-057]
 [1.38294297e-047]
 [1.45136677e-047]
...
 value = 30.99585709980524 -> logp = nan
 value = 30.065105897128564 -> logp = nan

I gave the slice sampling run as the example because both Slice and NUTS give the same problem. I believe my priors are well specified, and the mean values used are the 'true' values used to generate the data.

I have a working model of the Lotka-Volterra system from the documentation, using all the different sampling methods shown there, and this error did not come up.

Any help in fixing this would be appreciated; I don't know where to start.

Edit:

NS is a Python script containing the RHS function for the ODE, functions for running the model, and some plotting functions. Below are the functions, excluding the plotting ones:

def rhs_swapped(Y, t, theta):
    """
    Right-hand side of the ODE system.
    """
    # Y = [0 = Ca]
    # theta = [0 = k , 1 = alpha]

    # Differential equations
    # dCadt = -theta[0] * Y[0] ** theta[1]
    dCadt = pow((-theta[0] * Y[0]), theta[1])
    dYdt = [dCadt] # Vector of derivatives
    return dYdt

def run_model_slice(vars_list, model, tune, draws, chains=4, cores=4):
    with model:
        trace = pm.sample(step=[pm.Slice(vars_list)], tune=tune, draws=draws, chains=chains, cores=cores)
    return trace

def run_model_nuts(model, tune, draws):
    with model:
        trace = pm.sample(tune=tune, draws=draws)
    return trace

  • What is the NS module/object that you are using? Commented Apr 16, 2024 at 9:27
  • A script that contains the rhs function and some plotting functions (similar to how the PyMC documentation does it in its Lotka-Volterra example). Check the edit part of the question, thank you. Commented Apr 16, 2024 at 12:15

