3. Tuning time-step for integration approximator #9
Replies: 1 comment
Tom commented: The classic test to perform is to halve (or double) the time step until changes in the model trajectory are negligible. I think direct trajectory comparisons have far more power to discriminate than model-data comparisons. If the data's noisy, simulation precision could be quite bad before the signal would emerge from the noise.

If the model is stochastic, or has some discrete events with sensitive thresholds, the dt/2 strategy doesn't necessarily work. However, I'd still be more interested in statistical model-model comparisons than model-data comparisons, because you can compare every state variable, not just the few you have data for. If you have very high data coverage (rare in SD), either approach would work. Whether the model is continuous or discrete/stochastic, extreme-conditions tests are really important here.
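The dt/2 strategy can be sketched in a few lines. Below is an illustrative Python sketch (the toy logistic model, the starting step of 1.0, and the tolerance of 1e-3 are my own assumptions, not from the thread): keep halving dt and comparing the dt and dt/2 trajectories at shared time points until the maximum change is negligible.

```python
import numpy as np

def simulate(dt, t_end=10.0):
    """Euler-integrate a toy logistic model dx/dt = x(1 - x) with step dt."""
    n = int(round(t_end / dt))
    x, traj = 0.1, [0.1]
    for _ in range(n):
        x += dt * x * (1.0 - x)
        traj.append(x)
    return np.array(traj)

def dt_halving_test(dt0=1.0, tol=1e-3, max_halvings=20):
    """Halve dt until the max trajectory change is below tol (the dt/2 test).

    The dt and dt/2 runs are compared at the coarse run's time points."""
    dt, coarse = dt0, simulate(dt0)
    for _ in range(max_halvings):
        fine = simulate(dt / 2.0)
        diff = np.max(np.abs(fine[::2] - coarse))  # every 2nd fine point
        if diff < tol:
            return dt, diff
        dt, coarse = dt / 2.0, fine
    return dt, diff

dt, diff = dt_halving_test()
print(dt, diff)
```

Note this only checks one trajectory; for a stochastic model you'd compare distributions of trajectories across replications, as Tom's caveat suggests.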
I believe this time step is a modeling choice, i.e. the timestep is the output (which makes sense, as we decrease it until outcomes stabilize). But what is the input? I think the precision of our decision should play a role, but I don't know in what format, or how to elicit the modeler's tolerance. This vignette on designing importance sampling for an ODE solver illustrates the idea:
"Finally we can study whether the solver we used during MCMC (midpoint(2)) was accurate enough. This is done by solving the system using increasingly more numbers of steps in the solver, and studying different metrics computed using the ODE solutions and corresponding likelihood values."
```r
solvers <- midpoint_list(c(4, 6, 8, 10, 12, 14, 16, 18))
rel <- sho_fit_post$reliability(solvers = solvers)
print(rel$metrics)
#>    pareto_k    n_eff     r_eff mad_loglik mad_odesol
#> 1 0.1176621 2568.697 0.6551239   1.151924 0.05033225
#> 2 0.1163435 2516.645 0.6481001   1.383171 0.05954904
#> 3 0.1191357 2496.489 0.6455019   1.465447 0.06276090
#> 4 0.1168255 2486.756 0.6442629   1.503758 0.06424357
#> 5 0.1184567 2481.337 0.6435767   1.524628 0.06504748
#> 6 0.1155186 2478.024 0.6431573   1.537231 0.06553156
#> 7 0.1143430 2475.851 0.6428822   1.545418 0.06584543
#> 8 0.1130321 2474.350 0.6426921   1.551034 0.06606043
```
"The mad_odesol and mad_loglik are the maximum absolute difference in the ODE solutions and log likelihood, respectively, over all MCMC draws. The former is denoted MAE in Timonen et al. (2022). Please refer to that paper in order to interpret the pareto_k column. Briefly we note that the Pareto-k values seem to be converging to a value smaller than 0.5, meaning that importance sampling is possible and we don’t need to run MCMC again."
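The mad_loglik/mad_odesol idea can be mimicked outside R. Here is an illustrative Python sketch (the toy exponential-decay model, the noise level, and the stand-in "posterior draws" are all my own assumptions, not the vignette's actual model): for each draw, solve with a coarse midpoint solver and a much finer reference, then take the maximum absolute difference in the solution and in the log-likelihood over all draws.

```python
import numpy as np

rng = np.random.default_rng(0)

def midpoint_solve(f, x0, t_grid):
    """Explicit midpoint method on a fixed time grid."""
    xs = [x0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h, x = t1 - t0, xs[-1]
        xs.append(x + h * f(t0 + 0.5 * h, x + 0.5 * h * f(t0, x)))
    return np.array(xs)

# Toy model (my assumption): dx/dt = -theta * x, observed with noise.
obs_t = np.linspace(0.0, 5.0, 6)
obs_y = np.exp(-0.7 * obs_t) + rng.normal(0.0, 0.05, size=6)

def loglik(theta, n_steps):
    """Gaussian log-likelihood with n_steps solver steps per observation gap."""
    t_grid = np.linspace(0.0, 5.0, 5 * n_steps + 1)
    sol = midpoint_solve(lambda t, x: -theta * x, 1.0, t_grid)
    yhat = sol[::n_steps]            # solution at the observation times
    return -0.5 * np.sum(((obs_y - yhat) / 0.05) ** 2), yhat

draws = rng.normal(0.7, 0.05, size=100)   # stand-in for posterior draws

metrics = {}
for n in (2, 4, 8, 16):
    # max abs differences vs a much finer (64-step) reference, over all draws
    d_ll, d_sol = [], []
    for theta in draws:
        ll_c, y_c = loglik(theta, n)
        ll_f, y_f = loglik(theta, 64)
        d_ll.append(abs(ll_c - ll_f))
        d_sol.append(np.max(np.abs(y_c - y_f)))
    metrics[n] = (max(d_ll), max(d_sol))  # (mad_loglik, mad_odesol)
    print(n, metrics[n])
```

Both metrics shrink as n grows, which is the convergence behavior the vignette uses to decide whether the solver used during MCMC was accurate enough (the Pareto-k diagnostic itself requires the importance-sampling machinery of the actual package and is not reproduced here).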
In summary, just as Euler integration with a fine-grained timestep can approximate a continuous function more closely than the RK45 method with a coarser one (i.e. the timestep is everything), I feel some "reverse-engineering" is needed.
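That claim about order versus step size is easy to check numerically. A minimal sketch (my own toy check, not from the thread; a fixed-step classical RK4 stands in for RK45 here): on dx/dt = -x, forward Euler with a tiny step lands closer to the exact solution than RK4 with a coarse step.

```python
import numpy as np

def euler(f, x0, t_end, dt):
    """Forward Euler to t_end with fixed step dt (1st order)."""
    x = x0
    for i in range(int(round(t_end / dt))):
        x += dt * f(i * dt, x)
    return x

def rk4(f, x0, t_end, dt):
    """Classical fixed-step RK4 to t_end (4th order)."""
    x = x0
    for i in range(int(round(t_end / dt))):
        t = i * dt
        k1 = f(t, x)
        k2 = f(t + dt / 2, x + dt / 2 * k1)
        k3 = f(t + dt / 2, x + dt / 2 * k2)
        k4 = f(t + dt, x + dt * k3)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

f = lambda t, x: -x                                  # exact solution: exp(-t)
exact = np.exp(-5.0)
err_euler = abs(euler(f, 1.0, 5.0, 1e-4) - exact)    # tiny step, low order
err_rk4 = abs(rk4(f, 1.0, 5.0, 0.5) - exact)         # coarse step, high order
print(err_euler, err_rk4)
```

Of course the higher-order method wins at equal step size or equal cost; the point is only that the timestep, not the method name, is what ultimately bounds accuracy.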