Tuning hyper-parameters based on unified Bayesian and SD taxonomy #5
Replies: 2 comments 3 replies
-
The above table is for static Bayesian inference, but since SD (dynamic Bayesian inference) uses time-series data both as input and as generated output, I extended the definition as follows. An example of the above classification in statistical terms is given below. May I ask for some feedback, @tomfid?
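To make the static vs. dynamic distinction concrete, here is a minimal sketch (the one-stock decay model, the flat-prior grid posterior, and all names are illustrative assumptions, not from this project): in the static case the likelihood treats observations as i.i.d. given the parameter, while in the SD case the parameter enters a difference equation that generates the whole time series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Static Bayesian inference: i.i.d. observations given the parameter.
# y_i ~ Normal(mu, 1), flat prior, posterior explored on a grid.
def static_loglik(mu, y):
    return -0.5 * np.sum((y - mu) ** 2)

y_iid = rng.normal(2.0, 1.0, size=50)
grid_mu = np.linspace(0.0, 4.0, 201)
mu_hat = grid_mu[np.argmax([static_loglik(m, y_iid) for m in grid_mu])]

# Dynamic (SD-style) Bayesian inference: the parameter drives a stock-flow
# difference equation, and the data are the resulting time series.
# stock[t+1] = stock[t] + dt * (-theta * stock[t])
def simulate(theta, stock0=100.0, dt=0.25, n_steps=40):
    stocks = [stock0]
    for _ in range(n_steps - 1):
        stocks.append(stocks[-1] + dt * (-theta * stocks[-1]))
    return np.array(stocks)

def dynamic_loglik(theta, y, sigma=1.0):
    return -0.5 * np.sum(((y - simulate(theta, n_steps=len(y))) / sigma) ** 2)

y_ts = simulate(0.3) + rng.normal(0.0, 1.0, size=40)  # synthetic "observed" series
grid_th = np.linspace(0.05, 0.6, 112)
theta_hat = grid_th[np.argmax([dynamic_loglik(t, y_ts) for t in grid_th])]
print(round(mu_hat, 1), round(theta_hat, 2))
```

The point of the contrast: in the dynamic case the likelihood of every observation depends on the same simulated trajectory, so integration settings (like `dt` above) become hyper-parameters of the inference itself.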
-
Code-wise steps and inputs/outputs for three Bayesian checks (functions): prior predictive check (
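As a sketch of what the first of those checks could look like in code (the one-stock decay model and the lognormal prior are illustrative placeholders, not the actual model under discussion): the input is the prior alone, with no observed data, and the output is an ensemble of generated trajectories to inspect for plausibility.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-stock SD model: exponential decay via Euler steps.
def simulate(decay_rate, stock0=100.0, dt=0.5, n_steps=20):
    stocks = [stock0]
    for _ in range(n_steps - 1):
        stocks.append(stocks[-1] + dt * (-decay_rate * stocks[-1]))
    return np.array(stocks)

# Prior predictive check:
#   input  = prior over parameters (no observed data involved)
#   output = ensemble of trajectories generated from prior draws
def prior_predictive(n_draws=500):
    draws = rng.lognormal(mean=-1.5, sigma=0.5, size=n_draws)  # prior on decay_rate
    return np.array([simulate(d) for d in draws])

trajectories = prior_predictive()
print(trajectories.shape)  # (500, 20)
# A modeler would now plot these and ask: do the generated trajectories stay
# in a physically plausible range given what the stock represents?
```

The posterior predictive check has the same output shape but conditions the parameter draws on data, which is why the checks are naturally factored into separate functions sharing one simulator.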
-
Goal
To figure out the best way to tune the following hyper-parameters and find useful resources for each. I left a link for each discussion (#12, #7, #9, #11) as they need active development, so please leave comments in each link! If you think of any other tunable parameters, please leave a comment so that I can allocate a new discussion thread for it.
1. Typify the model as PA or PAD based on the research purpose in #12
2. Typify parameters as assumed, assumed time-series, or estimated for testing in #7
3. Tune the time step for the integration approximator in #9
4. Tune the window (the number of time steps) for the behavior classifier (or pattern recognizer) in #11
Items 1~4 are all tuning problems; the only difference is whether a pre-determined category or range exists. In 1 and 2 we are classifying into two or three buckets, whereas for 3 and 4 we have a larger number of choices, infinite in fact.
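For item 3, a minimal illustration of the trade-off being tuned (Euler integration of a first-order decay model; the model and constants are assumed for illustration): comparing against the analytic solution shows the error shrinking roughly in proportion to the time step, which is what makes the time step a continuously tunable hyper-parameter.

```python
import math

# dy/dt = -k * y has the analytic solution y(t) = y0 * exp(-k * t),
# so we can measure the Euler error at t_end for several time steps.
def euler_final(k, y0, t_end, dt):
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y += dt * (-k * y)
        t += dt
    return y

k, y0, t_end = 0.5, 1.0, 1.0
exact = y0 * math.exp(-k * t_end)
errors = {dt: abs(euler_final(k, y0, t_end, dt) - exact) for dt in (0.5, 0.25, 0.125)}
for dt, err in errors.items():
    print(dt, round(err, 5))
# Euler is first-order: halving dt roughly halves the error,
# at the cost of twice as many steps per unit of simulated time.
```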
Alternative names for the above are: 1: including real data or not (theoretical vs. empirical); 2: parameter of interest or of influence; 3: implementation of the approximator; 4: resolution of the classifier. All notation will follow (from the above paper in ###1):
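For item 4, a toy sketch of what the window hyper-parameter controls (the slope-sign classifier here is an illustrative stand-in, not the actual pattern-recognition method under discussion): the same series gets different behavior labels depending on how many recent time steps the classifier sees.

```python
import numpy as np

# A series that falls steeply for 10 steps, then recovers partway.
series = np.concatenate([np.linspace(20, 5, 10), np.linspace(5, 10, 10)])

def classify_last_window(y, window):
    """Label the most recent `window` points by the sign of a fitted slope."""
    seg = y[-window:]
    slope = np.polyfit(np.arange(window), seg, 1)[0]
    return "growth" if slope > 0 else "decline"

# A short window sees only the recent recovery; a window spanning the whole
# series is dominated by the earlier steep fall, so the label flips.
print(classify_last_window(series, 5))    # growth
print(classify_last_window(series, 20))   # decline
```

This is why the window is a resolution parameter: shorter windows detect local behavior modes, longer windows smooth them into a coarser overall pattern.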