
Compute likelihood via particle filter #14

Merged: 4 commits into main, May 21, 2024

Conversation

Member

@richfitz richfitz commented May 17, 2024

Merge after #8; this PR contains those commits.

We'll get better statistical tests on this later, but some poking about with it suggests we're correct :-/

For the future tests, we have a model where we have derived the likelihood a different way, and we can show the filter converges to it. But that needs support for compiling models as we go, and that's a few PRs away.

This PR implements the same particle filter algorithm as used in dust1 and mcstate1:

  1. advance the model up to the next data point
  2. compare particles to the data
  3. convert the log likelihoods into a set of weights (this requires moving from the log scale to the natural scale, which is half of the gross bookkeeping; see scale_log_weights)
  4. resample using the "systematic sampling" scheme, which samples particles, with replacement, in proportion to the weights computed in step 3 (this is the other half of the gross bookkeeping; see resample_weight - tbh I forget exactly how this works, but the R reference implementation in test-filter-details.R is actually very simple)
  5. accumulate the log likelihood, which at each step is the log of the average likelihood over particles (not the average of the log likelihoods)
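The weight-scaling, resampling and accumulation steps above can be sketched as follows. This is not the dust2/mcstate implementation, just a minimal NumPy sketch; the function names echo scale_log_weights and the systematic resampler described above, but all details here are illustrative.

```python
import numpy as np

def scale_log_weights(log_weights):
    """Step 3: convert per-particle log likelihoods into normalised
    weights on the natural scale, using the log-sum-exp trick for
    stability. Also returns the log of the average likelihood, which
    is what gets accumulated in step 5."""
    m = np.max(log_weights)
    w = np.exp(log_weights - m)
    return w / np.sum(w), m + np.log(np.mean(w))

def resample_systematic(weights, u):
    """Step 4: systematic resampling. A single uniform draw u in
    [0, 1) places n evenly spaced points over the cumulative weights;
    each point selects one particle, so particles are sampled with
    replacement in proportion to their weight."""
    n = len(weights)
    cum = np.cumsum(weights)
    points = (u + np.arange(n)) / n
    return np.searchsorted(cum, points)

# One filter step over toy data:
rng = np.random.default_rng(1)
log_lik_step = rng.normal(-2, 0.5, size=100)        # step 2 output
weights, log_avg = scale_log_weights(log_lik_step)  # step 3
idx = resample_systematic(weights, rng.uniform())   # step 4
log_likelihood = log_avg                            # step 5: sum over steps
```

Note a property of systematic resampling visible here: because the n points are evenly spaced from one draw, the returned indices are sorted and the scheme has lower variance than independent multinomial resampling.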

Some future features are flagged in the code:

  • early exit for filter invocations that will not be accepted
  • save trajectories as the filter proceeds
  • save snapshots at some time points in the simulation

The RNG seeding is complex, and different from the way that we do this in dust1:

  • every filter gets its own rng, with the aim that a filter that computes multiple parameter sets at once could later be split apart. This frees us up to parallelise in different ways (e.g., OpenMP, MPI or something else). This means for n particles and m parameter sets we need (n + 1) * m streams
  • we will pull these from the rng state by jumping, so that the first n + 1 streams go to the first parameter set; within that block the first stream goes to the filter and the remaining n go to the model
  • I've written a state extraction routine which checks this, and a test that shows you get the same answer running a filter with multiple parameter sets as a set of filters seeded appropriately
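The stream layout described above can be sketched with a small indexing helper. The function name and signature are hypothetical, not from the dust2 code; the point is just the block structure: parameter set j owns a contiguous block of n + 1 streams, the first for the filter and the rest for the particles.

```python
def stream_index(param_set, particle, n_particles):
    """Index of an RNG stream under the layout described above:
    parameter set j owns streams j*(n+1) .. j*(n+1)+n. The first
    stream in the block drives the filter itself; the remaining n
    drive the particles. particle=None selects the filter's stream.
    (Hypothetical helper for illustration only.)"""
    base = param_set * (n_particles + 1)
    if particle is None:
        return base             # the filter's own stream
    return base + 1 + particle  # stream for this particle

# For n = 4 particles, parameter set 1 owns streams 5..9:
assert stream_index(1, None, 4) == 5
assert [stream_index(1, i, 4) for i in range(4)] == [6, 7, 8, 9]
```

Because streams are reached by jumping a single seeded generator, running one filter over m parameter sets and running m separately seeded filters land on the same streams, which is what the test mentioned below checks.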

@richfitz richfitz marked this pull request as ready for review May 20, 2024 07:57
@richfitz richfitz requested a review from weshinsley May 20, 2024 07:57
Contributor

@weshinsley weshinsley left a comment


Talked through, and think I get it...

@weshinsley weshinsley merged commit 49624b8 into main May 21, 2024
7 checks passed