Add MMD verification in CT Hammer #167
I'm happy with the shape of this, nicely done. There are a few issues around parallelism that give me some concern though, so PTAL at my comments. We can discuss 1:1 if you want more details.
internal/hammer/loadtest/workers.go
Outdated
}

// entryFromChain generates an Entry from a chain and timestamp.
// copied from certificate-transparency-go/serialization.go
Just to keep track of things, can you add a comment / TODO to reflect that it comes from handlers.go?
Added the comment.
internal/hammer/loadtest/workers.go
Outdated
// Verify MMD timestamp.
sec := int64(leafMMD.timestamp / 1000)
nsec := int64((leafMMD.timestamp % 1000) * 1000000)
if time.Unix(sec, nsec).Add(v.mmdDuration).Before(time.Now()) {
I think you can use time.UnixMilli directly with leafMMD.timestamp.
Good call, updated.
internal/hammer/loadtest/workers.go
Outdated
@@ -258,8 +258,7 @@ func (w *LogWriter) Run(ctx context.Context) {
 	select {
 	case w.leafMMDChan <- LeafMMD{chain, index, timestamp}:
 	default:
-		// Drop if leafMMDChan is full. This could happen if the number
-		// of MMD verifiers is smaller than the number of writers.
+		// Drop if leafMMDChan is full. This could happen if the MMD verifiers are falling behind.
Can you log something to make sure this doesn't happen too silently? Either here, or at read time if this is more convenient.
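A minimal sketch of the non-blocking send with a log line on the drop path, so the drop isn't silent. The LeafMMD fields and the log message here are illustrative stand-ins, not the PR's actual code:

```go
package main

import "fmt"

// LeafMMD is a stand-in for the hammer's type; fields are illustrative.
type LeafMMD struct {
	index     uint64
	timestamp uint64
}

func main() {
	leafMMDChan := make(chan LeafMMD, 1)
	leafMMDChan <- LeafMMD{index: 0, timestamp: 1}

	// Non-blocking send: drop the entry when the channel is full,
	// but emit a log line so the operator can see it happening.
	select {
	case leafMMDChan <- LeafMMD{index: 1, timestamp: 2}:
	default:
		fmt.Println("leafMMDChan full, dropping leaf; MMD verification may be lagging")
	}
}
```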
Added the log when the leafMMDChan is full.
internal/hammer/loadtest/hammer.go
Outdated
}

func NewHammer(tracker *client.LogStateTracker, f client.EntryBundleFetcherFunc, w LeafWriter, gen func() []byte, seqLeafChan chan<- LeafTime, errChan chan<- error, opts HammerOpts) *Hammer {
	readThrottle := NewThrottle(opts.MaxReadOpsPerSecond)
	writeThrottle := NewThrottle(opts.MaxWriteOpsPerSecond)

	leafMMDChan := make(chan LeafMMD, opts.NumWriters)
Should this be higher? It looks to me like this channel will fill up, new leafMMD entries won't be added to it, and this behavior would go unnoticed. As we spoke about, logging a message each time a leafMMD is dropped would spam the logs when MMDVerifiers are not enabled. I think that would be fine, provided that the hammer should always have them on. Alternatively, you could compare the length and capacity of the channel at read time to tell whether or not it's full, and indicate to the operator that they might want to increase the channel size?
If there aren't enough MMDVerifier workers (which could be the case because integration is slower than sequencing) and the channel gets full, you might end up always verifying MMDs after MMD+X time has passed, and therefore never detect an MMD violation that falls within [0; X].
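One way the read-time check suggested above could look, sketched with a plain buffered channel (the message and threshold are assumptions, not the PR's code):

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 4)
	for i := 0; i < 4; i++ {
		ch <- i
	}

	// At read time, compare length to capacity: if the channel is full,
	// the producer side is likely dropping entries right now.
	if len(ch) == cap(ch) {
		fmt.Println("channel at capacity; consider a larger buffer or more verifiers")
	}

	v := <-ch
	fmt.Println(v, len(ch), cap(ch)) // → 0 3 4
}
```

len and cap are safe to call on a channel from any goroutine, though the result is only a snapshot; that's fine for an operator warning.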
Bumped the leafMMDChan buffer to a much larger size. I've stopped logging when the number of MMD verifiers is zero.
	randomReaders := NewWorkerPool(func() Worker {
		return NewLeafReader(tracker, f, RandomNextLeaf(), readThrottle.TokenChan, errChan)
	})
	fullReaders := NewWorkerPool(func() Worker {
		return NewLeafReader(tracker, f, MonotonicallyIncreasingNextLeaf(), readThrottle.TokenChan, errChan)
	})
	writers := NewWorkerPool(func() Worker {
-		return NewLogWriter(w, gen, writeThrottle.TokenChan, errChan, seqLeafChan)
+		return NewLogWriter(w, gen, writeThrottle.TokenChan, errChan, seqLeafChan, leafMMDChan)
In a followup PR, you could also add a leafMMDChan to randomReaders. MMDVerifier really does two things:
1. checks that input matches the hashes in the tree
2. makes sure that 1. passes within the MMD

From writer, input is what the hammer has generated, and that's great! After that check, it would still be possible for leaf bundles and L0 hashes not to match.
From reader, input would be leaf bundles read from the tree. Leaf bundles and L0 hashes must match for this test to pass.
}
-	index, err := parseAddChainResponse(body)
+	index, timestamp, err := parseAddChainResponse(body)
Maybe in a followup PR, you could check here that the SCT matches with the request.
… worker unexpectedly
Towards #112, #113.