Information

  • Time: 13:30 -
  • Attendees: Bob Zhang (Supervisor), Huang Yanzhen, Mai Jiajun

Discussion Summary

✨ What we've done

  • We applied per-channel normalization to the data samples.
  • We added a batch-normalization layer after all the convolution layers (before activation).
  • We replaced the original MaxPool3D layer with our custom ResPool layer, which adds the batch-normalized data back to the original input (hence the name Residual Pooling layer). Unlike 3D max-pooling, which discards 87.5% of the original information, ResPool only emphasizes the local maxima on top of the original signal, boosting the model's learning performance.
  • We removed the 4th convolution layer (C=32 -> C=32) and added a fully-connected layer to help prevent overfitting (see the sketch after this list).
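
To make the four changes above concrete, here is a minimal PyTorch sketch. Only the facts stated in the list are taken from the meeting (batch norm placed after each convolution and before the activation, ResPool adding batch-normed data back to its input, the removed C=32 -> C=32 convolution, the added fully-connected layer); everything else is an assumption. In particular, the names per_channel_normalize, ResPool, and GestureNet, the channel widths, kernel sizes, ReLU activations, the global-average-pooling head, and the exact way ResPool combines the pooled maxima with its input are illustrative guesses, not our actual implementation.

```python
import torch
import torch.nn as nn


def per_channel_normalize(x, eps=1e-6):
    """Normalize a batch of volumes (N, C, D, H, W) to zero mean and
    unit variance independently for each channel."""
    mean = x.mean(dim=(0, 2, 3, 4), keepdim=True)
    std = x.std(dim=(0, 2, 3, 4), keepdim=True)
    return (x - mean) / (std + eps)


class ResPool(nn.Module):
    """One possible reading of the Residual Pooling layer: take a local
    max (stride 1, so no positions are discarded), batch-normalize it,
    and add it back onto the untouched input, emphasizing local maxima
    instead of replacing the signal the way a strided MaxPool3D would."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.pool = nn.MaxPool3d(kernel_size, stride=1, padding=kernel_size // 2)
        self.bn = nn.BatchNorm3d(channels)

    def forward(self, x):
        return x + self.bn(self.pool(x))  # residual add of the pooled maxima


class GestureNet(nn.Module):
    """Sketch of the revised stack: Conv3d -> BatchNorm3d -> activation,
    ResPool in place of MaxPool3D, the 4th (C=32 -> C=32) convolution
    removed, and a fully-connected layer added before the classifier."""

    def __init__(self, in_channels=1, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1),
            nn.BatchNorm3d(8),          # batch norm before activation
            nn.ReLU(inplace=True),
            ResPool(8),

            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            ResPool(16),

            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            ResPool(32),
            # the former 4th convolution (C=32 -> C=32) is omitted here
        )
        self.gap = nn.AdaptiveAvgPool3d(1)       # collapse spatial dims
        self.fc = nn.Sequential(                 # the newly added FC layer
            nn.Linear(32, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.gap(x).flatten(1)
        return self.fc(x)


# Hypothetical usage: two single-channel 8x32x32 clips, ten gesture classes.
clips = per_channel_normalize(torch.randn(2, 1, 8, 32, 32))
logits = GestureNet(in_channels=1, num_classes=10)(clips)
```

Because the pooling in this sketch uses stride 1, the residual add is shape-safe and keeps every spatial position; a strided variant would need to upsample the pooled maxima before adding them back.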

👍 What's improved

  • Training: the minimum validation loss dropped from 19.0367 (after 200 epochs) to 11.0318 (after 25 epochs).
  • Application: the after-detection bias correction is no longer needed.

👀 Performance

Training Performance

Application Performance - Average computation time per frame (seconds)