
some tricks or questions for training model #35

Open
A-Rain opened this issue Mar 18, 2020 · 0 comments

A-Rain commented Mar 18, 2020

This repository is really remarkable! Recently I have also been trying to re-implement the pointer-generator and a transformer-based summarization model, and there are some issues I would like to discuss with you. I am trying to train on CNN/DM with the transformer ("Attention Is All You Need" version):

  • I set batch_size=16 and lr=0.15 with Adagrad, just as in the pointer-generator setup. After training for 27 epochs, the training loss is 3.96 and the dev loss is 3.95. Is this normal? The loss is also decreasing really slowly.
  • In decoding mode, which decoding strategy do you think is better: top-k/top-p sampling or beam search? (A small sketch of the top-k/top-p filtering I mean follows this list.)
  • Are you using label smoothing in the loss function? I find that a lot of similar code uses label smoothing (a sketch of what I mean is also below).
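For reference, this is roughly what I mean by top-k/top-p (nucleus) filtering at each decoding step. It is only a PyTorch sketch under my own assumptions (the helper name `top_k_top_p_filtering` is mine), not code from this repository:

```python
import torch

def top_k_top_p_filtering(logits, top_k=0, top_p=1.0, filter_value=-float("inf")):
    """Filter a (vocab_size,) logits vector for one decoding step.

    Hypothetical helper for discussion, not part of this repo.
    """
    if top_k > 0:
        # keep only the top_k highest-scoring tokens
        kth_best = torch.topk(logits, top_k).values[-1]
        logits = logits.masked_fill(logits < kth_best, filter_value)
    if top_p < 1.0:
        # keep the smallest set of tokens whose cumulative probability exceeds top_p
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
        remove = cum_probs > top_p
        remove[1:] = remove[:-1].clone()  # shift right so the first token over the threshold is kept
        remove[0] = False
        logits[sorted_idx[remove]] = filter_value
    return logits

# usage: next_token = torch.multinomial(torch.softmax(filtered_logits, dim=-1), 1)
```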
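And for the label-smoothing question, this is the kind of smoothed cross-entropy I have seen in similar transformer summarization code. Again, just a PyTorch sketch with names of my own (e.g. `label_smoothed_nll_loss`), not the loss actually used here:

```python
import torch
import torch.nn.functional as F

def label_smoothed_nll_loss(logits, target, eps=0.1, ignore_index=0):
    """Token-level cross-entropy with label smoothing.

    logits: (num_tokens, vocab_size), target: (num_tokens,)
    A sketch for discussion, not this repository's loss.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # negative log-likelihood of the gold token
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    # smoothing term: uniform distribution over the vocabulary
    smooth = -log_probs.mean(dim=-1)
    pad_mask = target.eq(ignore_index)
    nll = nll.masked_fill(pad_mask, 0.0)
    smooth = smooth.masked_fill(pad_mask, 0.0)
    loss = (1.0 - eps) * nll + eps * smooth
    return loss.sum() / (~pad_mask).sum().clamp(min=1)
```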