Never ending training #22
Comments
Set your warmup steps to 10 percent of the total number of iterations required. In my case 15,000 helped, but please check.
I ran the inference code, but I don't know how to produce a summary. Should I POST the original story through Postman so that it gives back a summary?
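For anyone testing this over HTTP, here is a minimal client sketch. The endpoint URL, port, and JSON field names (`text`, `summary`) are assumptions for illustration, not this repo's documented API; adjust them to whatever the inference server actually exposes.

```python
import requests

# Hypothetical client for an HTTP inference server; the URL, port, and JSON
# field names ("text", "summary") are assumptions, not this repo's actual API.
with open("sample_story.txt") as f:
    story = f.read()

resp = requests.post("http://localhost:5000/summarize", json={"text": story})
print(resp.json().get("summary"))
```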
Where exactly can I set that?
In config.py you would have:
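As a rough sketch of the idea, assuming hypothetical key names (`train_steps`, `warmup_steps`, `max_eval`); check the actual config.py for the real ones:

```python
# config.py -- hypothetical key names; check the actual file for the exact ones.
train_steps = 150000              # total optimizer steps planned
warmup_steps = train_steps // 10  # ~10% of total, per the advice above (15,000)
max_eval = 1000                   # cap on evaluation iterations
batch_size = 32
```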
When I put in low numbers (steps = 10, warmup steps = 10, max eval = 10), the iteration count for epoch 0 still runs past 150. Could you help clarify how those numbers are interlinked?
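For context, here is a sketch of how warmup steps typically interact with the step counter in Transformer training, assuming the common Noam schedule (whether this repo uses exactly this formula is an assumption). The key point is that warmup_steps only shapes the learning-rate curve; iterations per epoch are set by dataset size and batch size, which would explain epoch 0 running past 150 iterations regardless.

```python
# Noam-style schedule (Vaswani et al.): LR rises linearly for warmup_steps,
# then decays as the inverse square root of the step count.
def noam_lr(step, warmup_steps, d_model=512, factor=1.0):
    step = max(step, 1)
    return factor * d_model ** -0.5 * min(step ** -0.5,
                                          step * warmup_steps ** -1.5)

# warmup_steps does not shorten an epoch; iterations per epoch come from the
# data, e.g. roughly:
iters_per_epoch = 287_000 // 32   # ~8,968 batches for CNN/DailyMail train
```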
Hello, I adopted the default settings and obtained ROUGE-1/2/L: 39.29/17.30/27.10. The ROUGE-L result in particular is terrible. I trained on 1 GPU for 3 days, 170k steps in total with batch size = 32.
I am following the default settings, but after the second epoch training is taking too long. Does anyone else face the same problem?
I'm running your code on the CNN/DailyMail dataset.
However, training never ends, displaying:
with X growing more and more. I waited a long time, then killed the process.
But now, when I run the inference code, the produced summary is very bad. Example:
What did I do wrong? Has anyone in the same situation managed to fix the code? (@Vibha111094)