The memory usage skyrockets each time it saves #70
Seems to occur in the calls to:

```lua
if minMeanError == nil or errors:mean() < minMeanError then
    print("\n(Saving model ...)")
    params, gradParams = nil, nil
    collectgarbage()
    torch.save("data/model.t7", model)
    collectgarbage()
    minMeanError = errors:mean()
end
```

I then added
Thanks for your simple solution, @Namburgesas. Hope there's a fix in the future.
Did you try calling clearState() before using model:float()? It clears the intermediate states in the model (not needed for prediction).
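For reference, a minimal sketch of that suggestion, assuming a standard torch/nn module; the toy model and file path here are illustrative, not taken from the repo's train.lua:

```lua
require 'nn'

-- toy model just for illustration
local model = nn.Sequential():add(nn.Linear(10, 10))
model:forward(torch.randn(10))   -- fills the intermediate output/gradInput buffers

-- clearState() drops those intermediate buffers (not needed for prediction),
-- so the serialized file and the memory used while serializing stay small
model:clearState()
torch.save("data/model.t7", model)
collectgarbage()
```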
And it doesn't free the memory. I executed this bash command.
In the first epoch it consumed 2 GiB of RAM, in the second 5 GiB, then 10 GiB, and finally my memory was full at the 11th epoch. (My computer has 32 GiB of RAM.)
The issue disappeared when I commented out lines 156 to 171 in train.lua (the RAM usage then stays at 1.2 GiB).
So I conclude the saving process may be the problem.
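One way to check that hypothesis is to print the process's resident set size right around the torch.save call; reading it from /proc (Linux only) also captures tensor storage that lives outside the Lua heap. This is a hypothetical helper, not code from the repo, and it assumes a `model` variable is in scope:

```lua
-- Hypothetical helper: report the process's resident set size in MB (Linux only).
local function rssMB()
    for line in io.lines("/proc/self/status") do
        local kb = line:match("^VmRSS:%s+(%d+)%s+kB")
        if kb then return tonumber(kb) / 1024 end
    end
    return 0
end

print(("RSS before save: %.1f MB"):format(rssMB()))
torch.save("data/model.t7", model)
collectgarbage()
print(("RSS after save:  %.1f MB"):format(rssMB()))
```

If the "after" number keeps climbing epoch after epoch, the serialization step is indeed where the memory is being retained.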