
MT5ForConditionalGeneration has model.config.max_length=20 by default. Why? #16664

Closed
JoaoLages opened this issue Apr 8, 2022 · 9 comments

@JoaoLages
Contributor

JoaoLages commented Apr 8, 2022

  • transformers version: 4.6.1
  • Platform: Ubuntu 18
  • Python version: 3.6

I spent one week training a T5 model with this package and couldn't figure out why the sequences obtained with Trainer.evaluate yielded a maximum of 20 tokens, even though I had passed the max_length argument to the tokenizer when encoding the input/output.
After a long time I found out that this is what happens:

```python
from transformers import MT5ForConditionalGeneration

model = MT5ForConditionalGeneration.from_pretrained('google/mt5-small')
model.config.max_length
# Out: 20
```

The generate method was being used in Trainer because I had set predict_with_generate=True.
Please change this behaviour; it was a very hard bug to find. model.config.max_length should default to None if the model has no such limitation.
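For reference, a minimal sketch of two ways to lift the 20-token cap in this setup; generation_max_length on Seq2SeqTrainingArguments is assumed to be available in the installed version (it is not present in very old releases), and 512 is an arbitrary example value:

```python
from transformers import MT5ForConditionalGeneration, Seq2SeqTrainingArguments

model = MT5ForConditionalGeneration.from_pretrained('google/mt5-small')

# Option 1: override the config default so generate() -- and Trainer.evaluate()
# with predict_with_generate=True -- is no longer capped at 20 tokens.
model.config.max_length = 512

# Option 2 (assumed available in the installed version): set the generation
# length explicitly in the training arguments.
args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    generation_max_length=512,
)
```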

@gante
Member

gante commented Apr 11, 2022

Hi @JoaoLages 👋 Tagging @patrickvonplaten for the discussion.

Context for this discussion: We have had similar problems in the past, related to a small default max_length (e.g. #16622). We are now adding proper checks for the min_length argument (it must be smaller than max_length, #16668).

The issue is a double-edged sword. With no default, we risk generating beyond the model's input size, which may be undesirable. If we set max_length to the model's input size by default, we avoid the previous problem but generate() will still likely need significant resources (compute and memory), which may be frustrating to new users. With a small default, we run into the problem you described.

I agree that the current state is not good enough, but increasing the default max_length may open a Pandora's box of problems. Would it help if the console showed why generation stopped? (which is one of the following: max length reached, all sentences finished and early stopping was active, or no further score improvement was possible)

Alternatively, we can increase the default to the model's input size, and rely on early stopping to ensure generation does not drag for long by default -- WDYT @patrickvonplaten? (we could add a tqdm-like progress bar so users don't think the process died)
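As a rough sketch of the "why did generation stop" idea, a user can already apply a simple heuristic today (the prompt text below is only an illustrative input): if the returned sequences fill max_length exactly, the length limit was almost certainly the stopping reason.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

inputs = tokenizer("Translate to German: Hello world", return_tensors="pt")
outputs = model.generate(**inputs)  # silently falls back to config.max_length (20)

# Heuristic: sequences that fill max_length exactly were most likely cut off.
if outputs.shape[-1] >= model.config.max_length:
    print("Generation likely stopped because max_length was reached.")
```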

@JoaoLages
Contributor Author

> Would it help if the console showed why generation stopped?

It would still be hard to figure out why max_length was reached.

I still think it's best to have max_length default to the maximum length the model can handle. A warning could be shown to make it clear that this behaviour is in effect and that it may consume a lot of resources.
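A hypothetical sketch of such a warning (generate_with_length_warning is an illustrative helper name, not an existing API):

```python
import warnings

def generate_with_length_warning(model, **kwargs):
    # Hypothetical wrapper: warn when the caller silently relies on the
    # config default instead of choosing a generation length explicitly.
    if "max_length" not in kwargs and "max_new_tokens" not in kwargs:
        warnings.warn(
            f"No generation length given; using model.config.max_length="
            f"{model.config.max_length}, which may truncate outputs."
        )
    return model.generate(**kwargs)
```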

@patrickvonplaten
Contributor

Sadly we cannot change this default anymore due to backward compatibility. Always having the model generate up to the maximum allowed number of tokens can also be tricky: multiple models would always error out due to memory, and some models like T5 have no real max length, ... so I think we'll have to leave it at 20. Maybe we can improve the docs somehow.

@JoaoLages
Contributor Author

JoaoLages commented Apr 12, 2022

I suspect that no one is using early stopping with these models because "it doesn't work", and that this is due to this silent behaviour 😅

@gante
Member

gante commented Apr 12, 2022

These legacy choices are painful, but changing them also causes many issues for downstream users 💔

@JoaoLages I've noted three action points for myself:

  1. Keep an eye on related issues, and bring up the discussion around this argument's default if this becomes a routine problem;
  2. Try to log the stopping criteria (as discussed above), which may help to raise awareness around this argument;
  3. Work on the documentation, which I was going to do anyway :D

Is there anything else you believe we can do to improve generate()?

@JoaoLages
Contributor Author

Not really, generate works perfectly; it was just this silent, small max_length setting :)
Some warnings/log messages would be better than nothing!

@gante
Member

gante commented Apr 12, 2022

(going to keep the issue open to backlink in a future improved logging PR)

@gante gante self-assigned this Apr 12, 2022
@patrickvonplaten
Contributor

People who are familiar with generate() should know that max_length can and should be overwritten. I'll try to improve the docs here, but I don't think we should add a warning, as it would literally be shown every time someone calls generate without defining max_length.
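For readers landing here, the per-call override is a one-liner; model and inputs are assumed to be defined as in the snippets above, and 512 is an arbitrary example value:

```python
# Overrides the 20-token config default for this call only.
outputs = model.generate(**inputs, max_length=512)
```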

@huggingface huggingface deleted a comment from github-actions bot May 15, 2022
@github-actions

github-actions bot commented Jun 8, 2022

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.
