Retrain transformer-based models in allennlp_models.pretrained #4457
I started doing BERT SRL too because I already had the environment for that spun up.
@AkshitaB, when you touch the …
I see BERT SRL checked here, but there's an issue saying the performance actually doesn't match, with a solution that seems to fix it: #4392 (comment). Have you checked performance against the originally reported numbers? If it's still off, it looks like a simple config-file fix to get performance back up. I was just looking at the SRL and BERT SRL models, though, and I think we can probably just combine them at this point. I don't think we gain much by using the …
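For context, one quick way to sanity-check the published model against the reported numbers is to load it through `allennlp_models.pretrained` and run a prediction. A minimal sketch, assuming BERT SRL is published under the id `structured-prediction-srl-bert` (verify the id for your release):

```python
# A minimal sketch, assuming the BERT SRL model is published under the
# id "structured-prediction-srl-bert"; verify the id for your release.
from allennlp_models.pretrained import load_predictor

predictor = load_predictor("structured-prediction-srl-bert")
result = predictor.predict(
    sentence="The keys, which were needed to access the building, were locked in the car."
)
# Each entry in "verbs" pairs a predicate with its BIO-tagged arguments.
for verb in result["verbs"]:
    print(verb["description"])
```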
The …
"transformer-native" means "I have to use transformers, and I can't even try using anything else." It kind of goes against the whole point of the abstractions that we have. |
While that's true, it's hard to argue in print for a solution that is more complicated and different from the standard one if it doesn't also improve results. In principle the …
We're tracking the remaining issue in #4521.
Updates from the new transformers/tokenizers release have broken some of these.
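One way to catch this class of breakage early is a runtime version guard. A sketch under the assumption that the failures are version drift; the pinned versions below are purely illustrative, not the ones the archived configs were actually trained with:

```python
# A hedged guard: warn if the installed transformers/tokenizers versions
# drift from the ones the retrained configs were verified against.
# The pinned versions here are an assumption, not the project's pins.
import transformers
import tokenizers

EXPECTED = {"transformers": "3.0.2", "tokenizers": "0.8.1"}  # illustrative
for name, module in [("transformers", transformers), ("tokenizers", tokenizers)]:
    if module.__version__ != EXPECTED[name]:
        print(f"warning: {name} {module.__version__} != pinned {EXPECTED[name]}")
```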