Biaffine parser does not work with tok2vec backend #10527
Thanks for the report! The first error occurs because CPUs don't support mixed precision. You could set both instances of … to … in the configuration. Admittedly, this is not very convenient; I'll look into disabling mixed precision altogether when running on CPU, which would probably be nicer than the current assertion. I still have to look into the second error in more detail, though it seems the trace was not pasted completely? Fair warning: the accuracy of the biaffine parser is not great yet with a convolutional tok2vec layer. I am also currently working on a set of changes that improve accuracy quite a bit when training a transformer model.
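For illustration, assuming the setting in question is the `mixed_precision` flag on the two PyTorch-backed parser components (the section and key names below are hypothetical, not taken from the issue's omitted config), the change would look roughly like:

```ini
; Hypothetical excerpt of a spaCy training config; the section names are
; assumptions. Disabling mixed precision lets the model run on CPU.
[components.experimental_arc_predicter.model]
mixed_precision = false

[components.experimental_arc_labeler.model]
mixed_precision = false
```

Mixed precision relies on float16 kernels that are generally only available on GPU, which is why the assertion fires when training on CPU.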
Closing, as the first issue was addressed with explosion/thinc#624. If you still run into issues with the second error, feel free to open a new issue with the full stack trace!
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
I applied the experimental biaffine parser based on this example, and it works well when I use the transformer-based architecture, but I got the following error when I tried to apply it with a tok2vec model on CPU:

And I got this error with GPU:
How to reproduce the behaviour
I used the following config:
Your Environment
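If several config variants need the same fix, the flip suggested in the reply above can be scripted. This is a minimal standard-library sketch assuming a flat INI-style file with `mixed_precision` keys; real spaCy configs are parsed with thinc's `Config` class, which this simplification does not fully replicate:

```python
import configparser
import io

def disable_mixed_precision(text: str) -> str:
    """Set every `mixed_precision` option in an INI-style config to false."""
    parser = configparser.ConfigParser()
    parser.optionxform = str  # preserve key casing
    parser.read_string(text)
    for section in parser.sections():
        if parser.has_option(section, "mixed_precision"):
            parser.set(section, "mixed_precision", "false")
    buf = io.StringIO()
    parser.write(buf)
    return buf.getvalue()

# Hypothetical section names, for demonstration only.
example = """\
[components.parser.model]
mixed_precision = true

[components.labeler.model]
mixed_precision = true
"""
patched = disable_mixed_precision(example)
print(patched)
```

The same idea can of course be applied by hand; the script is only useful when many configs have to be patched at once.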