Different ways of loading data model #684
Thanks, this was a bad bug! When I switched default support to the GloVe vectors in 1.0, I added a hack that caused this. You can check which version of the vectors is loaded:

```python
>>> import spacy
>>> nlp = spacy.load('en')
>>> sum(w.has_vector for w in nlp.vocab)
645315
```

If you see ~300,000 instead, your model has the older vectors, trained on Wikipedia, loaded.
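The check above can be sketched without the `en` model installed. This is a minimal illustration, not spaCy API: `Lexeme` and `vector_coverage` are hypothetical names, and a stub vocab stands in for `nlp.vocab`. With spaCy itself you would simply run `sum(w.has_vector for w in nlp.vocab)`.

```python
# Sketch of the vector-coverage check; a stub vocab replaces nlp.vocab
# so this runs without the 'en' model. Names here are illustrative.
from collections import namedtuple

# Stub with the one attribute the check relies on.
Lexeme = namedtuple("Lexeme", ["text", "has_vector"])

def vector_coverage(vocab):
    """Return (entries_with_vectors, total_entries) for an iterable vocab."""
    lexemes = list(vocab)
    return sum(w.has_vector for w in lexemes), len(lexemes)

# The full GloVe model should report ~645,315 entries with vectors;
# the older Wikipedia-trained vectors report roughly 300,000.
stub_vocab = [Lexeme("dog", True), Lexeme("cat", True), Lexeme("xyzzy", False)]
print(vector_coverage(stub_vocab))  # (2, 3)
```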
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
I get different word vectors depending on how I load the data model. For example:
My Environment