
Error in deserializing spaCy doc from disk #3468

Closed
sahelsoft opened this issue Mar 22, 2019 · 4 comments · Fixed by #3471
Labels
bug (Bugs and behaviour differing from documentation) · feat / doc (Feature: Doc, Span and Token objects) · feat / serialize (Feature: Serialization, saving and loading)

Comments

sahelsoft commented Mar 22, 2019

How to deserialize spaCy results from disk

I need to run an algorithm on a lot of text files. To pre-process them, I use spaCy, which has pre-trained models for different languages. Since the pre-processed results are used in several parts of the algorithm, it is better to save them to disk once and load them many times. However, the spaCy deserialization method raises an error. I wrote a simple piece of code to show it:

import spacy
from spacy.tokens import Doc
from spacy.vocab import Vocab

# Read the raw text of one of the input files (the path is just an example)
text_file_content = open("input.txt", encoding="utf8").read()

de_nlp = spacy.load("de_core_news_sm", disable=['ner', 'parser'])
de_nlp.add_pipe(de_nlp.create_pipe('sentencizer'))
doc = de_nlp(text_file_content)

for ix, sent in enumerate(doc.sents, 1):
    print("--Sentence number {}: {}".format(ix, sent))
    lemma = [w.lemma_ for w in sent]
    print(f"Lemma ==> {lemma}")

# Serialization and deserialization
doc.to_disk("/tmp/test_result.bin")
new_doc = Doc(Vocab()).from_disk("/tmp/test_result.bin")

for ix, sent in enumerate(new_doc.sents, 1):
    print("--Sentence number {}: {}".format(ix, sent))
    lemma = [w.lemma_ for w in sent]
    print(f"Lemma ==> {lemma}")

However, the above example code raises the following error:

Traceback (most recent call last):
  File "/tmp/test_result.bin", line 14, in <module>
    for ix, sent in enumerate(new_doc.sents, 1):
  File "doc.pyx", line 535, in __get__
ValueError: [E030] Sentence boundaries unset. You can add the 'sentencizer' component to the pipeline with: nlp.add_pipe(nlp.create_pipe('sentencizer')) Alternatively, add the dependency parser, or set sentence boundaries by setting doc[i].is_sent_start.

I tried to change the first two lines to de_nlp = spacy.load("de_core_news_sm"), but that still raises other errors.
I really need your help. Any information about this topic is appreciated.

My Environment

  • Operating System: Windows 10
  • Python Version Used: 3.6
  • spaCy Version Used: 2.0.18
  • Environment Information: PyCharm 2017
ines added the bug, feat / serialize and feat / doc labels Mar 23, 2019
ines added a commit that referenced this issue Mar 23, 2019
Check for Token.is_sent_start first (which is serialized/deserialized correctly)
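The commit message points at the user-side behaviour: Token.is_sent_start round-trips through to_disk()/from_disk() correctly, so sentence starts can still be recovered from the per-token flag even while Doc.sents raises E030. A minimal illustration of that idea (a sketch, not code from the thread, using the new_doc from the example above):

# Token.is_sent_start is serialized correctly, so the sentence
# starts are still present even when Doc.sents raises E030.
sentence_starts = [token.i for token in new_doc if token.is_sent_start]
print(sentence_starts)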
ines (Member) commented Mar 23, 2019

Thanks for the report! I think what's going on here is that the Doc's is_sentenced property (which indicates whether sentence boundaries have been applied) comes back incorrectly when you load the Doc back in, so spaCy thinks the Doc doesn't have sentence boundaries.
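You can see this on the example above by checking the flag right after loading (a quick sketch based on the reproduction code):

# Doc.is_sentenced gates access to Doc.sents; after from_disk()
# it comes back False even though the sentencizer ran originally.
new_doc = Doc(Vocab()).from_disk("/tmp/test_result.bin")
print(new_doc.is_sentenced)  # False here, which is what triggers E030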

As a quick workaround, you can do the following:

new_doc.is_parsed = True

It's a hack, but it should work. Basically, if the Doc is parsed (or spaCy thinks it is), Doc.is_sentenced will return True as well, so you can trick spaCy a little here.
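Applied to the original example, the whole workaround would look something like this (a sketch, assuming the reproduction code above):

# Deserialize, then fake the parse flag so that Doc.is_sentenced
# (and therefore Doc.sents) stops raising E030.
new_doc = Doc(Vocab()).from_disk("/tmp/test_result.bin")
new_doc.is_parsed = True

for ix, sent in enumerate(new_doc.sents, 1):
    print("--Sentence number {}: {}".format(ix, sent))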

sahelsoft (Author) commented

Thank you, but it doesn't work. It raises another error while printing the content of new_doc.


andrebm commented Mar 26, 2019

Same problem here! It takes 7 minutes to load... strange indeed!


lock bot commented Apr 25, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked as resolved and limited conversation to collaborators Apr 25, 2019