Training NER fails #4575 (Closed)

ghost opened this issue Nov 2, 2019 · 2 comments
Labels: feat / cli (Feature: Command-line interface), training (Training and updating models)

Comments

ghost commented Nov 2, 2019

How to reproduce the behaviour

I am trying to train spaCy on the CoNLL 2003 NER data. I used the CLI converter to convert from the CoNLL format to spaCy's JSON training format:

python -m spacy convert eng.train . -c ner

I ran this for all 3 files (train, dev and test) and renamed the output files to .json.
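For all three splits that looks like this (eng.testa and eng.testb are the standard CoNLL 2003 file names; adjust if yours differ):

python -m spacy convert eng.train . -c ner
python -m spacy convert eng.testa . -c ner
python -m spacy convert eng.testb . -c ner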

To train, I am using this command:
python -m spacy train en scner/ engtrain.json engtesta.json -p ner -VV -D

Itn  NER Loss   NER P   NER R   NER F   Token %  CPU WPS
---  ---------  ------  ------  ------  -------  -------
✔ Saved model to output directory
scner3/model-final

I am getting this error:

Traceback (most recent call last):
  File "/home/su/.env/lib/python3.5/site-packages/spacy/cli/train.py", line 397, in train
    scorer = nlp_loaded.evaluate(dev_docs, verbose=verbose)
  File "/home/su/.env/lib/python3.5/site-packages/spacy/language.py", line 672, in evaluate
    docs, golds = zip(*docs_golds)
ValueError: not enough values to unpack (expected 2, got 0)

Running the debug-data command
python3 -m spacy debug-data en engtrain.json engtesta.json

gives me this:

=========================== Data format validation ===========================
✔ Corpus is loadable

=============================== Training stats ===============================
Training pipeline: tagger, parser, ner
Starting with blank model 'en'
946 training docs
0 evaluation docs
✔ No overlap between training and evaluation data
⚠ Low number of examples to train from a blank model (946)

============================== Vocab & Vectors ==============================
ℹ 203621 total words in the data (23623 unique)
ℹ No word vectors present in the model

========================== Named Entity Recognition ==========================
ℹ 4 new labels, 0 existing labels
0 missing values (tokens with '-' label)
✔ Good amount of examples for all labels
✔ Examples without occurrences available for all labels
✔ No entities consisting of or starting/ending with whitespace

=========================== Part-of-speech Tagging ===========================
ℹ 45 labels in data (57 labels in tag map)
✘ Label ')' not found in tag map for language 'en'
✘ Label '(' not found in tag map for language 'en'
✘ Label 'NN|SYM' not found in tag map for language 'en'
✘ Label '"' not found in tag map for language 'en'

============================= Dependency Parsing =============================
ℹ Found 203621 sentences with an average length of 1.0 words.
ℹ 1 label in train data
ℹ 1 label in projectivized train data

================================== Summary ==================================
✔ 5 checks passed
⚠ 2 warnings
✘ 4 errors

Your Environment

  • Models: en_core_web_lg
  • Python version: 3.5.2
  • Platform: Linux-4.15.0-58-generic-x86_64-with-Ubuntu-16.04-xenial
  • spaCy version: 2.2.2

I am trying to train the model only on NER, not on POS tagging.

adrianeboyd added the feat / cli and training labels Nov 2, 2019
adrianeboyd (Contributor) commented:
If the conversion with spacy convert works correctly, you shouldn't have to rename the files to .json, so maybe something isn't configured correctly there. JSON output should be the default, but you can also try adding -t json explicitly.
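For example, rerunning one of the conversions with the output format made explicit:

python -m spacy convert eng.train . -c ner -t json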

The NER summary from debug-data looks okay, but this is your immediate problem:

0 evaluation docs
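With no evaluation docs, evaluate() ends up unpacking an empty sequence, which is exactly the ValueError in your traceback; a minimal sketch of the same failure:

    # zip(*[]) is zip() with no arguments, which yields no items,
    # so there is nothing to unpack into (docs, golds)
    docs_golds = []
    docs, golds = zip(*docs_golds)
    # ValueError: not enough values to unpack (expected 2, got 0)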

There should be a clear error in debug-data for this case, since an empty dev set is what causes the evaluation failure above.
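One way to confirm how many docs actually load from each file is to read them with GoldCorpus directly; a minimal sketch for spaCy 2.2 (paths taken from your commands above):

    import spacy
    from spacy.gold import GoldCorpus

    nlp = spacy.blank("en")
    corpus = GoldCorpus("engtrain.json", "engtesta.json")

    # debug-data reported 946 training docs and 0 evaluation docs;
    # an empty dev set here is what makes evaluate() fail
    print("train docs:", len(list(corpus.train_docs(nlp))))
    print("dev docs:", len(list(corpus.dev_docs(nlp))))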

ines closed this as completed Nov 2, 2019
adrianeboyd pushed a commit to adrianeboyd/spaCy that referenced this issue Nov 13, 2019
ines pushed a commit that referenced this issue Nov 13, 2019
* Add error in debug-data if no dev docs are available (see #4575)

* Update debug-data for GoldCorpus / Example

* Ignore None label in misaligned NER data
honnibal pushed a commit that referenced this issue Nov 13, 2019

* Add error in debug-data if no dev docs are available (see #4575)
lock bot commented Dec 2, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked as resolved and limited conversation to collaborators Dec 2, 2019