
v1.1.0

Released by @epwalsh on 08 Sep 20:50

What's new since version 1.0.0

Added

  • Added regression tests for training configs that run on a scheduled workflow.
  • Added a test for the pretrained sentiment analysis model.
  • Added a way for questions from the Quora dataset to be concatenated, like the sequences in the SNLI dataset.
  • Added the BART model.
  • Added ModelCard and related classes. Added model cards for all the pretrained models.
  • Added a field registered_predictor_name to ModelCard.
  • Added a method load_predictor to allennlp_models.pretrained (see the usage sketch after this list).
  • Added support for a multi-layer decoder in the simple seq2seq model.
  • Added two models for fine-grained NER.
  • Added a category for multiple-choice models, including a few reference implementations.
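
As an illustrative aside (not part of the original notes), here is a minimal sketch of how the new load_predictor helper might be used. The model id passed below is a hypothetical placeholder chosen for illustration, not a guaranteed identifier from the 1.1.0 catalog.

```python
# Minimal sketch of the new helper in allennlp_models.pretrained.
# NOTE: "roberta-sst" is a hypothetical model id used for illustration only.
from allennlp_models.pretrained import load_predictor

predictor = load_predictor("roberta-sst")
output = predictor.predict_json({"sentence": "A surprisingly strong release."})
print(output)
```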

Changed

  • CopyNetDatasetReader no longer automatically adds START_TOKEN and END_TOKEN to the tokenized source. If you want these in the tokenized source, it is now up to the source tokenizer to add them (see the sketch after this list).
  • Implemented manual distributed sharding for the SNLI dataset reader.
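
A hedged sketch of one way to keep the old behavior after the CopyNetDatasetReader change: have the source tokenizer add the start/end symbols itself. The import paths and the use of a spaCy tokenizer with the default @start@/@end@ symbols are assumptions based on the 1.1 package layout, not prescribed by the release notes.

```python
# Sketch only: the reader no longer adds START/END tokens to the source,
# so the source tokenizer is configured to add them instead.
from allennlp.common.util import START_SYMBOL, END_SYMBOL
from allennlp.data.tokenizers import SpacyTokenizer
from allennlp_models.generation import CopyNetDatasetReader

reader = CopyNetDatasetReader(
    target_namespace="target_tokens",
    source_tokenizer=SpacyTokenizer(
        start_tokens=[START_SYMBOL],  # "@start@"
        end_tokens=[END_SYMBOL],      # "@end@"
    ),
)
```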

Fixed

  • Fixed evaluation of metrics when using a distributed setting.
  • Fixed a bug introduced in 1.0 where the SRL model did not reproduce the original result.
  • Fixed GraphParser.get_metrics so that it expects a dict from F1Measure.get_metric.
  • CopyNet and SimpleSeq2Seq models now work with AMP (see the trainer sketch after this list).
  • Made the SST reader a little more strict in the kinds of input it accepts.
  • Updated the RoBERTa SST config to make proper use of the CLS token.
  • Updated the RoBERTa SNLI and MNLI pretrained models for the latest transformers version.
  • Updated the BERT SRL model to be compatible with the new huggingface tokenizers.
  • CopyNetSeq2Seq model now works with pretrained transformers.
  • Fixed a bug with NextTokenLM that caused simple gradient interpreters to fail.
  • Fixed a bug in the training configs of qanet and bimpm that used an old version of the regularizer and initializer.
  • Fixed the fine-grained NER transformer model, which did not survive an upgrade of the transformers library.
  • Fixed many minor formatting issues in docstrings. Docs are now published at https://docs.allennlp.org/models/.
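
As a rough illustration (not from the release notes) of how AMP is switched on for models such as CopyNet or SimpleSeq2Seq: in AllenNLP 1.1 mixed precision is driven by the trainer, so the experiment's trainer section enables it. The fragment below is a sketch of such a section expressed as a Python dict rather than Jsonnet, and it assumes a CUDA device is available; the surrounding keys are illustrative.

```python
# Sketch of the trainer portion of a training config (shown as a Python dict).
# "use_amp" asks GradientDescentTrainer to run forward/backward under
# torch.cuda.amp; a GPU (cuda_device >= 0) is assumed.
trainer = {
    "type": "gradient_descent",
    "cuda_device": 0,
    "use_amp": True,
    "optimizer": {"type": "adam", "lr": 1e-3},
    "num_epochs": 10,
}
```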

Commits

5ffc207 Prepare for release v1.1.0
36ad6b3 Bump conllu from 4.0 to 4.1 (#126)
2c2e4e4 Fixes the BERT SRL model (#124)
44a3ca5 Update BartEncoder docstring for embeddings guidance (#127)
2e54449 Combinable quora sequences (#119)
ce8ef9b formatting changes for new version of black (#125)
3742b4a Distributed metrics (#123)
31b00e7 Bump markdown-include from 0.5.1 to 0.6.0 (#121)
c4ef4c8 Prepare for release v1.1.0rc4
27c0ca5 always run configs CI job (#118)
7c3be82 upgrade to actions cache v2 (#116)
dbd46b4 Add test for sentiment analysis (#117)
a91f009 run config tests in subprocesses (#115)
c211baf Update simple_seq2seq.py (#90)
267b747 Bump conllu from 3.1.1 to 4.0 (#114)
87570ec validate pretrained configs in CI (#112)
4fa5fc1 Fix RoBERTa SST (#110)
0491690 Only pin mkdocs-material to minor version, ignore specific patch version (#113)
8d27e7b prepare for release v1.1.0rc3
959a5eb Update graph parser metrics (#109)
45f85ce Bump mkdocs-material from 5.5.3 to 5.5.5 (#111)
e69f4c4 fix docs CI
4b96dfa tick version for nightly releases
e5f5c62 Update some models for AMP training (#104)
4f0bca1 Bump mkdocs-material from 5.5.2 to 5.5.3 (#108)
cbd2b57 Adds the pretrained BART model (#107)
5d9098f Bump conllu from 3.0 to 3.1.1 (#105)
92f2a8f Bump mkdocs-material from 5.5.0 to 5.5.2 (#106)
a901f9f Prepare for release v1.1.0rc2
04561a8 updates for torch 1.6 (#103)
e7b8247 Update RoBERTa SNLI/MNLI models (#102)
008828b Adding ModelCard (#98)
eaa331e Roberta SST (#99)
75c8869 Bump mkdocs-material from 5.4.0 to 5.5.0 (#100)
a56a103 Implemented BART (#35)
a730fed Cuda devices (#97)
4d0e090 Make sure we ship the SRL eval Perl script (#96)
4f2e316 tick version for nightly releases
dd60f94 Prepare for release v1.1.0rc1
da83a4e build and publish models docs (#91)
4b2178b implement manual distributed sharding for SNLI reader (#89)
1a2a8f4 Updates the SRL model (#93)
8a93743 Updated the fine-grained NER transformer model (#92)
b913333 Multiple Choice (#75)
09395d2 updates for new transformers release (#88)
11c6814 fix the bug of bimpm and update CHANGELOG (#87)
a735ddd Fix the regularizer of QANet model (#86)
0ce14da fixes for next_token_lm (#85)
37136f8 skip docker build on nightly release
82aa9ac Fine grained NER (#84)
4b5b939 fix test fixture
947beb0 remove unused param in copynet reader
9ec65df fix nightly Docker workflow
3019a4e fix workflow skip conditions
cc60ab9 use small dummy transformer for copynet test (#83)
ac9f214 fix nightly workflow
596e6a7 Bump mypy from 0.781 to 0.782 (#82)
d210c2f add nightly releases (#81)
935a2a8 dont add START and END tokens to source in CopyNet (#79)
d6798ce update skip conditions on const-parser-test (#80)
2754f88 Bump mypy from 0.780 to 0.781 (#78)
d3588ad Make CopyNet work with pretrained transformer models (#76)