Releases: explosion/thinc

v6.5.0: Supervised similarity, fancier embedding and improvements to linear model

11 Mar 18:36

✨ Major features and improvements

  • Improve GPU support.
  • Add classes for siamese neural network architectures for supervised similarity.
  • Add HashEmbed class, an embedding layer that uses the hashing trick to map a larger vocabulary into a smaller table (a short sketch of the idea follows this list).
  • Add support for distinct feature columns in the Embed class.
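
The hashing trick can be sketched in a few lines of numpy. The class below is an illustrative assumption, not thinc's HashEmbed implementation: each ID is hashed with several seeds into a fixed-size table, and the matching rows are summed, so two IDs rarely collide on every seed.

import numpy as np

# Illustrative sketch of the hashing trick, not thinc's HashEmbed code.
class HashingTrickEmbed:
    def __init__(self, n_rows, width, n_seeds=4):
        rng = np.random.default_rng(0)
        self.table = rng.normal(scale=0.1, size=(n_rows, width))
        self.seeds = np.arange(1, n_seeds + 1)

    def __call__(self, ids):
        # Hash each ID once per seed; the rows for all seeds are summed.
        rows = (ids[:, None] * 2654435761 + self.seeds) % len(self.table)
        return self.table[rows].sum(axis=1)

embed = HashingTrickEmbed(n_rows=1000, width=64)
vectors = embed(np.array([3, 71, 500000003]))  # IDs can exceed the table size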

🔴 Bug fixes

  • Fix model averaging for linear model.
  • Fix resume_training() method for linear model.
  • Fix L1 penalty for linear model.

📖 Documentation and examples

v6.3.0: Efficiency improvements, argument checking and error messaging

25 Jan 09:34

✨ Major features and improvements

  • NEW: Add thinc.check module to specify argument constraints for functions and methods.
  • NEW: Add thinc.exceptions module with custom exception messaging.
  • Add LSUV initialisation.
  • Add averaged parameters, for reduced hyper-parameter sensitivity (see the sketch after this list).
  • Improve efficiency of maxout, window extraction and dropout.
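
Parameter averaging keeps a running average of the weights alongside the live weights and uses the average for prediction. The sketch below shows the general idea; the class name and the plain incremental mean are assumptions for illustration, not thinc's optimiser code.

import numpy as np

# Sketch of averaged parameters: update the live weights, predict with the
# running average. Names and the update rule are illustrative, not thinc's code.
class AveragedParam:
    def __init__(self, shape):
        self.weights = np.zeros(shape)   # updated by gradients during training
        self.average = np.zeros(shape)   # used at prediction time
        self.steps = 0

    def update(self, gradient, lr=0.01):
        self.weights -= lr * gradient
        self.steps += 1
        # Incremental mean of every weight value seen so far.
        self.average += (self.weights - self.average) / self.steps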

📋 Tests

  • Reorganise and improve tests.
  • Reach 100% coverage over the entire package.

v6.2.0: Improve API and introduce overloaded operators

15 Jan 02:32

✨ Major features and improvements

  • NEW: Model now has define_operators() classmethod to overload operators for a given block.
  • Add chain(), clone() and concatenate() functions for use with overloaded operators.
  • Add describe module which provides class decorators for defining new layers.
  • Allow layers to calculate input and output sizes based on training data.

Together, these features allow very concise model definitions:

with Model.define_operators({'**': clone, '>>': chain}):
    model = BatchNorm(ReLu(width)) ** depth >> Softmax()
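
The mechanism behind define_operators() can be pictured as a class-level operator table that the dunder methods consult inside a context manager. The stand-in class below is a minimal sketch of that idea under those assumptions, not thinc's actual code.

import contextlib

# Minimal stand-in for the per-block operator overloading idea; not thinc's code.
class Block:
    _operators = {}

    @classmethod
    @contextlib.contextmanager
    def define_operators(cls, operators):
        old, cls._operators = cls._operators, operators
        try:
            yield
        finally:
            cls._operators = old

    def __init__(self, name):
        self.name = name

    def __rshift__(self, other):   # a >> b
        return self._operators['>>'](self, other)

    def __pow__(self, n):          # a ** n
        return self._operators['**'](self, n)

def chain(a, b):
    return Block('(%s >> %s)' % (a.name, b.name))

def clone(a, n):
    out = a
    for _ in range(n - 1):
        out = chain(out, a)
    return out

with Block.define_operators({'>>': chain, '**': clone}):
    model = Block('relu') ** 3 >> Block('softmax')
print(model.name)   # (((relu >> relu) >> relu) >> softmax)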

⚠️ Backwards incompatibilities

  • Major revisions to previously undocumented neural network APIs (see above).

📋 Tests

  • Reorganise and improve tests for neural network functions.
  • Reach 100% coverage over the current neural network classes.

v6.1.3: More neural network functions and training continuation

09 Jan 23:46

✨ Major features and improvements

  • NEW: Add several useful higher-order functions, including @layerize and @metalayerize decorators to turn functions into weightless layers.
  • NEW: Add batch normalization layer.
  • NEW: Add residual layer using the pre-activation approach (see the sketch after this list).
  • Simplify model setup and initialization.
  • Add ELU layer.
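
The pre-activation residual ordering normalises and applies the nonlinearity before the weights, then adds the block's input back onto its output. The numpy sketch below shows that ordering; the simplified batch normalisation (no learned scale/shift or running statistics) and the function names are assumptions for illustration, not thinc's layer code.

import numpy as np

def batch_norm(X, eps=1e-5):
    # Simplified batch normalisation: no learned scale/shift, no running stats.
    return (X - X.mean(axis=0)) / np.sqrt(X.var(axis=0) + eps)

def preactivation_residual(X, W, b):
    # Pre-activation ordering: normalise, apply the nonlinearity, then the
    # affine weights, and finally add the skip connection back onto the input.
    return X + (np.maximum(batch_norm(X), 0) @ W + b)

X = np.random.randn(8, 64)
W = np.random.randn(64, 64) * 0.1
b = np.zeros(64)
out = preactivation_residual(X, W, b)   # same shape as the input: (8, 64)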

🔴 Bug fixes

  • The AveragedPerceptron class can now continue training after a model is loaded. Previously, each feature's weights were zeroed as soon as that feature was updated. This affected spaCy users, especially those adding new classes to the named entity recognizer.

📖 Documentation and examples

v6.0.0: Add thinc.neural for NLP-oriented deep learning

31 Dec 00:14

✨ Major features and improvements

  • NEW: Add thinc.neural to develop neural networks for spaCy.
  • Introduce support for Affine, Maxout, ReLu and Softmax vector-to-vector layers.
  • Introduce support for efficient static word embedding layer with projection matrix and per-word-type memoisation.
  • Introduce support for efficient word vector convolution layer, which also supports per-word-type memoisation.
  • Introduce support for MeanPooling, MaxPooling and MinPooling. Add MultiPooling layer for concatenative pooling (see the sketch after this list).
  • Introduce support for annealed dropout training.
  • Introduce support for classical momentum, Adam and Eve optimisers.
  • Introduce support for averaged parameters for each optimiser.
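
Concatenative pooling simply stacks several pooled views of the same sequence. The sketch below concatenates mean, max and min pooling over the word axis; the function name is an illustrative assumption, not thinc's MultiPooling API.

import numpy as np

def multi_pool(sequence):
    # sequence: (n_words, width) -> (3 * width,), concatenating the
    # mean-, max- and min-pooled vectors over the word axis.
    return np.concatenate([
        sequence.mean(axis=0),
        sequence.max(axis=0),
        sequence.min(axis=0),
    ])

doc = np.random.randn(12, 128)   # 12 word vectors of width 128
pooled = multi_pool(doc)         # shape: (384,)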

⚠️ Backwards incompatibilities

The Example class now holds a pointer to its ExampleC struct, where previously it held the struct value. This introduces a small backwards incompatibility in spaCy.