Releases: explosion/thinc

v7.4.6: Updates for Python 3.10 and 3.11

18 Oct 13:20
ce9b324

✨ New features and improvements

  • Updates for Python 3.10 and 3.11 (#791):
    • Update vendored wrapt to v1.14.1.
    • Update dev requirements.
    • Add wheels for Python 3.10 and 3.11.

👥 Contributors

@adrianeboyd, @honnibal, @ines

v8.1.4: Type fixes

12 Oct 16:12
aebb8e4

🔴 Bug fixes

  • Fix issue #785: Revert change to return type for Ops.alloc from #779.

👥 Contributors

@adrianeboyd, @honnibal, @ines, @svlandeg

v8.1.3: Updates for pydantic and mypy

07 Oct 11:35
36b691f

✨ New features and improvements

  • Extend pydantic support to v1.10.x (#778).
  • Support mypy 0.98x, drop mypy support for Python 3.6 (#776).

🔴 Bug fixes

  • Fix issue #775: Fix fix_random_seed entry point in setup.cfg.

👥 Contributors

@adrianeboyd, @honnibal, @ines, @pawamoy, @svlandeg

v8.1.2: Update blis support and CuPy extras

27 Sep 09:44
d9c40cf

✨ New features and improvements

  • Update CuPy extras to add cuda116, cuda117, cuda11x and cuda-autodetect, which uses the new cupy-wheel package (#740).
  • Add a pytest-randomly entry point for fix_random_seed (#748); see the sketch after this list.
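
As an illustration of the new seeding hook, the sketch below calls fix_random_seed by hand; with the entry point, pytest-randomly invokes the same function before each test automatically. The seed and array size here are arbitrary.

    import numpy
    from thinc.api import fix_random_seed

    # fix_random_seed seeds Python's, NumPy's and, when installed, CuPy's and
    # PyTorch's RNGs in one call.
    fix_random_seed(0)
    first = numpy.random.rand(4)
    fix_random_seed(0)
    second = numpy.random.rand(4)
    assert (first == second).all()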

🔴 Bug fixes

  • Fix issue #772: Restrict supported blis versions to ~=0.7.8 to avoid bugs in BLIS 0.9.0.

👥 Contributors

@adrianeboyd, @honnibal, @ines, @rmitsch, @svlandeg, @willfrey

v8.1.1: Use confection, new layers and bugfixes

09 Sep 14:32
9836e9e

🔴 Bug fixes

  • Fix issue #720: Improve type inference by replacing FloatsType in Ops by a TypeVar.
  • Fix issue #739: Fix typing of the Ops.asarrayDf methods (the asarray1f, asarray2f, … family).
  • Fix issue #757: Improve compatibility with supported Tensorflow versions.

👥 Contributors

@adrianeboyd, @cclauss, @danieldk, @honnibal, @ines, @kadarakos, @polm, @rmitsch, @shadeMe

v8.1.0: Updated types and many Ops improvements

08 Jul 13:54
17846c4

✨ New features and improvements

  • Added support for mypy 0.950 and pydantic v1.9.0, added bound types throughout layers and ops (#599).
  • Made all NumpyOps CPU kernels generic (#627).
  • Made all custom CUDA kernels generic (#603).
  • Added bounds checks for NumpyOps (#618).
  • Fixed out-of-bounds writes in NumpyOps and CupyOps (#664).
  • Reduced unnecessary zero-init allocations (#632).
  • Fixed reductions when applied to zero-length sequences (#637).
  • Added NumpyOps.cblas to get a table of C BLAS functions (#643, #700).
  • Improved type-casting in NumpyOps.asarray (#656).
  • Simplified CupyOps.asarray (#661).
  • Fixed Model.copy() for layers used more than once (#659).
  • Fixed potential race in Shim (#677).
  • Converted numpy arrays using dlpack in xp2tensorflow and xp2torch when possible (#686).
  • Improved speed of HashEmbed by avoiding large temporary arrays (#696).
  • Added Ops.reduce_last and Ops.reduce_first (#710); see the sketch after this list.
  • Numerous test suite improvements.
  • Experimental: Add support for Metal Performance Shaders with PyTorch nightlies (#685).
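
As a usage sketch for the new reductions (assuming reduce_first is exported from thinc.api like the other reduction layers), the snippet below takes the first row of each sequence in a small Ragged batch; shapes and values are arbitrary.

    import numpy
    from thinc.api import NumpyOps, Ragged, reduce_first

    ops = NumpyOps()
    # Five 4-dimensional rows, split into two sequences of lengths 2 and 3.
    data = ops.asarray2f(numpy.arange(20, dtype="f").reshape(5, 4))
    lengths = ops.asarray1i([2, 3])
    X = Ragged(data, lengths)

    model = reduce_first()
    model.initialize()
    Y = model.predict(X)  # shape (2, 4): the first row of each sequence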

🔴 Bug fixes

  • Fix issue #707: Fix label smoothing threshold for to_categorical (a usage sketch follows).
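
For reference, a minimal call that exercises the label smoothing path (labels and smoothing value here are arbitrary):

    import numpy
    from thinc.api import to_categorical

    labels = numpy.asarray([0, 1, 2, 1], dtype="i")
    # One-hot targets with some probability mass moved off the target class;
    # the threshold fixed in #707 bounds how large label_smoothing may be.
    Y = to_categorical(labels, n_classes=3, label_smoothing=0.1)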

⚠️ Backwards incompatibilities

  • In most cases the typing updates allow many casts and ignores to be removed, but types may also need minor modifications following the updates for mypy and pydantic.

  • get_array_module now returns None for non-numpy/cupy array input rather than returning numpy by default.

  • The prefer_gpu and require_gpu functions no longer set the default PyTorch torch.Tensor type to torch.cuda.FloatTensor. This means that wrapped PyTorch models cannot assume that Tensors are allocated on a CUDA GPU after calling these functions. For example:

    import torch

    # `max_seq_len` stands in for the model's maximum sequence length.
    # Before Thinc v8.1.0, this Tensor would be allocated on the GPU after
    # {prefer,require}_gpu. Now it will be allocated as a CPU tensor by default.
    token_mask = torch.arange(max_seq_len)

    # To ensure correct allocation, specify the device where the Tensor should
    # be allocated. `input` refers to the input of the model.
    token_mask = torch.arange(max_seq_len, device=input.device)


    This change brings Thinc's behavior in line with how device memory allocation is normally handled in PyTorch.

👥 Contributors

@adrianeboyd, @danieldk, @honnibal, @ines, @kadarakos, @koaning, @richardpaulhudson, @shadeMe, @svlandeg

v8.0.17: Extended requirements, test suite fixes

02 Jun 14:05
87865be

✨ New features and improvements

  • Extend support for typing_extensions up to v4.1.x (for Python 3.7 and earlier).
  • Various fixes in the test suite.

👥 Contributors

@adrianeboyd, @danieldk, @honnibal, @ines, @shadeMe

v8.0.16: Bug fixes

19 May 11:42

🔴 Bug fixes

  • Fix issue #624: Support CPU inference for models trained with gradient scaling.
  • Fix issue #633: Fix invalid indexing in Beam when no states have valid transitions.
  • Fix issue #639: Improve PyTorch Tensor handling in CupyOps.asarray.
  • Fix issue #649: Clamp inputs in Ops.sigmoid to prevent overflow (see the sketch after this list).
  • Fix issue #651: Fix type safety issue with model ID assignment.
  • Fix issue #653: Correctly handle Tensorflow GPU tensors in tests.
  • Fix issue #660: Make is_torch_array work without PyTorch installed.
  • Fix issue #664: Fix out-of-bounds writes in CupyOps.adam and NumpyOps.adam.
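
The sigmoid fix follows the standard overflow guard. A minimal sketch of the idea in plain NumPy, with an illustrative clamp bound rather than Thinc's exact constant:

    import numpy

    def stable_sigmoid(X: numpy.ndarray, limit: float = 20.0) -> numpy.ndarray:
        # exp(-x) overflows float32 for very negative x, yielding inf and then
        # NaN in downstream gradients; clamping first keeps everything finite
        # while leaving results unchanged for |x| <= limit.
        X = numpy.clip(X, -limit, limit)
        return 1.0 / (1.0 + numpy.exp(-X))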

⚠️ Backwards incompatibilities

  • The init implementations for layers no longer return Model.

👥 Contributors

@adrianeboyd, @danieldk, @honnibal, @ines, @kadarakos, @koaning, @notplus, @richardpaulhudson, @shadeMe

v8.0.15: Fix compatibility with older PyTorch versions

15 Mar 13:36

🔴 Bug fixes

  • Fix issue #610: Improve compatibility with PyTorch versions before v1.9.0.

👥 Contributors

@adrianeboyd, @danieldk

v8.0.14: New activation functions, bug fixes and more

14 Mar 08:16

🔴 Bug fixes

  • Fix issue #552: Do not backpropagate Inf/NaN out of PyTorch layers when using mixed-precision training.
  • Fix issue #578: Correctly cast the threshold argument of CupyOps.mish and correct an equation in Ops.backprop_mish.
  • Fix issue #587: Correct invariant checks in CategoricalCrossentropy.get_grad.
  • Fix issue #592: Update murmurhash requirement.
  • Fix issue #594: Do not sort positional arguments in Config.

⚠️ Backwards incompatibilities

  • The out keyword argument of Ops.mish and Ops.backprop_mish is replaced by inplace for consistency with other activations (see the sketch below).
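
Assuming Ops.mish now follows the same call pattern as the other activations, usage after this change looks roughly like:

    from thinc.api import NumpyOps

    ops = NumpyOps()
    X = ops.alloc2f(2, 3) + 1.0
    Y = ops.mish(X)            # allocates and returns a new array
    ops.mish(X, inplace=True)  # overwrites X instead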

👥 Contributors

@adrianeboyd, @andrewsi-z, @danieldk, @honnibal, @ines, @Jette16, @kadarakos, @kianmeng, @polm, @svlandeg, @thatbudakguy