Releases: AminRezaei0x443/memory-efficient-attention

0.1.3

27 Mar 13:08
update: bump version

0.1.2

07 Mar 18:48
dccd556

What's Changed

This update fixes torch device handling issues in the code, so GPU tensors and tensors on other devices can now be used safely.
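A minimal sketch of what this enables, assuming the package exposes `efficient_dot_product_attention_pt` with inputs shaped `(batch, seq_len, heads, head_dim)` as in the project README; the function name, layout, and default arguments here are assumptions, not confirmed details of this release:

```python
# Hypothetical sketch: exercising the 0.1.2 device-handling fix with CUDA tensors.
# The function name and tensor layout follow the project README and are assumptions.
import torch
from memory_efficient_attention import efficient_dot_product_attention_pt

device = "cuda" if torch.cuda.is_available() else "cpu"

# Random (batch, seq_len, heads, head_dim) inputs created directly on the target device.
query = torch.rand(1, 128, 16, 64, device=device)
key = torch.rand(1, 128, 16, 64, device=device)
value = torch.rand(1, 128, 16, 64, device=device)

# With the device-handling fix, intermediate tensors are kept on the same device as
# the inputs, so the call works for GPU tensors as well as CPU tensors.
out = efficient_dot_product_attention_pt(query, key, value)
print(out.shape, out.device)
```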

New Contributors

  • @yhgon made their first contribution in #5

Full Changelog: 0.1.1.0...0.1.2

0.1.1

03 Feb 16:08
f0add8e

Added mask and bias calculation functions for custom, memory-efficient chunked computation, so masks and biases can now be computed with sub-linear memory as well.
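The idea, sketched below, is to pass a callable that builds each mask (or bias) chunk on the fly, so the full `(seq_len, seq_len)` tensor is never materialized. The keyword `mask_calc_fn` and its callback signature are assumptions inferred from this note, not the confirmed API:

```python
# Hypothetical sketch of chunk-wise mask construction. The keyword argument
# mask_calc_fn and its (query_offset, key_offset, q_chunk, k_chunk) signature
# are assumptions, not the confirmed memory_efficient_attention API.
import torch
from memory_efficient_attention import efficient_dot_product_attention_pt

def causal_mask_chunk(query_offset, key_offset, q_chunk, k_chunk):
    # Build only the (q_chunk, k_chunk) slice of a causal mask, so memory for the
    # mask stays proportional to the chunk size rather than the full sequence length.
    q_idx = torch.arange(query_offset, query_offset + q_chunk).unsqueeze(-1)
    k_idx = torch.arange(key_offset, key_offset + k_chunk).unsqueeze(0)
    return q_idx >= k_idx  # bool tensor of shape (q_chunk, k_chunk)

query = torch.rand(1, 1024, 8, 64)
key = torch.rand(1, 1024, 8, 64)
value = torch.rand(1, 1024, 8, 64)

# Assumed usage: the callback is invoked once per (query chunk, key chunk) pair.
out = efficient_dot_product_attention_pt(
    query, key, value,
    mask_calc_fn=causal_mask_chunk,
    query_chunk_size=128,
    key_chunk_size=128,
)
```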

Full Changelog: 0.1.1...0.1.1.0

0.1.0

18 Dec 20:37
feature: github publish workflow