Module fast_transformers
Provide a library with fast transformer implementations.
#
# Copyright (c) 2020 Idiap Research Institute, http://www.idiap.ch/
# Written by Angelos Katharopoulos <angelos.katharopoulos@idiap.ch>,
# Apoorv Vyas <avyas@idiap.ch>
#
"""Provide a library with fast transformer implementations."""
__author__ = "Angelos Katharopoulos, Apoorv Vyas"
__copyright__ = "Copyright (c) 2020 Idiap Research Institute"
__license__ = "MIT"
__maintainer__ = "Angelos Katharopoulos, Apoorv Vyas"
__email__ = "angelos.katharopoulos@idiap.ch, avyas@idiap.ch"
__url__ = "https://github.com/idiap/fast-transformers"
__version__ = "0.3.0"
Sub-modules
fast_transformers.aggregate
fast_transformers.attention - Implementations of different types of attention mechanisms.
fast_transformers.attention_registry - Allow for the dynamic registration of new attention implementations …
fast_transformers.bucket_product
fast_transformers.builders - This module implements builders that simplify building complex transformer architectures with different attention mechanisms …
fast_transformers.causal_product
fast_transformers.clustering
fast_transformers.events - This module implements a basic event system that lets the transformer's internal components expose any tensor with minimal overhead.
fast_transformers.feature_maps - Implementations of feature maps to be used with linear attention and causal linear attention.
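The idea behind feature maps is that replacing the softmax with a kernel feature map φ lets attention be computed as φ(Q)(φ(K)ᵀV) in time linear in the sequence length. A minimal NumPy sketch of this, using the common elu(x)+1 feature map (function names here are illustrative, not the library's API):

```python
import numpy as np

def elu_plus_one(x):
    # elu(x) + 1: a strictly positive feature map, a common choice
    # for linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    # Compute phi(Q) @ (phi(K)^T @ V), normalized by phi(Q) @ sum_j phi(k_j).
    # Associativity lets us avoid the N x N attention matrix entirely.
    Qf, Kf = elu_plus_one(Q), elu_plus_one(K)   # (N, D)
    KV = Kf.T @ V                               # (D, M): O(N) in sequence length
    Z = Qf @ Kf.sum(axis=0) + eps               # (N,) normalizer
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(0)
N, D, M = 8, 4, 4
Q = rng.normal(size=(N, D))
K = rng.normal(size=(N, D))
V = rng.normal(size=(N, M))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

The key design point is that `Kf.T @ V` is computed once, so the cost is O(N·D·M) rather than the O(N²) of explicit softmax attention.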
fast_transformers.hashing
fast_transformers.local_product
fast_transformers.masking - Create types of masks to be used in various places in transformers …
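The most common mask is the triangular causal mask, which forbids each position from attending to future positions. A NumPy sketch of the idea, applying an additive mask before the softmax (the helper names are illustrative, not this module's API):

```python
import numpy as np

def causal_additive_mask(n):
    # Allowed (past and current) positions get 0, future positions -inf,
    # so exp() zeroes them out in the softmax
    mask = np.full((n, n), -np.inf)
    mask[np.tril_indices(n)] = 0.0
    return mask

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

scores = np.random.default_rng(2).normal(size=(4, 4))
weights = softmax(scores + causal_additive_mask(4))
print(weights.round(3))
```

Every row still sums to 1, while the strictly upper triangle of `weights` is exactly zero.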
fast_transformers.recurrent - Implementations of transformers as recurrent functions.
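With a feature map φ, causal linear attention can be evaluated step by step like an RNN, carrying a running state S_i = Σ_{j≤i} φ(k_j)v_jᵀ and normalizer z_i = Σ_{j≤i} φ(k_j). A NumPy sketch checking that this recurrence matches the explicit causally-masked computation (all names are illustrative, not this module's API):

```python
import numpy as np

def phi(x):
    return np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 feature map

def causal_recurrent(Q, K, V, eps=1e-6):
    # One step at a time, carrying:
    #   S: sum of outer products phi(k_j) v_j^T  (D x M)
    #   z: sum of phi(k_j)                       (D,)
    N, D = Q.shape
    M = V.shape[1]
    S = np.zeros((D, M))
    z = np.zeros(D)
    out = np.empty((N, M))
    for i in range(N):
        qf, kf = phi(Q[i]), phi(K[i])
        S += np.outer(kf, V[i])
        z += kf
        out[i] = (qf @ S) / (qf @ z + eps)
    return out

def causal_batch(Q, K, V, eps=1e-6):
    # Reference: explicit kernel weights with future positions zeroed out
    W = np.tril(phi(Q) @ phi(K).T)
    return (W @ V) / (W.sum(axis=1, keepdims=True) + eps)

rng = np.random.default_rng(1)
Q = rng.normal(size=(6, 3))
K = rng.normal(size=(6, 3))
V = rng.normal(size=(6, 3))
out_rec = causal_recurrent(Q, K, V)
out_ref = causal_batch(Q, K, V)
print(np.allclose(out_rec, out_ref))  # True
```

This is why a linear-attention transformer can generate autoregressively with constant memory per step: only `S` and `z` need to be kept between steps.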
fast_transformers.sparse_product
fast_transformers.transformers - Implement transformer encoders and decoders that are going to be used with different attention mechanisms …
fast_transformers.utils - Boilerplate code for dealing with fast_transformers modules.
fast_transformers.weight_mapper - The weight mapper module provides a utility to load transformer model weights from other implementations to a fast_transformers model …