

Transformer


A model architecture at the core of most state-of-the-art (SOTA) ML research. It is composed of multiple “attention” layers, which learn which parts of the input are most important for a given task. Transformers originated in language modeling and have since expanded into computer vision, audio, and other modalities.
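For intuition, here is a minimal sketch of the scaled dot-product attention that these layers are built on (a single head, NumPy only; the matrix names W_q, W_k, W_v and the toy shapes are illustrative assumptions, not from the source):

```python
# Minimal single-head scaled dot-product self-attention (illustrative sketch).
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """X: (seq_len, d_model) token embeddings; W_*: (d_model, d_k) learned projections."""
    Q = X @ W_q                               # queries: what each token is looking for
    K = X @ W_k                               # keys: what each token offers
    V = X @ W_v                               # values: the information to be mixed
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance, scaled by sqrt(d_k)
    weights = softmax(scores, axis=-1)        # each row sums to 1: importance over inputs
    return weights @ V                        # per-token weighted sum of values

# Toy usage (hypothetical sizes): 4 tokens, 8-dim embeddings, d_k = 8.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
W_q, W_k, W_v = (rng.standard_normal((8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 8)
```

The attention weights are what let the model learn, per token, which other parts of the input matter most; a full transformer stacks many such layers (with multiple heads, residual connections, and feed-forward sublayers) on top of this core operation.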

 
