Pretrained Transformers as Universal Computation Engines

Authors: Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch

License: CC BY 4.0

Abstract: We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning -- in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language improves performance and compute efficiency on non-language downstream tasks. In particular, we find that such pretraining enables FPT to generalize in zero-shot to these modalities, matching the performance of a transformer fully trained on these tasks.

Submitted to arXiv on 09 Mar. 2021
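
The abstract describes freezing the self-attention and feedforward layers of a language-pretrained transformer while finetuning only thin, modality-specific layers. The following is a minimal sketch of that setup, assuming a HuggingFace GPT-2 backbone; the class name FrozenPretrainedTransformer, the parameters input_dim and num_classes, the decision to leave layer norms trainable, and the last-position classification head are illustrative assumptions, not the paper's exact configuration.

# Sketch of the Frozen Pretrained Transformer (FPT) idea: freeze the
# self-attention and feedforward (MLP) weights of every residual block in a
# language-pretrained GPT-2, and train only a new input projection and output
# head for the target modality. Layer norms are left trainable here.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, input_dim: int, num_classes: int, model_name: str = "gpt2"):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained(model_name)
        hidden = self.backbone.config.n_embd

        # Freeze self-attention and feedforward weights in every residual block.
        for block in self.backbone.h:
            for p in block.attn.parameters():
                p.requires_grad = False
            for p in block.mlp.parameters():
                p.requires_grad = False

        # Trainable input projection and output head for the new modality.
        self.input_proj = nn.Linear(input_dim, hidden)
        self.output_head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) tokens from the non-language task.
        h = self.input_proj(x)
        h = self.backbone(inputs_embeds=h).last_hidden_state
        # Classify from the final sequence position (one reasonable choice).
        return self.output_head(h[:, -1, :])


if __name__ == "__main__":
    model = FrozenPretrainedTransformer(input_dim=16, num_classes=2)
    dummy = torch.randn(4, 32, 16)  # batch of 4 sequences of length 32
    print(model(dummy).shape)       # torch.Size([4, 2])

Only the parameters with requires_grad left True (input projection, output head, and the backbone's layer norms in this sketch) would be passed to the optimizer, which is what makes the finetuning "minimal" in the sense the abstract describes.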
