Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks

Authors

  • Suchin Gururangan
  • Ana Marasović
  • Swabha Swayamdipta
  • Kyle Lo
  • Iz Beltagy
  • Doug Downey
  • Noah A. Smith
  • ACL 2020

Abstract

Language models pretrained on text from a wide variety of sources form the foundation of today’s NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task’s unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multi-phase adaptive pretraining offers large gains in task performance.

1 Introduction

Today's pretrained language models are trained on massive, heterogeneous corpora (Raffel et al., 2019; Yang et al., 2019). For instance, ROBERTA was trained on over 160GB of uncompressed text, with sources ranging from English-language encyclopedic and news articles, to literary works and web content. Representations learned by such models achieve strong performance across many tasks with datasets of varying sizes drawn from a variety of sources (e.g., Wang et al., 2018). This leads us to ask whether a task's textual domain (a term typically used to denote a distribution over language characterizing a given topic or genre, such as "science" or "mystery novels") is still relevant. Do the latest large pretrained models work universally, or is it still helpful to build separate pretrained models for specific domains?

Figure 1: An illustration of data distributions. Task data is comprised of an observable task distribution, usually non-randomly sampled from a wider distribution (light grey ellipse) within an even larger target domain, which is not necessarily one of the domains included in the original LM pretraining domain – though overlap is possible. We explore the benefits of continued pretraining on data from the task distribution and the domain distribution.

While some studies have shown the benefit of continued pretraining on domain-specific unlabeled data, these studies only consider a single domain at a time and use a language model that is pretrained on a smaller and less diverse corpus than the most recent language models. Moreover, it is not known how the benefit of continued pretraining may vary with factors like the amount of available labeled task data, or the proximity of the target domain to the original pretraining corpus (see Figure 1).

We address this question for one such high-performing model, ROBERTA (Liu et al., 2019) (§2). We consider four domains (biomedical and computer science publications, news, and reviews; §3) and eight classification tasks (two in each domain). For targets that are not already in-domain for ROBERTA, our experiments show that continued pretraining on the domain (which we refer to as domain-adaptive pretraining or DAPT) consistently improves performance on tasks from the target domain, in both high- and low-resource settings.

Above, we consider domains defined around genres and forums, but it is also possible to induce a domain from a given corpus used for a task, such as the one used in supervised training of a model. This raises the question of whether pretraining on a corpus more directly tied to the task can further improve performance. We study how domain-adaptive pretraining compares to task-adaptive pretraining, or TAPT, on a smaller but directly task-relevant corpus: the unlabeled task dataset (§4), drawn from the task distribution. Task-adaptive pretraining has been shown effective (Howard and Ruder, 2018), but is not typically used with the most recent models. We find that TAPT provides a large performance boost for ROBERTA, with or without domain-adaptive pretraining.

Finally, we show that the benefits from task-adaptive pretraining increase when we have additional unlabeled data from the task distribution that has been manually curated by task designers or annotators. Inspired by this success, we propose ways to automatically select additional task-relevant unlabeled text, and show how this improves performance in certain low-resource cases (§5). On all tasks, our results using adaptive pretraining techniques are competitive with the state of the art.

In summary, our contributions include:

  • a thorough analysis of domain- and task-adaptive pretraining across four domains and eight tasks, spanning low- and high-resource settings;
  • an investigation into the transferability of adapted LMs across domains and tasks; and
  • a study highlighting the importance of pretraining on human-curated datasets, and a simple data selection strategy to automatically approach this performance.

Our code as well as pretrained models for multiple domains and tasks are publicly available.

2 Background: Pretraining

Learning for most NLP research systems since 2018 consists of training in two stages. First, a neural language model (LM), often with millions of parameters, is trained on large unlabeled corpora. The word (or wordpiece; Wu et al. 2016) representations learned in the pretrained model are then reused in supervised training for a downstream task, with optional updates (fine-tuning) of the representations and network from the first stage.

One such pretrained LM is ROBERTA (Liu et al., 2019), which uses the same transformer-based architecture (Vaswani et al., 2017) as its predecessor, BERT (Devlin et al., 2019). It is trained with a masked language modeling objective (i.e., cross-entropy loss on predicting randomly masked tokens). The unlabeled pretraining corpus for ROBERTA contains over 160 GB of uncompressed raw text from different English-language corpora (see Appendix §A.1). ROBERTA attains better performance on an assortment of tasks than its predecessors, making it our baseline of choice.
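To make the objective concrete, here is a minimal sketch of a single masked-LM loss computation with an off-the-shelf roberta-base checkpoint from the HuggingFace transformers library (which the implementation in this paper also uses; see Appendix B). The example sentence is invented, and the masking is simplified to always substitute the mask token rather than RoBERTa's mask/random/keep replacement scheme.

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForMaskedLM.from_pretrained("roberta-base")

    text = "Domain-adaptive pretraining continues masked language modeling on in-domain text."
    enc = tokenizer(text, return_tensors="pt")
    labels = enc["input_ids"].clone()

    # Choose ~15% of non-special tokens to mask; positions left at -100 are ignored,
    # so the cross-entropy loss is computed only on the masked tokens.
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(labels[0].tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    )
    probs = torch.full(labels[0].shape, 0.15)
    probs[special] = 0.0
    masked = torch.bernoulli(probs).bool()
    if not masked.any():                        # guard for very short inputs
        masked[1] = True
    enc["input_ids"][0, masked] = tokenizer.mask_token_id   # simplified: always use <mask>
    labels[0, ~masked] = -100

    loss = model(**enc, labels=labels).loss     # masked LM cross-entropy
    print(float(loss))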

Although ROBERTA's pretraining corpus is derived from multiple sources, it has not yet been established whether these sources are diverse enough to generalize to most of the variation in the English language. In other words, we would like to understand what is out of ROBERTA's domain. Towards this end, we explore further adaptation by continued pretraining of this large LM on two categories of unlabeled data: (i) large corpora of domain-specific text (§3), and (ii) available unlabeled data associated with a given task (§4).

3 Domain-Adaptive Pretraining

Our approach to domain-adaptive pretraining (DAPT) is straightforward: we continue pretraining ROBERTA on a large corpus of unlabeled domain-specific text. The four domains we focus on are biomedical (BIOMED) papers, computer science (CS) papers, news text from REALNEWS, and AMAZON reviews. We choose these domains because they have been popular in previous work, and datasets for text classification are available in each. Table 1 lists the specifics of the unlabeled datasets in all four domains, as well as ROBERTA's training corpus.

Table 1: List of the domain-specific unlabeled datasets. In columns 5 and 6, we report ROBERTA’s masked LM loss on 50K randomly sampled held-out documents from each domain before (LROB.) and after (LDAPT) DAPT (lower implies a better fit on the sample). ‡ indicates that the masked LM loss is estimated on data sampled from sources similar to ROBERTA’s pretraining corpus.

3.1 Analyzing Domain Similarity

Before performing DAPT, we attempt to quantify the similarity of the target domain to ROBERTA's pretraining domain. We consider domain vocabularies containing the top 10K most frequent unigrams (excluding stopwords) in comparably sized random samples of held-out documents in each domain's corpus. We use 50K held-out documents for each domain other than REVIEWS, and 150K held-out documents in REVIEWS, since they are much shorter. We also sample 50K documents from sources similar to ROBERTA's pretraining corpus (i.e., BOOKCORPUS, STORIES, WIKIPEDIA, and REALNEWS) to construct the pretraining domain vocabulary, since the original pretraining corpus is not released. Figure 2 shows the vocabulary overlap across these samples. We observe that ROBERTA's pretraining domain has strong vocabulary overlap with NEWS and REVIEWS, while CS and BIOMED are far more dissimilar to the other domains. This simple analysis suggests the degree of benefit to be expected by adaptation of ROBERTA to different domains-the more dissimilar the domain, the higher the potential for DAPT.

Figure 2: Vocabulary overlap (%) between domains. PT denotes a sample from sources similar to ROBERTA’s pretraining corpus. Vocabularies for each domain are created by considering the top 10K most frequent words (excluding stopwords) in documents sampled from each domain.
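As a rough, self-contained illustration of this similarity analysis (not the authors' exact script), the sketch below builds a top-10K unigram vocabulary per domain sample, excluding stopwords, and reports the overlap between two samples. The whitespace tokenizer, the tiny stopword list, and the intersection-over-union normalization are simplifying assumptions.

    from collections import Counter

    # Illustrative stopword list; any standard English stopword list would do.
    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
                 "for", "on", "it", "that", "this", "with", "as", "was", "be"}

    def domain_vocab(documents, size=10_000):
        """Top-`size` most frequent unigrams (stopwords excluded) in a document sample."""
        counts = Counter()
        for doc in documents:
            counts.update(tok for tok in doc.lower().split()
                          if tok.isalpha() and tok not in STOPWORDS)
        return {word for word, _ in counts.most_common(size)}

    def vocab_overlap(docs_a, docs_b):
        """Overlap (%) between two domain vocabularies, as intersection over union."""
        va, vb = domain_vocab(docs_a), domain_vocab(docs_b)
        return 100.0 * len(va & vb) / len(va | vb)

    # Usage: vocab_overlap(biomed_sample, reviews_sample), where each argument is a
    # list of raw document strings sampled from the corresponding domain.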

3.2 Experiments

Our LM adaptation follows the settings prescribed for training ROBERTA. We train ROBERTA on each domain for 12.5K steps, which amounts to a single pass over each domain dataset, on a v3-8 TPU; see other details in Appendix B. This second phase of pretraining results in four domain-adapted LMs, one for each domain. We present the masked LM loss of ROBERTA on each domain before and after DAPT in Table 1. We observe that the masked LM loss decreases in all domains after DAPT, except for NEWS, where we observe a marginal increase. We discuss cross-domain masked LM loss in Appendix §E.
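A continued-pretraining loop in this spirit can be sketched with the HuggingFace transformers and datasets libraries; this is not the released training setup, the file path and sequence length are placeholders, and the batch-size/accumulation split should be tuned so that the product over devices matches the effective batch size of 2048 reported in Appendix B.

    from datasets import load_dataset
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForMaskedLM.from_pretrained("roberta-base")

    # Hypothetical file: one unlabeled in-domain document (or sentence) per line.
    corpus = load_dataset("text", data_files={"train": "biomed_unlabeled.txt"})["train"]

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

    # Dynamic 15% masking, re-drawn for every batch.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

    args = TrainingArguments(
        output_dir="dapt-biomed",
        max_steps=12_500,                  # roughly one pass over the domain corpus
        per_device_train_batch_size=8,
        gradient_accumulation_steps=32,    # devices x batch x accumulation ~ 2048
        learning_rate=5e-4,
        warmup_ratio=0.06,
    )

    Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()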

Under each domain, we consider two text classification tasks, as shown in Table 2. Our tasks represent both high- and low-resource (≤ 5K labeled training examples, and no additional unlabeled data) settings. For HYPERPARTISAN, we use the data splits from Beltagy et al. (2020). For RCT, we represent all sentences in one long sequence for simultaneous prediction.

Table 2: Specifications of the various target task datasets. † indicates high-resource settings. Sources: CHEMPROT (Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017), ACL-ARC (Jurgens et al., 2018), SCIERC (Luan et al., 2018), HYPERPARTISAN (Kiesel et al., 2019), AGNEWS (Zhang et al., 2015), HELPFULNESS (McAuley et al., 2015), IMDB (Maas et al., 2011).

Baseline As our baseline, we use an off-the-shelf ROBERTA-base model and perform supervised fine-tuning of its parameters for each classification task. On average, ROBERTA is not drastically behind the state of the art (details in Appendix §A.2), and serves as a good baseline since it provides a single LM to adapt to different domains.

Table 3: Comparison of ROBERTA (ROBA.) and DAPT to adaptation to an irrelevant domain (¬ DAPT). Reported results are test macro-F1, except for CHEMPROT and RCT, for which we report micro-F1, following Beltagy et al. (2019). We report averages across five random seeds, with standard deviations as subscripts. † indicates high-resource settings. Best task performance is boldfaced. See §3.3 for our choice of irrelevant domains.
Table 4: Examples that illustrate how some domains might have overlaps with others, leading to unexpected positive transfer. We highlight expressions in the reviews that are also found in the REALNEWS articles.
Table 5: Results on different phases of adaptive pretraining compared to the baseline ROBERTA (col. 1). Our approaches are DAPT (col. 2, §3), TAPT (col. 3, §4), and a combination of both (col. 4). Reported results follow the same format as Table 3. State-of-the-art results we can compare to: CHEMPROT (84.6), RCT (92.9), ACL-ARC (71.0), SCIERC (81.8), HYPERPARTISAN (94.8), AGNEWS (95.5), IMDB (96.2); references in §A.2.
Table 6: Though TAPT is effective (Table 5), it is harmful when applied across tasks. These findings illustrate differences in task distributions within a domain.
Table 7: Mean test set macro-F1 (for HYP. and IMDB) and micro-F1 (for RCT-500), with CuratedTAPT across five random seeds, with standard deviations as subscripts. † indicates high-resource settings.
Table 8: Mean test set micro-F1 (for CHEMPROT and RCT) and macro-F1 (for ACL-ARC), across five random seeds, with standard deviations as subscripts, comparing RAND-TAPT (with 50 candidates) and kNN-TAPT selection. Neighbors of the task data are selected from the domain data.
Table 9: Computational requirements for adapting to the RCT-500 task, comparing DAPT (§3) and the various TAPT modifications described in §4 and §5.
Table 10: Summary of strategies for multi-phase pretraining explored in this paper.
Table 11: Overview of prior work across strategies for continued pre-training summarized in Table 10. ULMFIT is pretrained on English Wikipedia; ULMFIT† on English tweets; ELMO on the 1BWORDBENCHMARK (newswire; Chelba et al., 2014); GPT on BOOKCORPUS; BERT on English Wikipedia and BOOKCORPUS. In comparison to these pretraining corpora, ROBERTA’s pretraining corpus is substantially more diverse (see Appendix §A.1).
Table 12: ROBERTA’s (row 1) and domain-adapted ROBERTA’s (rows 2–5) masked LM loss on randomly sampled held-out documents from each domain (lower implies a better fit). PT denotes a sample from sources similar to ROBERTA’s pretraining corpus. The lowest masked LM for each domain sample is boldfaced.
Table 13: Hyperparameters for domain- and task- adaptive pretraining.

Classification Architecture Following standard practice, we pass the final-layer [CLS] token representation to a task-specific feedforward layer for prediction (see Table 14 in the Appendix for more hyperparameter details).

Table 14: Hyperparameters for ROBERTA text classifier.
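The classifiers in this paper are built with AllenNLP; the following equivalent PyTorch sketch just shows the idea of a task-specific feedforward layer on top of the final-layer start-of-sequence token (RoBERTa's <s>, playing the role of [CLS]). The dropout value is an assumption, not a reported hyperparameter.

    import torch.nn as nn
    from transformers import AutoModel

    class RobertaClassifier(nn.Module):
        """RoBERTa encoder with a task-specific feedforward head on the <s> token."""

        def __init__(self, num_labels, model_name="roberta-base", dropout=0.1):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)
            hidden = self.encoder.config.hidden_size
            self.head = nn.Sequential(nn.Dropout(dropout), nn.Linear(hidden, num_labels))

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]   # final-layer representation of the first token
            return self.head(cls)               # logits for the task's label set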

Results Test results are shown under the DAPT column of Table 3. In BIOMED, CS, and REVIEWS, we observe consistent improvements over ROBERTA, demonstrating the benefit of DAPT when the target domain is more distant from ROBERTA's source domain. The pattern is consistent across high- and low-resource settings. Although DAPT does not increase performance on AGNEWS, the benefit we observe in HYPERPARTISAN suggests that DAPT may be useful even for tasks that align more closely with ROBERTA's source domain.

3.3 Domain Relevance for DAPT

Additionally, we compare DAPT against a setting where for each task, we adapt the LM to a domain outside the domain of interest. This controls for the case in which the improvements over ROBERTA might be attributed simply to exposure to more data, regardless of the domain. In this setting, for NEWS, we use a CS LM; for REVIEWS, a BIOMED LM; for CS, a NEWS LM; for BIOMED, a REVIEWS LM. We use the vocabulary overlap statistics in Figure 2 to guide these choices.

Our results are shown in Table 3 , where the last column (¬DAPT) corresponds to this setting. For each task, DAPT significantly outperforms adapting to an irrelevant domain, suggesting the importance of pretraining on domain-relevant data. Furthermore, we generally observe that ¬DAPT results in worse performance than even ROBERTA on end-tasks. Taken together, these results indicate that in most settings, exposure to more data without considering domain relevance is detrimental to end-task performance. However, there are two tasks (SCIERC and ACL-ARC) in which ¬DAPT marginally improves performance over ROBERTA. This may suggest that in some cases, continued pretraining on any additional data is useful, as noted in Baevski et al. (2019) .

3.4 Domain Overlap

Our analysis of DAPT is based on prior intuitions about how task data is assigned to specific domains. For instance, to perform DAPT for HELPFULNESS, we only adapt to AMAZON reviews, but not to any REALNEWS articles. However, the gradations in Figure 2 suggest that the boundaries between domains are in some sense fuzzy; for example, 40% of unigrams are shared between REVIEWS and NEWS. As further indication of this overlap, we also qualitatively identify documents that overlap across domains: in Table 4, we showcase reviews and REALNEWS articles that are similar to these reviews (other examples can be found in Appendix §D). Although this analysis is by no means comprehensive, it indicates that the factors that give rise to observable domain differences are likely not mutually exclusive. It is possible that pretraining beyond conventional domain boundaries could result in more effective DAPT; we leave this investigation to future work. In general, the provenance of data, including the processes by which corpora are curated, must be kept in mind when designing pretraining procedures and creating new benchmarks that test out-of-domain generalization abilities.

4 Task-Adaptive Pretraining

Datasets curated to capture specific tasks of interest tend to cover only a subset of the text available within the broader domain. For example, the CHEMPROT dataset for extracting relations between chemicals and proteins focuses on abstracts of recently published, high-impact articles from hand-selected PubMed categories (Krallinger et al., 2017, 2015). We hypothesize that in such cases, where the task data is a narrowly defined subset of the broader domain, pretraining on the task dataset itself or on data relevant to the task may be helpful.

Task-adaptive pretraining (TAPT) refers to pretraining on the unlabeled training set for a given task; prior work has shown its effectiveness (e.g., Howard and Ruder, 2018). Compared to domain-adaptive pretraining (DAPT; §3), the task-adaptive approach strikes a different trade-off: it uses a far smaller pretraining corpus, but one that is much more task-relevant (under the assumption that the training set represents aspects of the task well). This makes TAPT much less expensive to run than DAPT, and as we show in our experiments, the performance of TAPT is often competitive with that of DAPT.

4.1 Experiments

Similar to DAPT, task-adaptive pretraining consists of a second phase of pretraining ROBERTA, but only on the available task-specific training data. In contrast to DAPT, which we train for 12.5K steps, we perform TAPT for 100 epochs. We artificially augment each dataset by randomly masking different words (with a masking probability of 0.15) across epochs. As in our DAPT experiments, we pass the final-layer [CLS] token representation to a task-specific feedforward layer for classification (see Table 14 in the Appendix for more hyperparameter details).
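The augmentation amounts to re-drawing the random mask on every pass over the task data, so each epoch sees a different masked view of the same examples. A minimal sketch of one such masking step is below; as in the earlier sketch, it is simplified to always insert the mask token rather than RoBERTa's mask/random/keep scheme.

    import torch

    def dynamic_mask(input_ids, mask_token_id, special_token_ids, mlm_probability=0.15):
        """Draw a fresh random mask over a batch of task examples (call once per epoch)."""
        labels = input_ids.clone()
        probs = torch.full(labels.shape, mlm_probability)
        is_special = torch.isin(input_ids, torch.tensor(special_token_ids))
        probs.masked_fill_(is_special, 0.0)    # never mask <s>, </s>, <pad>, ...
        masked = torch.bernoulli(probs).bool()
        labels[~masked] = -100                 # loss is computed only on masked positions
        corrupted = input_ids.clone()
        corrupted[masked] = mask_token_id      # simplified: always substitute <mask>
        return corrupted, labels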

Our results are shown in the TAPT column of Table 5. TAPT consistently improves over the ROBERTA baseline for all tasks across domains. Even on the news domain, which was part of the ROBERTA pretraining corpus, TAPT improves over ROBERTA, showcasing the advantage of task adaptation. Particularly remarkable are the relative differences between TAPT and DAPT. DAPT is more resource intensive (see Table 9 in §5.3), but TAPT manages to match its performance on some of the tasks, such as SCIERC. On RCT, HYPERPARTISAN, AGNEWS, HELPFULNESS, and IMDB, the results even exceed those of DAPT, highlighting the efficacy of this cheaper adaptation technique.

Combined DAPT and TAPT We investigate the effect of using both adaptation techniques together. We begin with ROBERTA and apply DAPT and then TAPT. The three phases of pretraining add up to make this the most computationally expensive of all our settings (see Table 9). As expected, combined domain- and task-adaptive pretraining achieves the best performance on all tasks (Table 5).

Overall, our results show that DAPT followed by TAPT achieves the best of both worlds of domain and task awareness, yielding the best performance. While we speculate that TAPT followed by DAPT would be susceptible to catastrophic forgetting of the task-relevant corpus (Yogatama et al., 2019) , alternate methods of combining the procedures may result in better downstream performance. Future work may explore pretraining with a more sophisticated curriculum of domain and task distributions.

Cross-Task Transfer

We complete the comparison between DAPT and TAPT by exploring whether adapting to one task transfers to other tasks in the same domain. For instance, we further pretrain the LM using the RCT unlabeled data, fine-tune it with the CHEMPROT labeled data, and observe the effect. We refer to this setting as Transfer-TAPT. Our results for tasks in all four domains are shown in Table 6 . We see that TAPT optimizes for single task performance, to the detriment of cross-task transfer. These results demonstrate that data distributions of tasks within a given domain might differ. Further, this could also explain why adapting only to a broad domain is not sufficient, and why TAPT after DAPT is effective.

5 Augmenting Training Data for Task-Adaptive Pretraining

In §4, we continued pretraining the LM for task adaptation using only the training data for a supervised task. Inspired by the success of TAPT, we next investigate another setting where a larger pool of unlabeled data from the task distribution exists, typically curated by humans. We explore two scenarios. First, for three tasks (RCT, HYPERPARTISAN, and IMDB) we use this larger pool of unlabeled data from an available human-curated corpus ( §5.1). Next, we explore retrieving related unlabeled data for TAPT, from a large unlabeled in-domain corpus, for tasks where extra human-curated data is unavailable ( §5.2).

5.1 Human Curated-TAPT

Dataset creation often involves collection of a large unlabeled corpus from known sources. This corpus is then downsampled to collect annotations, based on the annotation budget. The larger unlabeled corpus is thus expected to have a similar distribution to the task's training data. Moreover, it is usually available. We explore the role of such corpora in task-adaptive pretraining.

Data We simulate a low-resource setting, RCT-500, by downsampling the training data of the RCT dataset to 500 examples (out of 180K available), and treat the rest of the training data as unlabeled. The HYPERPARTISAN shared task (Kiesel et al., 2019) has two tracks: low- and high-resource. We use 5K documents from the high-resource setting as Curated-TAPT unlabeled data and the original low-resource training documents for task fine-tuning. For IMDB, we use the extra unlabeled data manually curated by task annotators, drawn from the same distribution as the labeled data (Maas et al., 2011).
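One simple way to set up the RCT-500 simulation (a sketch, not the authors' preprocessing code; the field name and seed are placeholders) is to keep a fixed-size labeled subset for fine-tuning and strip labels from the remaining training examples to form the Curated-TAPT pool:

    import random

    def simulate_low_resource(train_examples, n_labeled=500, seed=0):
        """Split a labeled training set into a small labeled subset (e.g., RCT-500)
        and a larger pool whose labels are discarded and used as unlabeled TAPT data."""
        rng = random.Random(seed)
        shuffled = list(train_examples)
        rng.shuffle(shuffled)
        labeled = shuffled[:n_labeled]
        unlabeled_texts = [ex["text"] for ex in shuffled[n_labeled:]]  # assumes a "text" field
        return labeled, unlabeled_texts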

Results

We compare Curated-TAPT to TAPT and DAPT + TAPT in Table 7. Curated-TAPT further improves our prior results from §4 across all three datasets. Applying Curated-TAPT after adapting to the domain results in the largest boost in performance on all tasks; in HYPERPARTISAN, DAPT + Curated-TAPT is within standard deviation of Curated-TAPT. Moreover, Curated-TAPT achieves 95% of the performance of DAPT + TAPT with the fully labeled RCT corpus (Table 5) using only 0.3% of the labeled data. These results suggest that curating large amounts of data from the task distribution is extremely beneficial to end-task performance. We recommend that task designers release a large pool of unlabeled task data for their tasks to aid model adaptation through pretraining.

Figure 3: An illustration of automated data selection (§5.2). We map unlabeled CHEMPROT and 1M BIOMED sentences to a shared vector space using the VAMPIRE model trained on these sentences. Then, for each CHEMPROT sentence, we identify k nearest neighbors, from the BIOMED domain.

5.2 Automated Data Selection for TAPT

Consider a low-resource scenario without access to large amounts of unlabeled data to adequately benefit from TAPT, and without the computational resources necessary for DAPT (see Table 9 for details of the computational requirements of different pretraining phases). We propose simple unsupervised methods to retrieve unlabeled text that aligns with the task distribution from a large in-domain corpus. Our approach finds task-relevant data from the domain by embedding text from both the task and the domain in a shared space, then selecting candidates from the domain based on queries using the task data. Importantly, the embedding method must be lightweight enough to embed possibly millions of sentences in a reasonable time. Given these constraints, we employ VAMPIRE (Figure 3), a lightweight bag-of-words language model. We pretrain VAMPIRE on a large deduplicated sample of the domain (1M sentences) to obtain embeddings of the text from both the task and the domain sample. We then select k candidates for each task sentence from the domain sample, in embedding space. Candidates are selected (i) via nearest-neighbor selection (kNN-TAPT), or (ii) randomly (RAND-TAPT). We continue pretraining ROBERTA on this augmented corpus, consisting of both the task data (as in TAPT) and the selected candidate pool.
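Assuming VAMPIRE embeddings for the task and domain sentences have already been computed as NumPy arrays, the candidate selection step can be sketched with FAISS as follows: a flat (exact) index with cosine similarity, as the paper notes, with the deduplication of the returned pool as a simplification.

    import numpy as np
    import faiss

    def knn_candidates(task_embeddings, domain_embeddings, k=50):
        """Indices of domain sentences that are among the k nearest neighbors
        (by cosine similarity) of any task sentence; added to the TAPT corpus."""
        task = np.ascontiguousarray(task_embeddings, dtype="float32")
        domain = np.ascontiguousarray(domain_embeddings, dtype="float32")
        faiss.normalize_L2(task)                          # cosine similarity becomes inner
        faiss.normalize_L2(domain)                        # product on unit-norm vectors
        index = faiss.IndexFlatIP(int(domain.shape[1]))   # flat (exact) search index
        index.add(domain)
        _, neighbors = index.search(task, k)              # shape: (num_task_sentences, k)
        return np.unique(neighbors)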

Results Results in Table 8 show that kNN-TAPT outperforms TAPT for all cases. RAND-TAPT is generally worse than kNN-TAPT, but within a standard deviation arising from 5 seeds for RCT and ACL-ARC. As we increase k, kNN-TAPT performance steadily increases, and approaches that of DAPT. Appendix F shows examples of nearest neighbors of task data. Future work might consider a closer study of kNN-TAPT, more sophisticated data selection methods, and the tradeoff between the diversity and task relevance of selected examples.

5.3 Computational Requirements

The computational requirements for all our adaptation techniques on RCT-500 in the BIOMED domain are shown in Table 9. TAPT is nearly 60 times faster to train than DAPT on a single v3-8 TPU, and storage requirements for DAPT on this task are 5.8M times that of TAPT. Our best setting of DAPT + TAPT amounts to three phases of pretraining, and at first glance appears to be very expensive. However, once the LM has been adapted to a broad domain, it can be reused for multiple tasks within that domain, with only a single additional TAPT phase per task. While Curated-TAPT tends to achieve the best cost-benefit ratio in this comparison, one must also take into account the cost of curating large in-domain data. Automatic methods such as kNN-TAPT are much cheaper than DAPT.

6 Related Work

Transfer learning for domain adaptation Prior work has shown the benefit of continued pretraining in domain (Alsentzer et al., 2019; Chakrabarty et al., 2019). We have contributed further investigation of the effects of a shift between a large, diverse pretraining corpus and the target domain on task performance. Other studies have trained language models (LMs) in their domain of interest, from scratch. In contrast, our work explores multiple domains, and is arguably more cost effective, since we continue pretraining an already powerful LM.

Task-adaptive pretraining Continued pretraining of an LM on the unlabeled data of a given task (TAPT) has been shown to be beneficial for end-task performance (e.g., Howard and Ruder, 2018; Phang et al., 2018; Sun et al., 2019). In the presence of domain shift between the train and test data distributions of the same task, domain-adaptive pretraining (DAPT) is sometimes used to describe what we term TAPT (Logeswaran et al., 2019; Han and Eisenstein, 2019). Related approaches include using language modeling as an auxiliary objective during task classifier fine-tuning (Chronopoulou et al., 2019; Radford et al., 2018) or considering simple syntactic structure of the input while adapting to task-specific data (Swayamdipta et al., 2019). We compare DAPT and TAPT, as well as their interplay, with respect to dataset size for continued pretraining (hence, expense of more rounds of pretraining), relevance to a data sample of a given task, and transferability to other tasks and datasets. See Table 11 in Appendix §A for a summary of multi-phase pretraining strategies from related work.

Training data used by each strategy (cf. Table 10):

    Strategy        Domain (Unlabeled)   Task (Unlabeled)   Task (Labeled)
    ROBERTA         –                    –                  ✓
    DAPT            ✓                    –                  ✓
    TAPT            –                    ✓                  ✓
    DAPT + TAPT     ✓                    ✓                  ✓
    kNN-TAPT        ✓ (subset)           ✓                  ✓
    Curated-TAPT    –                    ✓ (extra)          ✓

Data selection for transfer learning Selecting data for transfer learning has been explored in NLP (Moore and Lewis, 2010; Ruder and Plank, 2017; Zhang et al., 2019, among others). One line of work focuses on identifying the most suitable corpus to pretrain an LM from scratch for a single task (NER), whereas we select relevant examples for various tasks in §5.2. Concurrent to our work, Aharoni and Goldberg (2020) propose data selection methods for NMT based on cosine similarity in embedding space, using DISTILBERT (Sanh et al., 2019) for efficiency. In contrast, we use VAMPIRE and focus on augmenting TAPT data for text classification tasks. Khandelwal et al. (2020) introduced kNN-LMs, which allow easy domain adaptation of pretrained LMs by simply adding a datastore per domain with no further training, offering an alternative way to integrate domain information into an LM. Our study of human-curated data (§5.1) is related to focused crawling (Chakrabarti et al., 1999) for collection of suitable data, especially with LM reliance (Remus and Biemann, 2016).

What is a domain? Despite the popularity of domain adaptation techniques, most research and practice seems to use an intuitive understanding of domains. A small body of work has attempted to address this question (Lee, 2001; Eisenstein et al., 2014; van der Wees et al., 2015; Plank, 2016; Ruder et al., 2016, among others). For instance, Aharoni and Goldberg (2020) define domains by implicit clusters of sentence representations in pretrained LMs. Our results show that DAPT and TAPT complement each other, which suggests a spectrum of domains defined around tasks at various levels of granularity (e.g., Amazon reviews for a specific product, all Amazon reviews, all reviews on the web, the web).

7 Conclusion

We investigate several variations for adapting pretrained LMs to domains and tasks within those domains, summarized in Table 10. Our experiments reveal that even a model of hundreds of millions of parameters struggles to encode the complexity of a single textual domain, let alone all of language. We show that pretraining the model towards a specific task or small corpus can provide significant benefits. Our findings suggest it may be valuable to complement work on ever-larger LMs with parallel efforts to identify and use domain- and task-relevant corpora to specialize models. While our results demonstrate how these approaches can improve ROBERTA, a powerful LM, the approaches we studied are general enough to be applied to any pretrained LM. Our work points to numerous future directions, such as better data selection for TAPT, efficient adaptation of large pretrained LMs to distant domains, and building reusable language models after adaptation.

A Related Work

Table 11 shows which of the strategies for continued pretraining have already been explored in the prior work from the Related Work section (§6). As evident from the table, our work compares various strategies as well as their interplay using a pretrained language model trained on a much more heterogeneous pretraining corpus.

A.1 ROBERTA's Pretraining Corpus

ROBERTA was trained on data from BOOKCORPUS (Zhu et al., 2015; https://github.com/soskek/bookcorpus), WIKIPEDIA (https://github.com/google-research/bert), a portion of the CCNEWS dataset (Nagel, 2016; https://github.com/fhamborg/news-please), the OPENWEBTEXT corpus of Web content extracted from URLs shared on Reddit (Gokaslan and Cohen, 2019; https://github.com/jcpeterson/openwebtext), and a subset of CommonCrawl that is said to resemble the "story-like" style of WINOGRAD schemas (STORIES; Trinh and Le, 2018; https://github.com/tensorflow/models/tree/master/research/lm_commonsense).

A.2 State of the Art

In this section, we specify the models achieving state of the art on our tasks; see the caption of Table 5 for their reported performance. For ACL-ARC, it is SCIBERT (Beltagy et al., 2019), a BERT-base model trained from scratch on scientific text. For CHEMPROT and SCIERC, it is S2ORC-BERT (Lo et al., 2020), a model similar to SCIBERT. For AGNEWS and IMDB, it is XLNet-large, a much larger model. For HYPERPARTISAN, it is LONGFORMER, a modified Transformer language model for long documents (Beltagy et al., 2020). Thongtan and Phienthrakul (2019) report a higher number (97.42) on IMDB, but they train their word vectors on the test set. Our baseline establishes the first benchmark for the HELPFULNESS dataset.

B Experimental Setup

Preprocessing for DAPT The unlabeled corpus in each domain was preprocessed prior to language model training. Abstracts and body paragraphs from biomedical and computer science articles were used after sentence splitting with scispaCy (Neumann et al., 2019). We used the summaries and full text of each news article, and the entire body of each Amazon review. For both news and reviews, we perform sentence splitting using spaCy (Honnibal and Montani, 2017).
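For instance, sentence splitting for the news and reviews corpora can be sketched with spaCy as below; scispaCy's en_core_sci_sm model plays the same role for the biomedical and computer science articles (both pipelines are installed separately, and the model names here are the standard small English ones).

    import spacy

    # Standard small English pipeline for NEWS/REVIEWS; swap in scispaCy's
    # "en_core_sci_sm" for BIOMED and CS articles.
    nlp = spacy.load("en_core_web_sm")

    def split_into_sentences(document: str):
        """Return one sentence string per element, ready to be written one per line."""
        return [sent.text.strip() for sent in nlp(document).sents]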

Training details for DAPT We train ROBERTA on each domain for 12.5K steps. We focused on matching the domain dataset sizes (see Table 1) so that each domain exposes the model to a comparable amount of data over the 12.5K steps it is trained for. AMAZON reviews contain more documents, but each is shorter. We used an effective batch size of 2048 through gradient accumulation, following the settings prescribed for ROBERTA pretraining. See Table 13 for more hyperparameter details.

Training details for TAPT We use the same pretraining hyperparameters as for DAPT, but we artificially augmented each dataset for TAPT by randomly masking different tokens across epochs, using a masking probability of 0.15. We trained on each dataset for 100 epochs. For tasks with fewer than 5K examples, we used a batch size of 256 through gradient accumulation. See Table 13 for more hyperparameter details.

Optimization We used the Adam optimizer (Kingma and Ba, 2015) with a linear learning rate scheduler with 6% warm-up and a maximum learning rate of 0.0005. When we used a batch size of 256, we used a maximum learning rate of 0.0001. We observe high variance in performance between random seeds when fine-tuning ROBERTA on HYPERPARTISAN, because the dataset is extremely small. To produce final results on this task, we discard and resample degenerate seeds. We display the full hyperparameter settings in Table 13.
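These settings translate directly into a standard optimizer/scheduler pair; a sketch using PyTorch's Adam and the linear warm-up schedule from transformers is shown below (any additional hyperparameters from Table 13 are omitted).

    import torch
    from transformers import get_linear_schedule_with_warmup

    def build_optimizer(model, total_steps, max_lr=5e-4, warmup_frac=0.06):
        """Adam with a linear learning-rate schedule and 6% warm-up; use
        max_lr=1e-4 when training with the smaller batch size of 256."""
        optimizer = torch.optim.Adam(model.parameters(), lr=max_lr)
        scheduler = get_linear_schedule_with_warmup(
            optimizer,
            num_warmup_steps=int(warmup_frac * total_steps),
            num_training_steps=total_steps,
        )
        return optimizer, scheduler           # call scheduler.step() after each update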

Implementation Our LM implementation uses the HuggingFace transformers library (Wolf et al., 2019; https://github.com/huggingface/transformers) and PyTorch XLA for TPU compatibility (https://github.com/pytorch/xla). Each adaptive pretraining experiment was performed on a single v3-8 TPU from Google Cloud. For the text classification tasks, we used AllenNLP (Gardner et al., 2018). Following standard practice, we pass the final-layer [CLS] token representation to a task-specific feedforward layer for prediction.

C Development Set Results

Adhering to the standards suggested by Dodge et al., we report development set results for our experiments in Tables 15–19.

Table 15: Results on different phases of adaptive pretraining compared to the baseline ROBERTA (col. 1). Our approaches are DAPT (col. 2, §3), TAPT (col. 3, §4), and a combination of both (col. 4). Reported results are development macro-F1, except for CHEMPROT and RCT, for which we report micro-F1, following Beltagy et al. (2019). We report averages across five random seeds, with standard deviations as subscripts. † indicates high-resource settings. Best task performance is boldfaced. State-of-the-art results we can compare to: CHEMPROT (84.6), RCT (92.9), ACL-ARC (71.0), SCIERC (81.8), HYPERPARTISAN (94.8), AGNEWS (95.5), IMDB (96.2); references in §A.2.
Table 16: Development comparison of ROBERTA (ROBA.) and DAPT to adaptation to an irrelevant domain (¬ DAPT). See §3.3 for our choice of irrelevant domains. Reported results follow the same format as Table 5.
Table 17: Development results for TAPT transferability.
Table 18: Mean development set macro-F1 (for HYPERPARTISAN and IMDB) and micro-F1 (for RCT-500), with Curated-TAPT across five random seeds, with standard deviations as subscripts. † indicates high-resource settings.
Table 19: Mean development set macro-F1 (for HYP. and IMDB) and micro-F1 (for RCT), across five random seeds, with standard deviations as subscripts, comparing RAND-TAPT (with 50 candidates) and kNN-TAPT selection. Neighbors of the task data are selected from the domain data.

D Analysis of Domain Overlap

In Table 20 we display additional examples that highlight the overlap between IMDB reviews and REALNEWS articles, relevant for analysis in §3.1.

Table 20: Additional examples of overlapping reviews and REALNEWS articles (not reproduced here; see the original paper).

E Analysis of Cross-Domain Masked LM Loss

In §3.2, we provide ROBERTA's masked LM loss before and after DAPT. We display cross-domain masked LM loss in Table 12, where we evaluate masked LM loss on text samples in other domains after performing DAPT. We observe that the cross-domain masked LM loss mostly follows our intuition and insights from the paper, i.e., ROBERTA's pretraining corpus is closer to NEWS, and BIOMED is closer to CS (relative to other domains). However, our analysis in §3.1 illustrates that REVIEWS and NEWS also have some similarities. This is supported by the loss of ROBERTA adapted to NEWS, evaluated on a sample of REVIEWS. However, ROBERTA adapted to REVIEWS results in the highest loss on a NEWS sample; this is the case for samples from all other domains as well. One property that distinguishes REVIEWS from all other domains is that its documents are significantly shorter. In general, we find that cross-DAPT masked LM loss can in some cases be a noisy predictor of domain similarity.
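A sketch of how such a cross-domain loss matrix could be assembled: each adapted checkpoint is scored on held-out documents from every domain, with masking re-drawn by the data collator. The checkpoint paths and document lists are hypothetical placeholders, and the scores fluctuate slightly across runs because the mask is random.

    import torch
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling)

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

    @torch.no_grad()
    def masked_lm_loss(model_path, documents, batch_size=16):
        """Average masked LM loss of one checkpoint on one domain's held-out documents."""
        model = AutoModelForMaskedLM.from_pretrained(model_path).eval()
        losses = []
        for start in range(0, len(documents), batch_size):
            enc = tokenizer(documents[start:start + batch_size],
                            truncation=True, max_length=512)
            features = [{"input_ids": ids} for ids in enc["input_ids"]]
            batch = collator(features)          # pads, masks 15%, and builds labels
            losses.append(model(**batch).loss.item())
        return sum(losses) / len(losses)

    # e.g., {d: masked_lm_loss(f"dapt-{d}", heldout[d]) for d in ["biomed", "cs", "news", "reviews"]}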

F K-Nearest Neighbors Data Selection

In Table 21 , we display nearest neighbor documents in the BIOMED domain identified by our selection method, on the RCT dataset.

Table 21: 5 nearest neighbors of sentences from the RCT dataset (Source) in the BIOMED domain (Neighbors 0–4).

https://github.com/allenai/dont-stop-pretraining

For BIOMED and CS, we used an internal version of S2ORC that contains papers that cannot be released due to copyright restrictions.

Results on HYPERPARTISAN match those of TAPT, within a standard deviation arising from the five seeds.

We deduplicated this set to limit computation, since different sentences can share neighbors.

We use a flat search index with cosine similarity between embeddings, via the FAISS (Johnson et al., 2019) library.

In contrast, prior work finds that the Jensen-Shannon divergence on term distributions between BERT's pretraining corpora and each MULTINLI domain (Williams et al., 2018) does not predict its performance, though this might be an isolated finding specific to the MultiNLI dataset.

http://github.com/allenai/tpu-pretrain