BERT for Coreference Resolution: Baselines and Analysis


Abstract

We apply BERT to coreference resolution, achieving strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks. A qualitative analysis of model predictions indicates that, compared to ELMo and BERT-base, BERT-large is particularly better at distinguishing between related but distinct entities (e.g., President and CEO). However, there is still room for improvement in modeling document-level context, conversations, and mention paraphrasing. Our code and models are publicly available.[1]

1 Introduction

Recent BERT-based models have reported dramatic gains on multiple semantic benchmarks including question answering, natural language inference, and named entity recognition (Devlin et al., 2019). Apart from better bidirectional reasoning, one of BERT's major improvements over previous methods (Peters et al., 2018; McCann et al., 2017) is passage-level training,[2] which allows it to better model longer sequences.

We fine-tune BERT for coreference resolution, achieving strong improvements on the GAP (Webster et al., 2018) and OntoNotes (Pradhan et al., 2012) benchmarks. We present two ways of extending the c2f-coref model of Lee et al. (2018). The independent variant uses non-overlapping segments, each of which acts as an independent instance for BERT. The overlap variant splits the document into overlapping segments so as to provide the model with context beyond 512 tokens. BERT-large improves over ELMo-based c2f-coref by 3.9% on OntoNotes and by 11.5% on GAP (both absolute).

Table 1: OntoNotes: BERT improves the c2f-coref model on English by 0.9% and 3.9% respectively for base and large variants. The main evaluation is the average F1 of three metrics – MUC, B3, and CEAFφ4 on the test set.
Table 2: GAP: BERT improves the c2f-coref model by 11.5%. The metrics are F1 score on Masculine and Feminine examples, Overall, and a Bias factor (F / M).

A qualitative analysis of BERT and ELMo-based models (Table 3) suggests that BERT-large (unlike BERT-base) is remarkably better at distinguishing between related yet distinct entities or concepts (e.g., Repulse Bay and Victoria Harbor). However, both models often struggle to resolve coreferences for cases that require world knowledge (e.g., the developing story and the scandal). Likewise, modeling pronouns remains difficult, especially in conversations.

Table 3: Qualitative Analysis: #base and #large refer to the number of cluster-level errors on a subset of the OntoNotes English development set. Underlined and bold-faced mentions respectively indicate incorrect and correct coreference decisions.

We also find that BERT-large benefits from using longer context windows (384 word pieces), while BERT-base performs better with shorter contexts (128 word pieces). Yet, both variants perform much worse with longer context windows (512 tokens) in spite of being trained on 512-size contexts. Moreover, the overlap variant, which artificially extends the context window beyond 512 tokens, provides no improvement. This indicates that using larger context windows for pretraining might not translate into effective long-range features for a downstream task. Larger models also exacerbate the memory-intensive nature of span representations (Lee et al., 2017), which have driven recent improvements in coreference resolution. Together, these observations suggest that there is still room for improvement in modeling document-level context, conversations, and mention paraphrasing.

2 Method

For our experiments, we use the higher-order coreference model of Lee et al. (2018), which is the current state of the art for the English OntoNotes dataset (Pradhan et al., 2012). We refer to this model as c2f-coref in the paper.

2.1 Overview of c2f-coref

For each mention span x, the model learns a distribution P(·) over possible antecedent spans Y:

$$P(y) = \frac{e^{s(x,y)}}{\sum_{y' \in Y} e^{s(x,y')}} \qquad (1)$$

The scoring function s(x, y) between spans x and y uses fixed-length span representations, g_x and g_y, to represent its inputs. These consist of a concatenation of three vectors: the two LSTM states of the span endpoints and an attention vector computed over the span tokens. The score s(x, y) is the sum of three terms: the mention score of x (i.e., how likely the span x is to be a mention), the mention score of y, and the joint compatibility score of x and y (i.e., assuming they are both mentions, how likely x and y are to refer to the same entity). These components are computed as follows:

$$s(x, y) = s_m(x) + s_m(y) + s_c(x, y) \qquad (2)$$
$$s_m(x) = \text{FFNN}_m(g_x) \qquad (3)$$
$$s_c(x, y) = \text{FFNN}_c(g_x, g_y, \phi(x, y)) \qquad (4)$$

where FFNN(·) represents a feedforward neural network and φ(x, y) represents speaker and metadata features. These span representations are later refined using the antecedent distribution from a span-ranking architecture as an attention mechanism.
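To make the structure of the scorer concrete, the following is a minimal, hypothetical sketch of Equations (1)-(4), not the authors' implementation. It assumes the span representations g_x, g_y and the feature vector φ(x, y) are already computed; the one-hidden-layer FFNN and the element-wise product feature in the pair input follow common practice for this family of models rather than necessarily the exact original parameterization.

```python
import numpy as np

def ffnn(x, params):
    """One-hidden-layer feedforward scorer standing in for FFNN_m / FFNN_c."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return float(h @ W2 + b2)          # scalar score

def pairwise_score(g_x, g_y, phi_xy, mention_params, pair_params):
    """s(x, y) = s_m(x) + s_m(y) + s_c(x, y)  (Equations 2-4)."""
    s_m_x = ffnn(g_x, mention_params)                        # how likely x is a mention
    s_m_y = ffnn(g_y, mention_params)                        # how likely y is a mention
    pair_in = np.concatenate([g_x, g_y, g_x * g_y, phi_xy])  # joint compatibility input
    s_c = ffnn(pair_in, pair_params)
    return s_m_x + s_m_y + s_c

def antecedent_distribution(scores):
    """Softmax over candidate antecedent scores (Equation 1)."""
    scores = np.asarray(scores, dtype=np.float64)
    exps = np.exp(scores - scores.max())
    return exps / exps.sum()
```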

2.2 Applying BERT

We replace the entire LSTM-based encoder (which takes ELMo and GloVe embeddings as input) in c2f-coref with the BERT transformer. We treat the first and last word pieces of a span (concatenated with the attended version of all word pieces in the span) as its span representation. Documents are split into segments of at most max_segment_len word pieces, which we treat as a hyperparameter. We experiment with two variants of splitting:

Independent The independent variant uses non-overlapping segments, each of which acts as an independent instance for BERT. The representation for each token is therefore limited to the set of words that lie in its segment. As BERT is trained on sequences of at most 512 word pieces, this variant has limited encoding capacity, especially for tokens that lie at the start or end of their segments.

Overlap The overlap variant splits the document into overlapping segments by creating a T-sized segment after every T/2 tokens. These segments are then passed to the BERT encoder independently, and the final token representation is derived by element-wise interpolation of the representations from the two overlapping segments.

Let $r_1 \in \mathbb{R}^d$ and $r_2 \in \mathbb{R}^d$ be the token representations from the two overlapping BERT segments. The final representation $r \in \mathbb{R}^d$ is given by:

$$f = \sigma(w^\top [r_1 ; r_2]) \qquad (5)$$
$$r = f \circ r_1 + (1 - f) \circ r_2 \qquad (6)$$

where $w \in \mathbb{R}^{2d \times d}$ is a trained parameter and $[\,;\,]$ represents concatenation. This variant allows the model to artificially increase the context window beyond the max_segment_len hyperparameter.
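The sketch below illustrates both splitting strategies and the gated interpolation of Equations (5)-(6). It is illustrative only: the function names and the simple token-based splitting are assumptions, and the actual implementation may differ (for example, by respecting sentence boundaries when forming segments).

```python
import numpy as np

def split_independent(word_pieces, max_segment_len):
    """'independent' variant: consecutive, non-overlapping segments."""
    return [word_pieces[i:i + max_segment_len]
            for i in range(0, len(word_pieces), max_segment_len)]

def split_overlap(word_pieces, T):
    """'overlap' variant: a T-sized segment starting every T/2 tokens."""
    stride = T // 2
    return [word_pieces[i:i + T]
            for i in range(0, len(word_pieces), stride)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interpolate_token(r1, r2, w):
    """Element-wise gated interpolation of two views of the same token.

    r1, r2: shape (d,) representations from the two overlapping segments.
    w:      shape (2d, d) trained parameter.
    """
    f = sigmoid(np.concatenate([r1, r2]) @ w)  # Equation (5): gate in [0, 1]^d
    return f * r1 + (1.0 - f) * r2             # Equation (6)
```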

All layers in both model variants are then fine-tuned following Devlin et al. (2019).
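As a summary of Section 2.2, here is a minimal, hypothetical sketch of the span representation described above (not the authors' code; `bert_outputs` is assumed to be the matrix of contextualized word-piece vectors for one segment):

```python
import numpy as np

def span_representation(bert_outputs, start, end, w_attn):
    """Builds g_x for the span of word pieces [start, end], inclusive.

    bert_outputs: shape (num_word_pieces, d) BERT output for one segment.
    w_attn:       shape (d,) trained attention parameter.
    """
    span = bert_outputs[start:end + 1]   # (span_len, d)
    logits = span @ w_attn               # (span_len,)
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    head = attn @ span                   # attention-weighted sum of span word pieces
    return np.concatenate([bert_outputs[start], bert_outputs[end], head])
```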

3 Experiments

We evaluate our BERT-based models on two benchmarks: the paragraph-level GAP dataset (Webster et al., 2018), and the document-level English OntoNotes 5.0 dataset (Pradhan et al., 2012). OntoNotes examples are considerably longer and typically require multiple segments to read the entire document.

Implementation and Hyperparameters

We extend the original TensorFlow implementations of c2f-coref[3] and BERT.[4] We fine-tune all models on the OntoNotes English data for 20 epochs using a dropout of 0.3, and learning rates of $1 \times 10^{-5}$ and $2 \times 10^{-4}$ with linear decay for the BERT parameters and the task parameters respectively. We found that this made a sizable impact of 2-3% over using the same learning rate for all parameters.
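For illustration, a minimal sketch (not the authors' training code) of the two learning-rate groups with linear decay described above:

```python
def linear_decay(step, total_steps, base_lr):
    """Learning rate decayed linearly from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, 1.0 - step / float(total_steps))

def current_learning_rates(step, total_steps):
    """Separate schedules for BERT parameters and task-specific parameters."""
    return {
        "bert_params": linear_decay(step, total_steps, 1e-5),
        "task_params": linear_decay(step, total_steps, 2e-4),
    }
```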

We trained separate models with max_segment_len of 128, 256, 384, and 512; the models trained on 128 and 384 word pieces performed best for BERT-base and BERT-large respectively. As span representations are memory intensive, we randomly truncate documents to 11 segments for BERT-base and 3 segments for BERT-large during training. Likewise, we use a batch size of 1 document, following Lee et al. (2018). While training the large model requires 32GB GPUs, all models can be tested on 16GB GPUs. We use the cased English variants of BERT in all our experiments.
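The random document truncation can be sketched as follows (illustrative; the choice of a contiguous window is an assumption about the exact procedure):

```python
import random

def truncate_document(segments, max_training_segments):
    """Keep a contiguous window of at most max_training_segments segments.

    max_training_segments is 11 for BERT-base and 3 for BERT-large here.
    """
    if len(segments) <= max_training_segments:
        return segments
    start = random.randint(0, len(segments) - max_training_segments)
    return segments[start:start + max_training_segments]
```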

Baselines We compare the c2f-coref + BERT system with two main baselines: (1) the original ELMo-based c2f-coref system (Lee et al., 2018), and (2) its predecessor, e2e-coref (Lee et al., 2017), which does not use contextualized representations. In addition to being more computationally efficient than e2e-coref, c2f-coref iteratively refines span representations using attention for higher-order reasoning.

3.1 Paragraph Level: GAP

GAP (Webster et al., 2018) is a human-labeled corpus of ambiguous pronoun-name pairs derived from Wikipedia snippets. Examples in the GAP dataset fit within a single BERT segment, thus eliminating the need for cross-segment inference. Following Webster et al. (2018), we trained our BERT-based c2f-coref model on OntoNotes.[5] The predicted clusters were scored against GAP examples according to the official evaluation script. Table 2 shows that BERT improves c2f-coref by 9% and 11.5% for the base and large models respectively. These results are in line with the large gains reported for a variety of semantic tasks by BERT-based models (Devlin et al., 2019).

3.2 Document Level: OntoNotes

OntoNotes (English) is a document-level dataset from the CoNLL-2012 shared task on coreference resolution. It consists of about one million words of newswire, magazine articles, broadcast news, broadcast conversations, web data, conversational speech data, and the New Testament. The main evaluation is the average F1 of three metrics (MUC, B3, and CEAFφ4) on the test set according to the official CoNLL-2012 evaluation scripts. Table 1 shows that BERT-base offers an improvement of 0.9% over the ELMo-based c2f-coref model. Given how hard gains on coreference resolution have been to come by, as evidenced by the table, this is still a considerable improvement. However, the magnitude of the gain is relatively modest considering BERT's arguably better architecture and many more trainable parameters; this is in sharp contrast to how even the base variant of BERT has very substantially improved the state of the art in other tasks. BERT-large, however, improves over c2f-coref by the much larger margin of 3.9%. We also observe that the overlap variant offers no improvement over independent.

Concurrent with our work, Kantor and Globerson (2019), who use higher-order entity-level representations over "frozen" BERT features, also report large gains over c2f-coref. While their feature-based approach is more memory efficient, the fine-tuned model seems to yield better results. Also concurrent, SpanBERT (Joshi et al., 2019), another self-supervised pretraining method, pretrains span representations and achieves state-of-the-art results (Avg. F1 79.6) with the independent variant.

4 Analysis

We performed a qualitative comparison of the ELMo and BERT models (Table 3) on the OntoNotes English development set by manually assigning error categories (e.g., pronouns, mention paraphrasing) to incorrect predicted clusters; each incorrect cluster can belong to multiple categories. Overall, we found 93 errors for BERT-base and 74 for BERT-large from the same 15 documents.

Strengths

We did not find salient qualitative differences between ELMo and BERT-base models, which is consistent with the quantitative results (Table 1) . BERT-large improves over BERT-base in a variety of ways including pronoun resolution and lexical matching (e.g., race track and track).

In particular, the BERT-large variant is better at distinguishing related, but distinct, entities. Table 3 shows several examples where the BERT-base variant merges distinct entities (like Ocean Theater and Marine Life Center) into a single cluster. BERT-large seems to be able to avoid such merging on a more regular basis.

Weaknesses An analysis of errors on the OntoNotes English development set suggests that better modeling of document-level context, conversations, and entity paraphrasing might further improve the state of the art. Longer documents in OntoNotes generally contain larger and more spread-out clusters. We focus on three observations: (a) Table 4 shows that models perform distinctly worse on longer documents; (b) both models are unable to use larger segments more effectively (Table 5) and perform worse when a max_segment_len of 450 or 512 is used; and (c) using overlapping segments to provide additional context does not improve results (Table 1). Recent work (Joshi et al., 2019) suggests that BERT's inability to use longer sequences effectively is likely a by-product of pretraining on short sequences for the vast majority of updates.

Table 4: Performance on the English OntoNotes dev set generally drops as the document length increases. Spread is measured as the average number of tokens between the first and last mentions in a cluster.
Table 5: Performance on the English OntoNotes dev set with varying values of max_segment_len. Neither model is able to effectively exploit larger segments; they perform especially badly when a max_segment_len of 512 is used.

Comparing the preferred segment lengths for the base and large variants of BERT indicates that larger models might better encode longer contexts. However, larger models also exacerbate the memory-intensive nature of span representations, which have driven recent improvements in coreference resolution. These observations suggest that future research in pretraining methods should look at more effectively encoding document-level context using sparse representations (Child et al., 2019).

Modeling pronouns, especially in the context of conversations (Table 3), continues to be difficult for all models, perhaps partly because c2f-coref does very little to model the dialog structure of the document. Lastly, a considerable number of errors suggest that the models are still unable to resolve cases requiring mention paraphrasing. For example, bridging the Royals with Prince Charles and his wife Camilla likely requires pretraining models to encode relations between entities, especially considering that such learning signal is rather sparse in the training set.

5 Related Work

Scoring span or mention pairs has perhaps been one of the most dominant paradigms in coreference resolution. The base coreference model used in this paper (Lee et al., 2018) belongs to this family of models (Ng and Cardie, 2002; Bengtson and Roth, 2008; Denis and Baldridge, 2008; Fernandes et al., 2012; Durrett and Klein, 2013; Wiseman et al., 2015; Clark and Manning, 2016; Lee et al., 2017).

More recently, advances in coreference resolution and other NLP tasks have been driven by unsupervised contextualized representations (Peters et al., 2018; Devlin et al., 2019; McCann et al., 2017; Joshi et al., 2019; Liu et al., 2019b). Of these, BERT (Devlin et al., 2019) notably uses pretraining on passage-level sequences (in conjunction with a bidirectional masked language modeling objective) to more effectively model long-range dependencies. SpanBERT (Joshi et al., 2019) focuses on pretraining span representations, achieving the current state-of-the-art results on OntoNotes with the independent variant.

Footnotes

[1] https://github.com/mandarjoshi90/coref
[2] Each BERT training example consists of around 512 word pieces, while ELMo is trained on single sentences.
[3] http://github.com/kentonl/e2e-coref/
[4] https://github.com/google-research/bert
[5] This is motivated by the fact that GAP, with only 4,000 name-pronoun pairs in its dev set, is not intended for full-scale training.