
Content-Based Citation Recommendation


Abstract

We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names, which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors self-citations, which are less useful in a citation recommendation setup. We release an online portal (http://bit.ly/citeDemo) for citation recommendation based on our method, and a new dataset, OpenCorpus, of 7 million research articles to facilitate future research on this task.

1 Introduction

Due to the rapid growth of the scientific literature, conducting a comprehensive literature review has become challenging, despite major advances in digital libraries and information retrieval systems. Citation recommendation can help improve the quality and efficiency of this process by suggesting published scientific documents as likely citations for a query document, e.g., a paper draft to be submitted to ACL 2018. Existing citation recommendation systems rely on various information about the query document, such as author names and publication venue (Ren et al., 2014; Yu et al., 2012), or a partial list of citations provided by the author (McNee et al., 2002; Jia and Saule, 2017), which may not be available, e.g., during the peer review process or in the early stage of a research project.

Our method uses a neural model to embed all available documents into a vector space by encoding the textual content of each document. We then select the nearest neighbors of a query document as candidates and rerank the candidates using a second model trained to discriminate between observed and unobserved citations. Unlike previous work, we can embed new documents in the same vector space used to identify candidate citations based on their text content, obviating the need to re-train the models to include new published papers. Further, unlike prior work (Yang et al., 2015; Ren et al., 2014) , our model is computationally efficient and scalable during both training and test time.

We assess the feasibility of recommending citations when some metadata for the query document is missing, and find that we are able to outperform the best reported results on two datasets while only using papers' textual content (i.e., title and abstract). While adding metadata helps further improve the performance of our method on standard metrics, we found that it introduces a bias for self-citation which might not be desirable in a citation recommendation system. See §5 for details of our experimental results.

Our main contributions are:

• a content-based method for citation recommendation which remains robust when metadata are missing for query documents,
• large improvements over state-of-the-art results on two citation recommendation datasets despite omitting the metadata,
• a new dataset of seven million research papers, addressing some of the limitations in previous datasets used for citation recommendation, and
• a scalable web-based literature review tool based on this work (https://github.com/allenai/citeomatic).

Figure 1: An overview of our citation recommendation system. In Phase 1 (NNSelect), we project all documents in the corpus (7 in this toy example), in addition to the query document d_q, into a vector space and use its K=4 nearest neighbors d_2, d_6, d_3, and d_4 as candidates. We also add d_7 as a candidate because it was cited in d_3. In Phase 2 (NNRank), we score each pair (d_q, d_2), (d_q, d_6), (d_q, d_3), (d_q, d_4), and (d_q, d_7) separately to rerank the candidates and return the top 3 candidates: d_7, d_6 and d_2.

2 Overview

We formulate citation recommendation as a ranking problem. Given a query document d_q and a large corpus of published documents, the task is to rank documents which should be referenced in d_q higher than other documents. Following previous work on citation recommendation, we use standard metrics (precision, recall, F-measure and mean reciprocal rank) to evaluate our predictions against gold references provided by the authors of query documents. Since the number of published documents in the corpus can be large, it is computationally expensive to score each document as a candidate reference with respect to d_q. Instead, we recommend citations in two phases: (i) a fast, recall-oriented candidate selection phase, and (ii) a feature-rich, precision-oriented reranking phase. Figure 1 provides an overview of the two phases using a toy example.

Phase 1 - Candidate Selection: In this phase, our goal is to identify a set of candidate references for d_q for further analysis without explicitly iterating over all documents in the corpus. 3 Using a trained neural network, we first project all published documents into a vector space such that a document tends to be close to its references. Since the projection of a document is independent of the query document, the entire corpus needs to be embedded only once and can be reused for subsequent queries. Then, we project each query document d_q to the same vector space and identify its nearest neighbors as candidate references. See §3 for more details about candidate selection.

Phase 2 - Reranking: Phase 1 yields a manageable number of candidates, making it feasible to score each candidate d_i by feeding the pair (d_q, d_i) into another neural network trained to discriminate between observed and unobserved citation pairs. The candidate documents are sorted by their estimated probability of being cited in d_q, and the top candidates are returned as recommended citations. See §4 for more details about the reranking model and inference in the candidate selection phase.
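To make the two-phase pipeline concrete, here is a minimal Python sketch of the overall flow; `embed_document`, `nnrank_score`, and the corpus layout are illustrative placeholders rather than the released implementation.

```python
def recommend_citations(query_doc, ann_index, corpus, k=5, top_n=20):
    """Two-phase citation recommendation (illustrative driver, not the released code)."""
    # Phase 1 (NNSelect): embed the query, fetch its k nearest neighbors,
    # and expand the candidate pool with the neighbors' outgoing citations.
    q_emb = embed_document(query_doc)                     # hypothetical embedding model (Section 3)
    neighbor_ids = ann_index.get_nns_by_vector(q_emb, k)  # e.g., an Annoy index over document embeddings
    candidates = set(neighbor_ids)
    for doc_id in neighbor_ids:
        candidates.update(corpus[doc_id]["out_citations"])

    # Phase 2 (NNRank): score each (query, candidate) pair with the discriminative model
    # and return the top-scoring candidates.
    scored = [(doc_id, nnrank_score(query_doc, corpus[doc_id])) for doc_id in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]
```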

3 Phase 1: Candidate Selection (NNSelect)

In this phase, we select a pool of candidate citations for a given query document to be reranked in the next phase. First, we compute a dense embedding of the query document d_q using the document embedding model (described next), and select the K nearest neighbor documents in the vector space as candidates. 4 Following Strohman et al. (2007), we also include the outgoing citations of the K nearest neighbors as candidates. The output of this phase is a list of candidate documents d_i and their corresponding scores NNSelect(d_q, d_i), defined as the cosine similarity between d_q and d_i in the document embedding space.

Document embedding model. We use a supervised neural model to project any document d to a dense embedding based on its textual content. We use a bag-of-words representation of each textual field, e.g., d[title] = {'content-based', 'citation', 'recommendation'}, and compute the feature vector:

f_{d[\text{field}]} = \sum_{t \in d[\text{field}]} w^{\text{mag}}_t \, \frac{w^{\text{dir}}_t}{\lVert w^{\text{dir}}_t \rVert_2}    (1)

where w^dir_t is a dense direction embedding and w^mag_t is a scalar magnitude for word type t. 5 We then normalize the representation of each field and compute a weighted average of the fields to get the document embedding, e_d. In our experiments, we use the title and abstract fields of a document d:

e_d = \lambda_{\text{title}} \frac{f_{d[\text{title}]}}{\lVert f_{d[\text{title}]} \rVert_2} + \lambda_{\text{abstract}} \frac{f_{d[\text{abstract}]}}{\lVert f_{d[\text{abstract}]} \rVert_2},

where λ_title and λ_abstract are scalar model parameters.
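The following is a minimal NumPy sketch of this embedding, assuming the learned direction embeddings and magnitudes are available as dictionaries (`w_dir`, `w_mag`); all names are illustrative.

```python
import numpy as np

def field_vector(tokens, w_dir, w_mag):
    """Eq. 1: sum of magnitude-scaled, L2-normalized direction embeddings over the bag of word types."""
    types = [t for t in set(tokens) if t in w_dir]
    return sum(w_mag[t] * w_dir[t] / np.linalg.norm(w_dir[t]) for t in types)

def document_embedding(title_tokens, abstract_tokens, w_dir, w_mag, lam_title, lam_abstract):
    """Weighted combination of the L2-normalized title and abstract field vectors (e_d)."""
    f_title = field_vector(title_tokens, w_dir, w_mag)
    f_abstract = field_vector(abstract_tokens, w_dir, w_mag)
    return (lam_title * f_title / np.linalg.norm(f_title)
            + lam_abstract * f_abstract / np.linalg.norm(f_abstract))
```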

Training. We learn the parameters of the document embedding model (i.e., λ_*, w^mag_*, w^dir_*) using a training set T of triplets ⟨d_q, d^+, d^-⟩, where d_q is a query document, d^+ is a document cited in d_q, and d^- is a document not cited in d_q.

The model is trained to predict a high cosine similarity for the pair (d_q, d^+) and a low cosine similarity for the pair (d_q, d^-) using the per-instance triplet loss:

\text{loss} = \max\left(\alpha + s(d_q, d^-) - s(d_q, d^+),\ 0\right),    (2)

where s(d_i, d_j) is defined as the cosine similarity between the document embeddings, cos-sim(e_{d_i}, e_{d_j}). We tune the margin α as a hyperparameter of the model. As detailed in Appendices B and C, we use three types of negative examples d^-: random negatives (documents sampled at random from the corpus), citation-of-citation negatives, and nearest neighbor negatives (documents close to d_q in the embedding space that d_q does not cite).
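A minimal sketch of the per-instance triplet loss in Eq. 2, computed on precomputed document embeddings; the margin value is whatever α has been tuned to.

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(e_q, e_pos, e_neg, margin):
    """Eq. 2: hinge on the similarity gap between an observed and an unobserved citation."""
    return max(margin + cos_sim(e_q, e_neg) - cos_sim(e_q, e_pos), 0.0)
```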

Figure 2: NNRank architecture. For each of the textual and categorical fields, we compute the cosine similarity between the embedding for d_q and the corresponding embedding for d_i. Then, we concatenate the cosine similarity scores, the numeric features, and the summed weights of the intersection words, followed by two dense layers with ELU non-linearities. The output layer is a dense layer with a sigmoid non-linearity, which estimates the probability that d_q cites d_i.

4 Phase 2: Reranking Candidates (NNRank)

In this phase, we train a discriminative model to estimate the probability that d_q cites a candidate d_i. For each pair (d_q, d_i), we compute feature vectors f_{d[field]} as defined in Eq. 1 for the following fields: title, abstract, authors, venue and keyphrases (if available). For the title and abstract, we identify the subset of word types which appear in both documents (intersection) and compute the sum of their scalar weights as an additional feature, e.g., Σ_{t∈∩_title} w^∩_t. We also use the log of the number of times the candidate document d_i has been cited in the corpus, i.e., log(d_i[in-citations]). Finally, we use the cosine similarity between d_q and d_i in the embedding space, i.e., cos-sim(e_{d_q}, e_{d_i}).
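The intersection and numeric features can be computed as in the following sketch; `w_intersect` is a hypothetical mapping from word type to its learned scalar weight w^∩_t, and the add-one inside the log is our own guard against zero citation counts.

```python
import math

def intersection_feature(query_tokens, cand_tokens, w_intersect):
    """Sum of learned scalar weights over word types shared by the two documents."""
    shared = set(query_tokens) & set(cand_tokens)
    return sum(w_intersect.get(t, 0.0) for t in shared)

def citation_count_feature(in_citations):
    """Log of the candidate's incoming citation count (add 1 so the log is defined at zero)."""
    return math.log(1 + in_citations)
```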

Model architecture. We illustrate the NNRank model architecture in Figure 2. The output layer is defined as:

s(d_q, d_i) = \text{FeedForward}(h),    (3)

h = \left[\, g_{\text{title}};\ g_{\text{abstract}};\ g_{\text{authors}};\ g_{\text{venue}};\ g_{\text{keyphrases}};\ \text{cos-sim}(e_{d_q}, e_{d_i});\ \textstyle\sum_{t\in\cap_{\text{title}}} w^{\cap}_t;\ \textstyle\sum_{t\in\cap_{\text{abstract}}} w^{\cap}_t;\ d_i[\text{in-citations}] \,\right],

g_{\text{field}} = \text{cos-sim}(f_{d_q[\text{field}]}, f_{d_i[\text{field}]}),

where 'FeedForward' is a three layer feed-forward neural network with two exponential linear unit layers (Clevert et al., 2015) and one sigmoid layer.
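A minimal Keras sketch of the 'FeedForward' component (two ELU layers followed by a sigmoid output); the input dimension and hidden sizes are placeholders, not the tuned values from Appendix A.

```python
import tensorflow as tf
from tensorflow.keras import layers

# h concatenates the per-field cosine similarities, intersection-weight sums, and numeric features.
h = layers.Input(shape=(9,), name="pair_features")   # 9 features as listed in Eq. 3; the dimension may differ
x = layers.Dense(64, activation="elu")(h)            # hidden sizes here are placeholders
x = layers.Dense(64, activation="elu")(x)
p_cite = layers.Dense(1, activation="sigmoid", name="p_cite")(x)
feed_forward = tf.keras.Model(inputs=h, outputs=p_cite)
```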

Training. The parameters of the NNRank model are w^mag_*, w^dir_*, w^∩_* and the parameters of the three dense layers in 'FeedForward'. We reuse the triplet loss in Eq. 2 to learn these parameters, but redefine the similarity function s(d_i, d_j) as the sigmoid output described in Eq. 3.

At test time, we use this model to recommend the candidates d_i with the highest s(d_q, d_i) scores.

5 Experiments

In this section, we describe experimental results of our citation recommendation method and compare it to previous work.

Datasets. We use the DBLP and PubMed datasets (Ren et al., 2014) to compare with previous work on citation recommendation. The DBLP dataset contains over 50K scientific articles in the computer science domain, with an average of 5 citations per article. The PubMed dataset contains over 45K scientific articles in the medical domain, with an average of 17 citations per article. In both datasets, a document is accompanied by a title, an abstract, a venue, authors, citations and keyphrases. We replicate the experimental setup of Ren et al. (2014) by excluding papers with fewer than 10 citations and using the standard train, dev and test splits.

We also introduce OpenCorpus, 7 a new dataset of 7 million scientific articles primarily drawn from the computer science and neuroscience domains. Due to licensing constraints, documents in the corpus do not include the full text of the scientific articles, but include the title, abstract, year, author, venue, keyphrases and citation information. The mutually exclusive training, development, and test splits were selected such that no document in the development or test set has a publication year less than that of any document in the training set. Papers with zero citations were removed from the development and test sets. We describe the key characteristics of OpenCorpus in Table 1.

Baselines. We compare our method to two baseline methods for recommending citations: ClusCite and BM25. ClusCite (Ren et al., 2014) clusters nodes in a heterogeneous graph of terms, authors and venues in order to find related documents which should be cited. We use the ClusCite results as reported in Ren et al. (2014), which compared it to several other citation recommendation methods and found that it obtains state of the art results on the PubMed and DBLP datasets. The BM25 results are based on our implementation of the popular ranking function Okapi BM25 used in many information retrieval systems. See Appendix §D for details of our BM25 implementation.

Table 1: Characteristics of the OpenCorpus.

Evaluation. We use Mean Reciprocal Rank (MRR) and F1@20 to report the main results in this section. In Appendix §F, we also report additional metrics (e.g., precision and recall at 20) which have been used in previous work. We compute F1@20 as the harmonic mean of the corpus-level precision and recall at 20 (P@20 and R@20).

Following Ren et al. (2014), precision and recall at 20 are first computed for each query document and then averaged over the query documents in the test set to compute the corpus-level P@20 and R@20.
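A sketch of these metrics under the stated averaging scheme; `gold` is the set of cited document ids and `ranked` is the ranked prediction list for one query.

```python
def precision_recall_at_k(ranked, gold, k=20):
    """P@k and R@k for a single query document."""
    top_k = ranked[:k]
    hits = sum(1 for doc_id in top_k if doc_id in gold)
    return hits / k, hits / len(gold)

def mrr(ranked, gold):
    """Reciprocal rank of the first correctly recommended citation."""
    for rank, doc_id in enumerate(ranked, start=1):
        if doc_id in gold:
            return 1.0 / rank
    return 0.0

def corpus_f1_at_20(all_ranked, all_gold):
    """Average P@20 and R@20 over query documents, then take the harmonic mean."""
    pairs = [precision_recall_at_k(r, g, 20) for r, g in zip(all_ranked, all_gold)]
    p = sum(p_ for p_, _ in pairs) / len(pairs)
    r = sum(r_ for _, r_ in pairs) / len(pairs)
    return 2 * p * r / (p + r) if (p + r) else 0.0
```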

Configurations. To find candidates in NNSelect, we use the approximate nearest neighbor search algorithm Annoy 8 , which builds a binary-tree structure that enables searching for nearest neighbors in O(log n) time. To build this tree, points in a high-dimensional space are split by choosing random hyperplanes. We use 100 trees in our approximate nearest neighbors index, and retrieve documents using the cosine distance metric.
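A minimal example of building and querying such an index with the Annoy library; the embedding dimension and the source of the document embeddings are placeholders.

```python
from annoy import AnnoyIndex

dim = 75  # dimension of the document embeddings (placeholder)
index = AnnoyIndex(dim, "angular")  # angular distance is a monotone transform of cosine distance

for doc_id, emb in enumerate(document_embeddings):  # document_embeddings: list of vectors (assumed)
    index.add_item(doc_id, emb)

index.build(100)  # 100 random-projection trees, as in our configuration
neighbor_ids = index.get_nns_by_vector(query_embedding, 20)  # 20 approximate nearest neighbors
```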

We use the hyperopt library 9 to optimize various hyperparameters of our method such as size of hidden layers, regularization strength and learning rate. To ensure reproducibility, we provide a detailed description of the parameters used in both NNSelect and NNRank models, our hyperparameter optimization method and parameter values chosen in Appendix §A.

Main results. Table 2 reports the F1@20 and MRR results for the two baselines and three variants of our method. Since the OpenCorpus dataset is much bigger, we were not able to train the ClusCite baseline on it. Totti et al. (2016) have also found it difficult to scale up ClusCite to larger datasets. Where available, we report the mean ± standard deviation based on five trials.

Table 2: F1@20 and MRR results for two baselines and three variants of our method. BM25 results are based on our implementation of this baseline, while ClusCite results are based on the results reported in Ren et al. (2014). “NNSelect” ranks candidates using cosine similarity between the query and candidate documents in the embedding space (phase 1). “NNSelect + NNRank” uses the discriminative reranking model to rerank candidates (phase 2), without encoding any of the metadata features. “+ metadata” encodes the metadata features (i.e., keyphrases, venues and authors), achieving the best results on all datasets. Mean and standard deviations are reported based on five trials.

The first variant, labeled "NNSelect," only uses the candidate selection part of our method (i.e., phase 1) to rank candidates by their cosine similarity to the query document in the embedding space as illustrated in Fig. 1 . Although the document embedding space was designed to efficiently select candidates for further processing in phase 2, recommending citations directly based on the cosine distance in this space outperforms both baselines.

The second variant, labeled "NNSelect + NNRank," uses the discriminative model (i.e., phase 2) to rerank candidates selected by NNSelect, without encoding metadata (venues, authors, keyphrases). Both the first and second variants show that improved modeling of paper text can significantly outperform previous methods for citation recommendation, without using metadata.

The third variant, labeled "NNSelect + NNRank + metadata," further encodes the metadata features in the reranking model, and gives the best overall results. On both the DBLP and PubMed datasets, we obtain relative improvements of over 20% (for F1@20) and 25% (for MRR) compared to the best reported results of ClusCite.

In the rest of this section, we describe controlled experiments aimed at analyzing different aspects of our proposed method.

Choice of negative samples. As discussed in §3, we use different types of negative samples to train our models. We experimented with using only a subset of the types, while controlling for the total number of negative samples used, and found that using negative nearest neighbors while training the models is particularly important for the method to work. As illustrated in Table 3, on the PubMed dataset, adding negative nearest neighbors while training the models improves the F1@20 score from 0.306 to 0.329, and improves the MRR score from 0.705 to 0.771. Intuitively, using nearest neighbor negative examples focuses training on the harder cases on which the model is more likely to make mistakes.

Valuable features. We experimented with different subsets of the optional features used in NNRank in order to evaluate the contribution of various features. We found the intersection features, NNSelect scores, and the number of incoming citations to be the most valuable features. As illustrated in Table 3, the intersection features improve the F1@20 score from 0.296 to 0.329, and the MRR score from 0.653 to 0.771, on the PubMed dataset. The numerical features (NNSelect score and incoming citations) improve the F1@20 score from 0.314 to 0.329, and the MRR score from 0.735 to 0.771. This shows that, in some applications, feeding engineered features to neural networks can be an effective strategy to improve their performance.

                         F1@20    ∆        MRR      ∆
Full model               0.329    -        0.771    -
without intersection     0.296    0.033    0.653    0.118
without -ve NNs          0.306    0.016    0.705    0.066
without numerical        0.314    0.008    0.735    0.036

Table 3: Comparison of PubMed results of the full model with model without (i) intersection features, (ii) negative nearest neighbors in training samples, and (iii) numerical features.

Encoding textual features. We also experimented with using recurrent and convolutional neural networks to encode the textual fields of query and candidate documents, instead of using a weighted sum as described in Eq. 1. We found that recurrent and convolutional encoders are much slower, and did not observe a significant improvement in the overall performance as measured by the F1@20 and MRR metrics. This result is consistent with previous studies on other tasks, e.g., Iyyer et al. (2015).

Number of nearest neighbors. As discussed in §3, the candidate selection step is crucial for the scalability of our method because it reduces the number of computationally expensive pairwise comparisons with the query document at runtime. We did a controlled experiment on the OpenCorpus dataset (largest among the three datasets) to measure the effect of using different numbers of nearest neighbors, and found that both P@20 and R@20 metrics are maximized when NNSelect fetches five nearest neighbors using the approximate nearest neighbors index (and their out-going citations), as illustrated in Table 4 .

Table 4: OpenCorpus results for NNSelect step with varying number of nearest neighbors on 1,000 validation documents.

# of neighbors    R@20     P@20     Time (ms)
1                 0.123    0.079    131
5                 0.142    0.080    144
10                0.138    0.069    200
50                0.081    0.040    362

Self-citation bias. We hypothesized that a model trained with the metadata (e.g., authors) could be biased towards self-citations and other well-cited authors. To verify this hypothesis, we compared two NNRank models -one with metadata, and one without. We measured the mean and max rank of predictions that had at least one author in common with the query document. This experiment was performed with the OpenCorpus dataset.

A lower mean rank for NNRank + Metadata indicates that the model trained with metadata tends to favor documents authored by one of the query document's authors. We verified the prevalence of this bias by varying the number of predictions for each model from 1 to 100. Figure 3 shows that the mean and max rank of the model trained with metadata is always lower than those for the model that does not use metadata.

Figure 3: Mean and Max Rank of predictions with varying number of candidates.

6 Related Work

Citation recommendation systems can be divided into two categories -local and global. A local citation recommendation system takes a few sentences (and an optional placeholder for the candidate citation) as input and recommends citations based on the local context of the input sentences (Huang et al., 2015; He et al., 2010; Tang and Zhang, 2009; Huang et al., 2012; He et al., 2011) . A global citation recommendation system takes the entire scholarly article as input and recommends citations for the paper (McNee et al., 2002; Strohman et al., 2007; Nallapati et al., 2008; Kataria et al., 2010; Ren et al., 2014) . We address the global citation recommendation problem in this paper.

A key difference of our proposed method compared to previous work is that our method is content-based and works well even in the absence of metadata (e.g., authors, venues, key phrases, or a seed list of citations). Many citation recommendation systems crucially rely on a query document's metadata. For example, the collaborative filtering based algorithms of McNee et al. (2002) and Jia and Saule (2017) require seed citations for a query document, while Ren et al. (2014) and Yu et al. (2012) require authors, venues and key terms of the query documents to infer interest groups and to extract features based on paths in a heterogeneous graph. In contrast, our model performs well solely based on the textual content of the query document.

Some previous work (e.g., Ren et al., 2014; Yu et al., 2012) has addressed the citation recommendation problem using graph-based methods. However, training graph-based citation recommendation models has been found to be expensive. For example, the training complexity of the ClusCite algorithm (Ren et al., 2014) is cubic in the number of edges in the graph of authors, venues and terms. This can be prohibitively expensive for datasets as large as OpenCorpus. On the other hand, our model is a neural network trained via batched stochastic gradient descent, which scales very well to large datasets (Bottou, 2010).

Another crucial difference between our approach and some prior work in citation prediction is that we build up a document representation using its constituent words only. Prior algorithms (Huang et al., 2012, 2015; Nallapati et al., 2008; Tanner and Charniak, 2015) learn an explicit representation for each training document separately that is not a deterministic function of the document's words. This makes the model effectively transductive, since a never-before-seen document does not have a ready-made representation. Similarly, the method of Huang et al. (2012) needs a candidate document to have at least one incoming citation to be eligible for citation, which disadvantages newly published documents. Other approaches form document representations using citation relations, which are not available for unfinished or new documents. In contrast, our method does not need to be re-trained as the corpus of potential candidates grows. As long as the new documents are in the same domain as that of the model's training documents, they can simply be added to the corpus and are immediately available as candidates for future queries.

While the citation recommendation task has attracted a lot of research interest, a recent survey paper (Beel et al., 2016) found three main concerns with existing work: (i) limitations in evaluation due to strongly pruned datasets, (ii) lack of details for re-implementation, and (iii) variations in performance across datasets. For example, the average number of citations per document in the DBLP dataset is 5, but Ren et al. (2014) filtered out documents with fewer than 10 citations from the test set. This drastically reduced the size of the test set. We address these concerns by releasing a new large-scale dataset for future citation recommendation systems. In our experiments on the OpenCorpus dataset, we only prune documents with zero outgoing citations. We provide extensive details of our system (see Appendix §A) to facilitate reproducibility, and we release our code (https://github.com/allenai/citeomatic). We also show in experiments that our method consistently outperforms previous systems on multiple datasets.

Finally, recent work has combined graph node representations and text-based document representations using CCA (Gupta and Varma, 2017) . This sort of approach can enhance our text-based document representations if a technique to create graph node representations at test-time is available.

7 Conclusion

In this paper, we present a content-based citation recommendation method which remains robust when metadata is missing for query documents, enabling researchers to do an effective literature search early in their research cycle or during the peer review process, among other scenarios. We show that our method obtains state of the art results on two citation recommendation datasets, even without the use of metadata available to the baseline method. We make our system publicly accessible online. We also introduce a new dataset of seven million scientific articles to facilitate future research on this problem.

A Hyperparameter Settings

Neural networks are complex and have a large number of hyperparameters to tune. This makes it challenging to reproduce experimental results. Here, we provide details of how the hyperparameters of the NNSelect and NNRank models were chosen or otherwise set. We chose a subset of hyperparameters for tuning, and left the rest at manually set default values. Due to limited computational resources, we were only able to perform hyperparameter tuning on the development split of the smaller DBLP and Pubmed datasets.

Table 5: Results of our BM25 implementation on DBLP and Pubmed datasets.

For DBLP and PubMed, we first ran Hyperopt 11 with 75 trials. Each trial was run for five epochs of 500,000 triplets each. The ten top-performing models were then trained for a full 50 epochs, and the best-performing model's hyperparameters were selected. Hyperparameters for NNSelect were optimized for Recall@20, and those for the NNRank model were optimized for F1@20 on the development set. The selected values for DBLP are reported in Table 6 and those for PubMed in Table 7.
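A sketch of such a search with the hyperopt library; the search space and the helper functions below are illustrative, not the exact space or code we used.

```python
import numpy as np
from hyperopt import fmin, tpe, hp, Trials

# Illustrative search space; the actual tuned ranges are listed in Tables 6 and 7.
space = {
    "learning_rate": hp.loguniform("learning_rate", np.log(1e-5), np.log(1e-2)),
    "dense_dim": hp.quniform("dense_dim", 25, 325, 25),
    "l2_lambda": hp.loguniform("l2_lambda", np.log(1e-7), np.log(1e-3)),
    "margin_multiplier": hp.uniform("margin_multiplier", 0.5, 1.5),
}

def objective(params):
    # Train briefly with these hyperparameters and return a loss to minimize,
    # e.g., negative F1@20 (NNRank) or negative Recall@20 (NNSelect) on the dev split.
    return -evaluate_on_dev(train_model(params))  # hypothetical helpers

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=75, trials=trials)
```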

Table 6: DBLP hyperparameter tuning results. Note that the dense dimension when using pretrained vectors is fixed to be 300. A ’-’ indicates that the variable was not tuned.
Table 7: PubMed hyperparameter tuning results. Note that the dense dimension when using pretrained GloVe vectors is fixed to be 300. A ’-’ indicates that the variable was not tuned.

OpenCorpus hyperparameters were set via informal hand-tuning, and the results are in Table 9 . A few miscellaneous parameters (not tuned) that are necessary for reproducibility are in Table 8 .

Table 8: Per-dataset parameters. These were hand-specified. *LazyAdamOptimizer is part of TensorFlow. **Nadam is part of Keras.
Table 9: Hyperparameters used for OpenCorpus

We briefly clarify the meaning of some parameters below:

• Margin Multiplier -The triplet loss has variable margins for the three types of negatives: 0.1γ, 0.2γ, and 0.3γ. We treat γ as a hyperparameter and refer to it as the margin multiplier.

• Use Siamese Embeddings -For the majority of our experiments, we use a Siamese model (Bromley et al., 1993) . That is, the textual embeddings for the query text and abstract share the same weights. However, we had a significantly larger amount of data to train NNRank on OpenCorpus, and found that non-Siamese embeddings are beneficial.

• Use Pretrained -We estimate word embeddings on the titles and abstracts of OpenCorpus using Word2Vec implemented by the gensim Python package 12 .
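A sketch of estimating these embeddings with gensim; apart from the 300-dimensional vector size fixed elsewhere in this appendix, the training parameters and preprocessing are assumptions.

```python
from gensim.models import Word2Vec

# sentences: an iterable of token lists built from OpenCorpus titles and abstracts (assumed preprocessing)
model = Word2Vec(
    sentences=sentences,
    vector_size=300,  # 300-dimensional vectors; on gensim < 4.0 this argument is named `size`
    window=5,
    min_count=5,
    workers=8,
)
model.wv.save_word2vec_format("opencorpus_word2vec.txt")
```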

B Margin Loss Details

When computing the margins for the triplet loss, we use a boosting function for highly cited documents. The full triplet loss function is as follows:

\text{loss} = \max\left(\gamma\,\alpha(d^-) + s(d_q, d^-) + B(d^-) - s(d_q, d^+) - B(d^+),\ 0\right),

where γ is the margin multiplier and α(d^-) varies based on the type of negative document:

• α(d^-) = 0.3 for random negatives
• α(d^-) = 0.2 for nearest neighbor negatives
• α(d^-) = 0.1 for citation-of-citation negatives.

The boosting function is defined as follows:

B(d) = \frac{\sigma\left(d[\text{in-citations}]/100\right)}{50},

where σ is the sigmoid function and d [in-citations] is the number of times document d was cited in the corpus. The boosting function allows the model to slightly prefer candidates that are cited more frequently, and the constants were set without optimization.
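Putting the pieces together, a sketch of this boosted loss as reconstructed above; `neg_type` selects among the three negative types and the similarity scores `s_pos` and `s_neg` are assumed to be precomputed.

```python
import numpy as np

ALPHA = {"random": 0.3, "nearest_neighbor": 0.2, "citation_of_citation": 0.1}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def boost(in_citations):
    # B(d): slight preference for frequently cited documents.
    return sigmoid(in_citations / 100.0) / 50.0

def boosted_triplet_loss(s_pos, s_neg, neg_type, pos_citations, neg_citations, gamma):
    """Hinge loss with type-dependent margin gamma * alpha(d-) and citation-count boosts."""
    margin = gamma * ALPHA[neg_type]
    return max(margin + s_neg + boost(neg_citations) - s_pos - boost(pos_citations), 0.0)
```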

C Nearest Neighbors For Training Details

When obtaining nearest neighbors for negative examples during training, we use a heuristic to find a subset of the fetched nearest neighbors that are sufficiently wrong. That is, these are non-citation samples that look dissimilar in the original text but similar in the embedding space. This procedure is as follows for each training query:

1. Compute the Jaccard similarities between a training query and all of its true citations using the concatenation of title and abstract texts.

2. Compute the bottom fifth percentile Jaccard similarity value, i.e., the value below which only the 5% least textually similar true citations fall. For example, if the Jaccard similarities range from 0.2 to 0.9, the fifth percentile might plausibly be 0.3.

3. Use the Annoy index computed at the end of the previous epoch to fetch nearest neighbors for the query document.

4. Compute the textual Jaccard similarity between all of the nearest neighbors and the query document.

5. Retain nearest neighbors that have a smaller Jaccard similarity than the fifth percentile. Using the previous example, retain the nearest neighbors that have a Jaccard similarity lower than 0.3.

D BM25 Baseline Details

Okapi BM25 is a popular ranking function. We use BM25 as an IR-based baseline for the task of citation recommendation. For the DBLP and PubMed datasets, BM25 performance is provided in Ren et al. (2014). To create a competitive BM25 baseline for OpenCorpus, we first created indexes for the DBLP and PubMed datasets and tuned the query to approximate the performance reported in previous work. We used Whoosh (https://pypi.python.org/pypi/Whoosh/) to create an index. We extract the key terms (using Whoosh's key_terms_from_text method) from the title and abstract of each query document. The key terms from the document are concatenated to form the query string. Table 5 shows that our BM25 implementation is a close approximation to the BM25 implementation of previous work and can be reliably used as a strong IR baseline for OpenCorpus. In Table 2, we report results on all three datasets using our BM25 implementation.
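A sketch of this baseline with Whoosh; the schema, field names, number of key terms, and the `corpus` / `query_title` / `query_abstract` variables are assumptions rather than our exact setup.

```python
import os
from whoosh import scoring
from whoosh.fields import ID, TEXT, Schema
from whoosh.index import create_in
from whoosh.qparser import OrGroup, QueryParser

os.makedirs("bm25_index", exist_ok=True)
schema = Schema(doc_id=ID(stored=True), body=TEXT)
ix = create_in("bm25_index", schema)

writer = ix.writer()
for doc in corpus:  # corpus: iterable of dicts with 'id', 'title', 'abstract' (assumed layout)
    writer.add_document(doc_id=doc["id"], body=doc["title"] + " " + doc["abstract"])
writer.commit()

with ix.searcher(weighting=scoring.BM25F()) as searcher:
    # Extract key terms from the query document's title and abstract, then
    # concatenate them into the query string, as described above.
    key_terms = searcher.key_terms_from_text("body", query_title + " " + query_abstract, numterms=25)
    query_text = " ".join(term for term, _score in key_terms)
    parser = QueryParser("body", ix.schema, group=OrGroup)
    hits = searcher.search(parser.parse(query_text), limit=20)
    recommendations = [hit["doc_id"] for hit in hits]
```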

E Key Phrases for OpenCorpus

In the OpenCorpus dataset, some documents are accompanied by automatically extracted key phrases. Our implementation of automatic key phrase extraction is based on standard key phrase extraction systems, e.g., (Caragea et al., 2014a,b; Lopez and Romary, 2010). We first extract noun phrases using the Stanford CoreNLP package (Manning et al., 2014) as candidate key phrases. Next, we extract corpus-level and document-level features (e.g., term frequency, document frequency, n-gram probability, etc.) for each candidate key phrase. Finally, we rank the candidate key phrases using a ranking model that is trained on author-provided key phrases as gold labels.

F Detailed Results

Table 10 compares NNRank with previous work in detail on the DBLP and PubMed datasets. ClusCite (Ren et al., 2014) clusters nodes in a heterogeneous graph of terms, authors and venues in order to find related documents which should be cited. ClusCite obtains the previous best results on these two datasets. L2-LR (Yu et al., 2012) uses a linear combination of meta-path based linear features to classify candidate citations. We show that NNRank (with and without metadata) consistently outperforms ClusCite and other baselines on all metrics on both datasets.

Table 10: Comparing NNRank with ClusCite. Ren et al. (2014) presented results on several other topic-based, link-based and network-based citation recommendation methods as baselines. For succinctness, we show results for the best system, ClusCite, and two baselines, BM25 and L2-LR.

Footnotes

3 In order to increase the chances that all references are present in the list of candidates, the number of candidates must be significantly larger than the total number of citations of a document, but also significantly smaller than the number of documents in the corpus.
4 We tune K as a hyperparameter of our method.
5 The magnitude-direction representation is based on Salimans and Kingma (2016) and was found to improve results in preliminary experiments, compared to the standard "direction-only" word representation.
6 Since the set of approximate neighbors depends on model parameters, we recompute a map from each query document to its K nearest neighbors before each epoch while training the document embedding model.
7 http://labs.semanticscholar.org/corpus/
8 https://github.com/spotify/annoy
9 https://github.com/hyperopt/hyperopt
11 https://github.com/hyperopt/hyperopt
12 https://radimrehurek.com/gensim/