Benchmarking Hierarchical Script Knowledge

Authors

Abstract

Understanding procedural language requires reasoning about both hierarchical and temporal relations between events. For example, “boiling pasta” is a sub-event of “making a pasta dish”, typically happens before “draining pasta,” and requires the use of omitted tools (e.g. a strainer, a sink, etc.). While people are able to choose when and how to use abstract versus concrete instructions, the NLP community lacks corpora and tasks for evaluating whether our models can do the same. In this paper, we introduce KidsCook, a parallel script corpus, as well as a cloze task which matches video captions with missing procedural details. Experimental results show that state-of-the-art models struggle at this task, which requires inducing functional commonsense knowledge not explicitly stated in text.

1 Introduction

The level of detail used in natural language communication varies: descriptive or instructive text for experts may elide details the reader can seamlessly infer, while text for more novice audiences may be more verbose. A given document typically adheres to a single level of verbosity suited to its presumed audience (Grice, 1975), so learning correspondences between abstract and detailed descriptions of similar concepts from text is a challenging problem.

Commonsense knowledge of how complex events decompose into stereotypical sequences of simpler events is a necessary component of a system that can automatically understand and reason about different types of discourse. Hierarchical correspondences between abstract and detailed representations of concepts and events were an important aspect of the original formulation of scripts for natural language understanding (Schank and Abelson, 1977; DeJong, 1981), but required handwritten data structures encoding world knowledge. However, the automatic induction of such commonsense knowledge from open-domain noisy text corpora remains an open problem (Chambers, 2013; Weber et al., 2018; Zellers et al., 2018).

As a step towards solving this problem we consider textual descriptions of actions in a cooking domain. We introduce a dataset, KIDSCOOK, targeted at exploring the automatic acquisition of correspondences between abstract and concrete descriptions of actions. The dataset consists of higher-level single-sentence imperative descriptions paired with lower-level descriptions with elided details included. Descriptions come from real grounded actions, built on top of the YouCookII video caption dataset. Figure 1 gives an example annotation from the dataset: the phrase "drain the pasta," presented to an annotator with its corresponding video clip, was annotated as corresponding to four constituent steps appropriate as instructions for a child. The constituent steps are "simpler" in the sense that they correspond to more atomic actions, but not necessarily in their linguistic complexity. We identify over 1,500 procedures and tools which KIDSCOOK makes explicit but which are assumed as commonsense world knowledge by YouCookII.

Figure 1: An example KIDSCOOK sequence with multiple types of hierarchy and abstraction: the example contains sequences of complex instructions, given both as sentences and as sequences of simpler instructions. (Cook the pasta, T0: 1. Put a large pot half full of water on the stove. 2. Turn the heat on under the pot and wait for the water to boil hard. 3. Pour the pasta into the boiling water. … Drain the pasta, T1: 1. Put the strainer in the sink. 2. Once the pot with pasta is cool enough, grab it by the handles. 3. Pour the pasta and water into the strainer in the sink. 4. Pick up the strainer and shake it a little bit so more water comes out.)

The KIDSCOOK dataset allows us to learn mappings between abstract and concrete descriptions via sequence-to-sequence prediction. We apply several standard neural sequence-to-sequence models; however, since these models do not expose explicit, interpretable correspondences between abstract and concrete descriptions, we also propose the application of neural transduction models which capture correspondences with latent hard alignment variables. We define a cloze-style evaluation to complement our dataset, in which models must predict the values of held-out tokens targeting knowledge of tool usage, temporal ordering, and kitchen commonsense. We find that our neural transduction models match the predictive power of traditional neural sequence models while providing interpretable alignments between abstract and concrete subsequences, which serves our primary goal of analyzing implicit hierarchical script knowledge.

2 Data & Task

Our approach situates script learning as a case of grounding. For simplicity of exposition, let us assume there are three levels of abstraction to grounding: abstract → concrete → motor control. Most prior work in grounding treats language monolithically (notable exceptions include the hierarchical instructions of Regneri et al. (2013) and Bisk et al. (2016)) and ignores the issue of audience. In practice, this means the task formulation or exposed API may implicitly bias the language to be more concrete. By viewing the task as purely linguistic, we have no API or robot that constrains our language; instead, we define our audience as children. By eliciting child-directed instructions, we collect concrete language capturing otherwise implicit world knowledge that a child would not know. Because annotators assume a smart and capable but uninformed listener, we posit this language corresponds closely to the most "concrete" form in which language naturally occurs.

2.1 Data Collection

We construct a task on Amazon's Mechanical Turk, where workers are asked to explain a video action caption to a child. Every instruction is paired with the original YouTube video and YouCookII caption so the annotator could see how the action was performed, rather than hallucinating additional details. All captions received three simplifications. The instructions ask workers to focus on missing information and allow them up to five steps. Finally, we explicitly asked annotators to simplify complex actions (e.g. dice) that can be defined by a series of more basic actions (e.g. cut).

Our KIDSCOOK corpus statistics are shown in Table 1. In total we collected over 10K action sequences (∼400K tokens). The average KIDSCOOK description is approximately 4x longer than the corresponding YouCook caption. Most importantly, 1,536 lemmas and 2,316 lexical types from KIDSCOOK's vocabulary do not appear in any of the original captions. This indicates that there are over 1,500 new concepts, tools, and procedures that were assumed by YouCookII but are now explicit in KIDSCOOK.
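As a rough illustration of how such vocabulary gaps can be measured, the sketch below (hypothetical variable names; spaCy is assumed only as a convenient lemmatizer) collects the lemmas and lexical types that appear in the concrete KIDSCOOK descriptions but in none of the original captions.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed available; any lemmatizer would do

def lemmas_and_types(sentences):
    """Return the sets of lemmas and surface word types in a list of sentences."""
    lemmas, types = set(), set()
    for sent in sentences:
        for tok in nlp(sent.lower()):
            if tok.is_alpha:
                lemmas.add(tok.lemma_)
                types.add(tok.text)
    return lemmas, types

def new_vocabulary(youcook_captions, kidscook_steps):
    """Lemmas and types present in KIDSCOOK but absent from the original captions."""
    cap_lemmas, cap_types = lemmas_and_types(youcook_captions)
    kid_lemmas, kid_types = lemmas_and_types(kidscook_steps)
    return kid_lemmas - cap_lemmas, kid_types - cap_types
```

The sizes of the returned sets correspond to the lemma and lexical-type counts reported above.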

Table 1: KIDSCOOK corpus statistics

2.2 Cloze Task

To investigate what new knowledge is being introduced and whether a model has captured it, we construct a cloze-style slot-filling task (Chambers, 2017; Hermann et al., 2015) . We drop key content words from the concrete realization of an abstract instruction and ask the model to predict them. Several examples from the validation set are shown in Table 2 . Correctly predicting the missing words requires knowledge of the manner of executing a task and the tools required.

Table 2: Example abstract/concrete pairs with blanks (red) where predictions and surprisal are computed.

ABS: chop garlic into small pieces. CON: put garlic on cutting board. press on back of knife with hand, cutting into small pieces.

ABS: add some parmesan cheese into the bowl and mix them well. CON: use a grater to grate some parmesan cheese into the bowl. use a wire whisk to stir the cheese in.

ABS: add the tofu to the wok. CON: drain the water from the tofu using a strainer. add the tofu into the pan. use a spoon to stir the tofu in the mixture.

To choose candidate words to drop, we only allow words that occur primarily in the concrete instructions. Additionally, we do not drop stop words, numbers, or words occurring fewer than five times. We do, however, drop units of measure (cup, minute, etc.). This ensures we create blanks whose answers are previously omitted concrete details. Relatedly, under this filter the answer to a blank is very rarely an ingredient, as our goal is not to memorize recipes, but to infer the tool knowledge necessary to execute them. In total, we whitelist ∼1,000 words that can be dropped to create blanks. We prefer longer blanks when available to give preference to compound nouns (e.g. wire whisk). Finally, we do not drop any words from the concrete sentence if they occur in the abstract description. This restriction eliminates any benefits that might have been achieved via models with copy mechanisms. Examples that do not meet our criteria are removed from the corpus.
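As a rough sketch of this blank-creation procedure (all names are hypothetical and the details are simplified), whitelisted content words in the concrete description are replaced with blanks, adjacent droppable tokens are merged into longer blanks, and any word that also appears in the abstract description is kept:

```python
def make_blanks(abstract, concrete, whitelist, blank_token="___"):
    """Replace whitelisted content words in the concrete text with blanks.

    abstract, concrete: token lists; whitelist: set of droppable words.
    Adjacent droppable tokens are merged into a single multi-word blank.
    Returns the blanked token list and the list of gold answers.
    """
    abstract_words = set(abstract)
    blanked, answers = [], []
    i = 0
    while i < len(concrete):
        # extend a span of consecutive droppable tokens (prefers compound nouns)
        j = i
        while (j < len(concrete)
               and concrete[j] in whitelist
               and concrete[j] not in abstract_words):
            j += 1
        if j > i:
            answers.append(concrete[i:j])
            blanked.append(blank_token)
            i = j
        else:
            blanked.append(concrete[i])
            i += 1
    return blanked, answers

# Example (hypothetical tokens and whitelist):
abs_toks = "chop garlic into small pieces".split()
con_toks = "put garlic on cutting board . press on back of knife with hand".split()
print(make_blanks(abs_toks, con_toks, whitelist={"cutting", "board", "knife"}))
```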

3 Models

We investigate the utility of sequence-to-sequence models with attention (Bahdanau et al., 2015) to generate concrete realizations of abstract task descriptions. We hypothesize that models that learn explicit alignments are particularly amenable to interpretable analysis on the task. Therefore, in addition to using the global attention model of Luong et al. (2015), we adapt the transducer model proposed by Yu et al. (2016), which uses learned latent discrete variables to model phrase-to-phrase alignments. In contrast to many standard neural models, this approach enables us to incorporate prior knowledge about the alignment structure, and to extract interpretable alignments between task phrases. Closely related architectures have been proposed for segmental sequence modeling (Wang et al., 2017) and phrase-based neural machine translation (Huang et al., 2018).

We train the transducer models using Viterbi EM (after doing marginal likelihood training for the initial iterations), as we found it gave higher predictive accuracy than marginal likelihood training only. Following Yu et al. (2016) we experiment with both a fixed alignment transition probability model and a transition model with a neural parameterization. Cloze task prediction is performed greedily; during preliminary experiments, beam search did not improve performance. At each slot the Viterbi alignment of the prefix of the sequence up to that slot is computed. See Appendix A for model details; all code and data are available at https://github.com/janmbuys/ScriptTransduction. We also evaluate the performance of a language modelling baseline and a seq2seq model without attention (Sutskever et al., 2014), to compare the effect of not modeling alignment at all.

We expect all the models to implicitly capture aspects of world knowledge. However, the discrete latent variable models provide Viterbi alignments over the training data, from which we can compile a look-up table with the extracted knowledge. In neural attention models, this knowledge is only weakly recoverable: extracting information requires hand tuning attention thresholds and there is no direct way to extract contiguous alignments for multi-word phrases.

4.1 Evaluation Metrics

During generation, we provide the model with the number of words in each blank to be predicted. We consider two setups for evaluating examples with multiple blanks, both assuming that predictions are made left-to-right: Oracle, where the gold value of each blank is fed into the model to condition on for future predictions, and Greedy, where the model prediction is used for future predictions. We compute the proportion of exact word matches over each blank and the precision of the top k = 5 predictions for both setups. Additionally, we compute the average surprisal of the gold prediction (conditioning on oracle predictions). The surprisal of a word (Attneave, 1959; Hale, 2001) is its negative log probability under the model: $-\log P(w_i \mid w_{1:i-1})$. The higher the probability of the ground truth, the lower the model's "surprise" at seeing it in that context.
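A minimal sketch of how the surprisal of a blank could be computed under any left-to-right model; the `log_prob_next` hook is hypothetical and stands in for the model's next-word log probability:

```python
def blank_surprisal(context_tokens, gold_tokens, log_prob_next):
    """Average surprisal (negative log probability) of the gold tokens for one blank,
    conditioning on oracle left context as each gold token is revealed."""
    total = 0.0
    history = list(context_tokens)
    for gold in gold_tokens:
        # log_prob_next(history, word) -> log p(word | history); model-specific hook
        total += -log_prob_next(history, gold)
        history.append(gold)  # oracle setup: feed the gold token back in
    return total / len(gold_tokens)
```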

Finally, as a quantitative proxy for interpretability, we report the length of the transducer models' average Viterbi alignment span: our goal is a model which balances low average alignment lengths and high matching or ranking scores.

4.2 Cloze Task Results

We report results on the prediction task in Table 4. First, we consider models trained only on our dataset: all the models that incorporate a notion of alignment do substantially better than those that do not. We see that our transducer model with fixed alignment transition probabilities performs best in terms of predictive accuracy (exact match and top-5 precision), while the seq2seq model with attention is the next best in most comparisons. The model with parameterized transitions has the lowest surprisal though, as it is more confident about the alignment predictions it is making. Using average alignment length to quantify whether the phrase alignments exhibit desirable structure, we see that the alignments found by the unparameterized transition model (average length 6.18) are significantly shorter than those of the parameterized model (average length 16.61). Investigation shows that the parameterized model mostly learns degenerate alignments, aligning most of the concrete sequence to either the start or end of the abstract sentence. In contrast, qualitative analysis of the unparameterized transition model shows that its alignments capture desirable correspondences (see Figure 2). Therefore, among our proposed models (trained on in-domain data only), the transducer with unparameterized transitions satisfies our desiderata of good predictive power for word generation and interpretable alignments.

Figure 2: Example Viterbi alignments

Table 3: Example Viterbi alignments. For concrete to abstract, we match any phrase containing the word(s).

Table 4: Results on the Cloze prediction task (Match = Exact Match, Top-5 = Precision of Top-5 predictions, Surp = Surprisal). Transducer results are reported for models with unparameterized and parameterized (+ParamTran) alignment transition models. The best and second best results are emphasized.

Given the recent success of massively pre-trained language models (Peters et al., 2018), we are interested in whether these approaches transfer to our cloze task. We evaluate the OpenAI GPT transformer language model (Radford et al., 2018) with and without fine-tuning. Without fine-tuning, this model does slightly worse than our best domain-specific model. With fine-tuning, its accuracy is substantially higher, but it still suffers from the same fundamental limitations as our other models (see Table 5). The transformer (Vaswani et al., 2017) attention is multi-headed and multi-layered, which prohibits direct interpretability.

Table 5: Output of OpenAI GPT when forced to greedily decode answers to blanks in the validation set.

Abs: shape each dough ball into a circle and add tomato sauce.
Pred: flatten out your dough into a flat circle using your hands. take a knife to add tomato sauce to the center of your dough. use the back side of the knife to cut the sauce out. make sure you keep the sauce about an inch from the edges.
Gold: flatten out your dough into a flat circle using your hands. take a spoon to add tomato sauce to the center of your dough. use the back side of the spoon to spread the sauce out. make sure you keep the sauce about an inch from the edges.

Abs: place the kale cucumber bell peppers carrots and radishes on the wrapper.
Pred: put the cut on a cutting . put a cutting amount of kale on the cutting . add a cut amount of cucumber ...
Gold: put the wrap on a plate . put a small amount of kale on the wrap . add a small amount of cucumber ...

Abs: wrap the pizza.
Pred: find a large piece to put the pizza om . place the pizza in the center for it not to stick around . grab the plastic wrap and start wrapping the entire thing and pizza . wrap all around until completely covered on all corners . put in freezer on a cold water and freeze overnight
Gold: find a hard surface to put the pizza om . place the pizza in the center for it not to slide around . grab the plastic wrap and start wrapping the hard surface and pizza . wrap all around until fully covered on all corners . put in freezer on a flat surface and freeze overnight

5 Qualitative Analysis

We visualize alignments of our transduction model over two partial sequences in Fig. 2. This shows which hidden vector of the abstract sentence aligned to every region of the concrete sequence. Specifically, we see how tools like the big bowl, spoon, and tongs are introduced to facilitate the actions. There are also implications, e.g. that high indicates grill. For further analysis we extract alignments over the training corpus, linking each decoded phrase with the word from the encoding it used during generation. We then aggregate these tuples into a table which we can filter (based on our whitelist) and sort (with PMI). This process is imprecise as it discards the context in which the alignment occurs, but it nonetheless extracts many of the phenomena we would hope to see (Table 3). The left-hand side of the table shows words from the abstract YouCook annotations and corresponding phrases in the concrete annotation. For the right-hand side we searched for common concrete terms that may be preceded or followed by other terms, and present the abstract terms they were most often generated by.
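The aggregation and filtering described above can be sketched as follows (hypothetical names; the input is assumed to be (abstract word, concrete phrase) tuples read off the Viterbi alignments):

```python
import math
from collections import Counter

def pmi_table(alignment_pairs, whitelist, min_count=5):
    """Aggregate (abstract_word, concrete_phrase) tuples into a PMI-sorted table.

    alignment_pairs: iterable of (abstract_word, concrete_phrase) from Viterbi alignments.
    whitelist: concrete words worth keeping (the same whitelist used for the cloze blanks).
    """
    pairs = list(alignment_pairs)
    pair_counts = Counter(pairs)
    abs_counts = Counter(a for a, _ in pairs)
    con_counts = Counter(c for _, c in pairs)
    total = sum(pair_counts.values())

    rows = []
    for (a, c), n in pair_counts.items():
        if n < min_count or not any(w in whitelist for w in c.split()):
            continue
        # PMI = log p(a, c) / (p(a) p(c)) = log (n * total) / (count(a) * count(c))
        pmi = math.log((n * total) / (abs_counts[a] * con_counts[c]))
        rows.append((a, c, n, pmi))
    return sorted(rows, key=lambda r: -r[3])
```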

Finally, Table 5 shows three randomly chosen examples (from the validation set) of greedy decodings for slot filling with GPT fine-tuned on our dataset. These examples demonstrate that, first, there are cases where GPT is successful or produces a semantically valid answer (e.g. fully vs completely). Second, as is common with greedy decoding, the model can get stuck in a loop (e.g. cut, cutting, cutting, ...). Finally, note there are nonsensical cases where the model appears to have discarded the abstract context (e.g. knife to add tomato sauce or freezer on a cold water).

6 Related Work

Many script learning systems are based on event co-occurrence and language modeling in large text corpora, and can infer implicit events without creating explicit situation-specific frame structures (Chambers and Jurafsky, 2008; Rudinger et al., 2015; Pichotta and Mooney, 2016) . Other systems induce situation-specific frames from text (Cheung et al., 2013; Balasubramanian et al., 2013) . However, these methods do not explicitly target the commonsense correspondence between differing levels of detail of complex events.

Most relevant to this paper is the pioneering work of Regneri et al. (2013), as extended by Senina et al. (2014) and subsequent work.

These papers present the TACoS corpus, consisting of natural language descriptions of activities in videos paired with low-level activity labels. Senina et al. (2014) collect an additional level of multi-sentence annotations on the corpus, allowing for video caption generation at multiple levels of detail. A similar corpus of natural descriptions of composite actions, useful for activity recognition in video, has also been described. These corpora differ in a number of important ways from KIDSCOOK; in particular, the language has somewhat limited complexity and "naturalness" when describing complex scenarios, a phenomenon also observed in the robotics literature (Scalise et al., 2018). Our data collection process avoids more formulaic language by eliciting "child-directed" descriptions.

7 Conclusion

We introduce a new hierarchical script learning dataset and cloze task in which models must learn commonsense world knowledge about tools, procedures and even basic physics to perform well. Our aim is to begin a conversation about abstraction in language, how it is modeled, and what is implicitly hidden. Our abstract and concrete instructions are grounded in the same videos yet differ dramatically due to their assumed audiences. We show that a neural transduction model produces interpretable alignments for analyzing these otherwise latent correlations and phenomena.

A Transducer Model

We briefly describe the model of Yu, Buys, and Blunsom (2016) and our minor modifications thereto.

A.1 Alignment With Latent Variables

We model the conditional probability of a concrete sequence $y$ given an abstract sequence $x$ through a latent alignment variable $a$ between $x$ and $y$, which is a sequence of variables $a_j$, with $a_j = i$ signifying that $y_j$ is aligned to $x_i$. The marginal probability of $y$ given $x$ is

$$p(y \mid x) = \sum_{a} p(y, a \mid x). \tag{1}$$

In the following, we use $m$ to denote the length of $x$ and $n$ to denote the length of $y$. The model formulation restricts alignments to be monotonic, i.e. $a_{j+1} \geq a_j$ for all $j$.

The model factorizes over timesteps into alignment and word prediction probabilities, such that the word prediction at each timestep is informed by its alignment:

$$p(y, a \mid x) = \prod_{j} p(a_j \mid a_{j-1}, x_{1:a_{j-1}}, y_{1:j-1}) \times p(y_j \mid a_j, x_{1:a_j}, y_{1:j-1}) \tag{2}$$

The abstract and concrete sequences are both encoded with LSTM Recurrent Neural Networks (Hochreiter and Schmidhuber, 1997) . In contrast to standard attention-based models, the aligned encoder representation is not fed into the decoder RNN state, but only used to make next word predictions. Due to the small size of the training data, words in both sequences are embedded using fixed GloVe embeddings (Pennington et al., 2014) . The word emission probability is then defined as

$$p(y_j \mid a_j, x_{1:a_j}, y_{1:j-1}) = \mathrm{softmax}(\mathrm{MLP}(e_{a_j}, d_j)) \tag{3}$$

with $e$ the encoder hidden states and $d$ the decoder hidden states.

The alignment probability factorizes into shift and emit probabilities, where a shift action increments the alignment to the next word in the input sequence, and an emit action generates the next output word. We refer to these as transition probabilities. This formulation enables us to restrict the hard alignment to be monotonic.

We consider two parameterizations of this distribution. In the first, the probabilities are parameterized by the neural network, using the encoder and decoder hidden state in a similar manner to how the word emission probability was computed. The alignment probability at a given timestep is therefore parameterized as

$$p(a_j \mid a_{j-1}, x_{1:a_{j-1}}, y_{1:j-1}) = p(\mathrm{emit} \mid a_j, x_{1:a_j}, y_{1:j-1}) \times \prod_{i=a_{j-1}}^{a_j - 1} p(\mathrm{shift} \mid i, x_{1:i}, y_{1:j-1}), \tag{4}$$

where

$$p(\mathrm{shift} \mid i, x_{1:i}, y_{1:j-1}) = \sigma(\mathrm{MLP}(e_i, d_j)), \tag{5}$$

$$p(\mathrm{emit} \mid i, x_{1:i}, y_{1:j-1}) = 1 - p(\mathrm{shift} \mid i, x_{1:i}, y_{1:j-1}). \tag{6}$$
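A minimal PyTorch-style sketch of these two distributions (layer sizes and names are illustrative assumptions, not the authors' exact implementation): the word emission of Equation (3) and the shift/emit probabilities of Equations (5)–(6), both computed from an aligned encoder state and the current decoder state.

```python
import torch
import torch.nn as nn

class EmissionAndTransition(nn.Module):
    """Word emission p(y_j | a_j, ...) and shift probability p(shift | i, ...),
    both computed from an aligned encoder state e_i and the decoder state d_j."""

    def __init__(self, enc_dim, dec_dim, hidden_dim, vocab_size):
        super().__init__()
        self.emit_mlp = nn.Sequential(
            nn.Linear(enc_dim + dec_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, vocab_size))
        self.shift_mlp = nn.Sequential(
            nn.Linear(enc_dim + dec_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, 1))

    def word_log_probs(self, e_i, d_j):
        # Equation (3): softmax over the vocabulary, returned in log space
        return torch.log_softmax(self.emit_mlp(torch.cat([e_i, d_j], dim=-1)), dim=-1)

    def shift_prob(self, e_i, d_j):
        # Equation (5): sigmoid gives p(shift); p(emit) = 1 - p(shift), Equation (6)
        return torch.sigmoid(self.shift_mlp(torch.cat([e_i, d_j], dim=-1))).squeeze(-1)
```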

We also consider using the simpler, fixed alignment parameterization in Yu, Buys, and Blunsom (2016) , where the transition probability is conditioned only on sequence length, not on x or y, and can therefore be estimated using the ratio between input and output sentence lengths. The alignment probabilities are not updated during training, and consequently the posterior distribution over the alignments is biased towards this prior, favoring alignments close to the diagonal.

The parameterized alignment model contains as special cases two degenerate solutions: (1) an unconditional language model and (2) a seq2seq model. These occur if the model performs all emits before shifting or all shifts before emitting, respectively. To prevent the creation of a language model we force the last output word to be aligned to the last word in the abstract sequence, similar to Yu et al. (2017) . However, the parameterized transition model could still in practice revert to a pure sequence-to-sequence model.

A.2 Marginalization

Next we briefly describe the dynamic program used to marginalize over alignments during training and to find the most likely alignment of a given sequence pair during inference; we refer the reader to Yu, Buys, and Blunsom (2016) for a more thorough treatment.

The forward variable $\alpha_i(j)$, representing $p(y_{1:j}, a_j = i \mid x_{1:i})$, is defined recursively as

$$\alpha_i(j) = p(y_j \mid i, x_{1:i}, y_{1:j-1}) \times \sum_{k=1}^{i} \alpha_k(j-1)\, p(a_j = i \mid k, x_{1:k}, y_{1:j-1}). \tag{7}$$

The marginal likelihood objective is to train the model to optimize $\alpha_m(n) = p(y_{1:n}, a_n = m \mid x_{1:m})$. The gradients are computed with automatic differentiation; as this is equivalent to using the forward-backward algorithm to estimate the gradients (Eisner, 2016), only the forward algorithm has to be implemented.

To make the implementation GPU-efficient, we vectorize the computation of $\alpha$. The computation iterates through decoding steps, each of which can be generated from an alignment to any of the encoder tokens. We can efficiently construct a transition matrix $T$, corresponding to all possible encoder states performing all possible shifts, and an emission matrix $E_j$, which is a gather by word index $j$.

To compute the forward probabilities at each timestep, the current forward probabilities are first multiplied by all possible transitions. A sum in log space collapses all paths, and the emission (word generation) probabilities are multiplied in to obtain the new forward probabilities. When fixed transition probabilities are used, $T$ is precomputed.
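A small NumPy sketch of this vectorized forward recursion of Equation (7) in log space (shapes, names, and the initialization convention are illustrative assumptions):

```python
import numpy as np
from scipy.special import logsumexp

def forward_log_likelihood(log_T, log_E):
    """Forward algorithm in log space for a monotonic alignment model.

    log_T: (m, m) matrix with log_T[k, i] = log p(a_j = i | a_{j-1} = k);
           entries with i < k should be -inf to enforce monotonicity.
    log_E: (n, m) matrix with log_E[j, i] = log p(y_j | a_j = i, x, y_{1:j-1}).
    Returns log alpha_m(n) = log p(y_{1:n}, a_n = m | x).
    """
    n, m = log_E.shape
    # first word: assume the alignment starts at the first input token and may
    # shift before emitting (a simplifying assumption for this sketch)
    log_alpha = log_T[0] + log_E[0]                      # shape (m,)
    for j in range(1, n):
        # sum over previous alignments k, then multiply in the emission term
        log_alpha = logsumexp(log_alpha[:, None] + log_T, axis=0) + log_E[j]
    return log_alpha[m - 1]  # last output word aligned to the last input token
```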

A.3 Viterbi EM Training

Latent variable models can be trained either by directly optimizing the likelihood objective through gradient descent (as described above), or with the Expectation Maximization (EM) algorithm (Dempster et al., 1977), which alternates between calculating expectations over the values of the latent variables given the current parameters, and maximizing the expected complete data log likelihood given those expectations. We consider training our alignment model with Viterbi EM (Brown et al., 1993), also known as "hard" EM, where at each iteration the most likely assignment of the hidden variables (alignments) is found and the parameters are updated to optimize the log likelihood given those alignments. Viterbi EM has been shown to give superior performance to standard EM on unsupervised parsing (Spitkovsky et al., 2010), with better convergence properties in practice because it makes the distribution more peaked.

We perform batched Viterbi EM training by computing the Viterbi alignments for a batch, and then performing a gradient step based on treating those alignments as observations.
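Schematically, one batched update looks like the following sketch (PyTorch-style; `viterbi_align` and `joint_log_prob` are hypothetical hooks into the model and the max-product dynamic program of Section A.4):

```python
import torch

def viterbi_em_step(model, batch, optimizer, viterbi_align):
    """One batched 'hard' EM update: the E-step finds the most likely alignments
    under the current parameters, the M-step takes a gradient step on the
    complete-data log likelihood of Equation (2) given those alignments."""
    with torch.no_grad():
        # E-step: Viterbi alignments are treated as observed
        alignments = [viterbi_align(model, x, y) for x, y in batch]

    # M-step: one gradient step on -log p(y, a | x), averaged over the batch
    optimizer.zero_grad()
    loss = sum(-model.joint_log_prob(x, y, a)
               for (x, y), a in zip(batch, alignments)) / len(batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```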

We follow a two-stage training procedure: we first directly optimize the marginal likelihood with batched SGD to find a reasonable initial distribution over alignments, before switching to Viterbi EM training. Such a strategy has been shown to reduce the chance that the model will get stuck in local optima (Spitkovsky et al., 2011).

A.4 Inference

We apply the trained models to multiple inference problems to evaluate how well they are capturing script knowledge. The first is finding the most likely alignment given a pair of abstract and concrete sequences. We use the standard Viterbi algorithm, in which we replace the sum in Equation (7) with max, and keep track of the index corresponding to each value of $\alpha$ during the forward computation. The most likely alignment can then be traced back from $a_n = m$.

The second inference problem is slot-filling, for application to the cloze task. Given an abstract sentence and a partially-filled concrete sequence, we want to use the model to predict words to fill the given blanks. To make the prediction, we sample 5 candidate sequences by predicting words for each slot, in left-to-right order, and then choose the sequence with the highest overall probability. Words are predicted by sampling with temperature 0.1, in order to peak the distribution while still allowing some diversity in the samples. The motivation for selecting the final output from multiple samples is that the original samples are biased, as they are only conditioned on the left context.

At the start of the prediction for each slot, the Viterbi alignment of the prefix of the sequence up to the start of that slot is re-predicted, independent of previous alignment predictions. Consequently alignment decisions can be revised, and the slot alignments are no longer constrained to be monotonic, which makes the slot prediction model more flexible. For the parameterized transition model, the slot alignment is predicted greedily by incrementing the last predicted alignment while the shift probability is greater than 0.5. The fixed transition model assumes that the alignment of the word preceding the slot is shared across the slot.
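The sampling-and-rescoring procedure can be sketched as follows (hypothetical hooks; `sample_slot` would internally re-predict the prefix alignment as described above):

```python
def fill_slots(num_slots, sample_slot, sequence_log_prob,
               num_samples=5, temperature=0.1):
    """Fill the blanks of one cloze example.

    sample_slot(filled_so_far, slot_index, temperature) -> list of words (hypothetical hook).
    sequence_log_prob(filled) -> log probability of the fully filled sequence (hypothetical hook).
    """
    candidates = []
    for _ in range(num_samples):
        filled = []
        for s in range(num_slots):
            # low temperature peaks the distribution but keeps some sample diversity
            filled.append(sample_slot(filled, s, temperature))
        candidates.append(filled)
    # samples condition only on left context, so rescore the complete sequences
    return max(candidates, key=sequence_log_prob)
```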
