Evaluation metrics for Text Span: For text-span questions, whose answers are strings, we need to compare the predicted string(s) with the ground truth answer string(s) (i.e., the correct answer). RC-style QA tasks generally use the evaluation metrics Exact Match (EM) and F1 score (F1), proposed by Rajpurkar et al. [94], for text-span questions [104, 116]. EM assigns credit 1.0 to a question whose predicted answer is exactly the same as the ground truth answer and 0.0 otherwise; its computation is therefore the same as that of Accuracy, but it applies to a different category of RC-style QA. F1 measures the average word overlap between the predicted answer and the ground truth answer. Both answers are treated as bags of words after lowercasing and removing punctuation and the articles "a", "an", and "the". For example, the answer "The Question Answering System" is treated as the set of words {question, answering, system}. The F1 of each text-span question can therefore be computed at the word level by Equation 2.2.
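As a concrete illustration of this normalize-then-overlap computation, the following is a minimal Python sketch in the spirit of the SQuAD evaluation procedure described above. The function names (normalize_answer, exact_match, f1_score) and the exact normalization order are our own illustrative choices, not identifiers from any particular codebase.

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, drop the articles 'a', 'an', 'the' and all punctuation,
    then collapse whitespace (illustrative normalization, as in the text)."""
    s = s.lower()
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    return " ".join(s.split())

def exact_match(prediction: str, ground_truth: str) -> float:
    """EM: credit 1.0 if the normalized answers are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction: str, ground_truth: str) -> float:
    """Word-level F1 between the two normalized answers,
    each treated as a bag of words."""
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    # Multiset intersection counts each shared word at most as often
    # as it occurs in both answers.
    num_same = sum((Counter(pred_tokens) & Counter(gt_tokens)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

For instance, with the prediction "The Question Answering System" and the ground truth "question answering", the normalized bags are {question, answering, system} and {question, answering}, giving precision 2/3, recall 1, and F1 = 0.8; exact_match on the same pair returns 0.0.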