| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Efficient attentions for long document summarization | L Huang, S Cao, N Parulian, H Ji, L Wang | arXiv preprint arXiv:2104.02112, 2021 | 163 | 2021 |
| CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization | S Cao, L Wang | arXiv preprint arXiv:2109.09209, 2021 | 149 | 2021 |
| Controllable open-ended question generation with a new question type ontology | S Cao, L Wang | arXiv preprint arXiv:2107.00152, 2021 | 34 | 2021 |
| HIBRIDS: Attention with hierarchical biases for structure-aware long document summarization | S Cao, L Wang | arXiv preprint arXiv:2203.10741, 2022 | 32 | 2022 |
| Inference time style control for summarization | S Cao, L Wang | arXiv preprint arXiv:2104.01724, 2021 | 17 | 2021 |
| Attention head masking for inference time content selection in abstractive summarization | S Cao, L Wang | arXiv preprint arXiv:2104.02205, 2021 | 14 | 2021 |
| Time-aware prompting for text generation | S Cao, L Wang | arXiv preprint arXiv:2211.02162, 2022 | 10 | 2022 |
| AWESOME: GPU memory-constrained long document summarization using memory mechanism and global salient content | S Cao, L Wang | arXiv preprint arXiv:2305.14806, 2023 | 1 | 2023 |
| BUMP: A benchmark of unfaithful minimal pairs for meta-evaluation of faithfulness metrics | L Ma, S Cao, RL Logan IV, D Lu, S Ran, K Zhang, J Tetreault, ... | arXiv preprint arXiv:2212.09955, 2022 | 1 | 2022 |
| Multi-view source ablation for faithful summarization | S Cao, L Ma, D Lu, RL Logan IV, J Tetreault, A Jaimes | Findings of the Association for Computational Linguistics: EACL 2023, 2029-2047, 2023 | | 2023 |