Ground-truth labels matter: A deeper look into input-label demonstrations. KM Yoo, J Kim, HJ Kim, H Cho, H Jo, SW Lee, S Lee, T Kim. arXiv preprint arXiv:2205.12685, 2022. Cited by 64*.
Self-generated in-context learning: Leveraging auto-regressive language models as a demonstration generator. HJ Kim, H Cho, J Kim, T Kim, KM Yoo, S Lee. Workshop on Large-scale Pre-trained Language Models (NAACL Workshop), 2022. Cited by 28.
Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners. H Cho, HJ Kim, J Kim, SW Lee, S Lee, KM Yoo, T Kim. Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 12709 …, 2023. Cited by 13.
Probing out-of-distribution robustness of language models with parameter-efficient transfer learning. H Cho, C Park, J Kim, HJ Kim, KM Yoo, S Lee. arXiv preprint arXiv:2301.11660, 2023. Cited by 1.
Aligning language models to explicitly handle ambiguity. HJ Kim, Y Kim, C Park, J Kim, C Park, KM Yoo, S Lee, T Kim. arXiv preprint arXiv:2404.11972, 2024.
Universal domain adaptation for robust handling of distributional shifts in NLP. HJ Kim, H Cho, SW Lee, J Kim, C Park, S Lee, KM Yoo, T Kim. Findings of the Association for Computational Linguistics: EMNLP 2023, 5888–5905, 2023.