Title | Authors | Venue | Cited by | Year
Utilizing Self-supervised Representations for MOS Prediction | WC Tseng, C Huang, WT Kao, YY Lin, H Lee | arXiv preprint arXiv:2104.03017 | 63 | 2021
DDOS: A MOS Prediction Framework utilizing Domain Adaptive Pre-training and Distribution of Opinion Scores | WC Tseng, WT Kao, H Lee | arXiv preprint arXiv:2204.03219 | 20 | 2022
Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models' Transferability | WT Kao, HY Lee | Findings of the Association for Computational Linguistics: EMNLP 2021, 2195–2208 | 18 | 2021
Membership Inference Attacks Against Self-supervised Speech Models | WC Tseng, WT Kao, H Lee | arXiv preprint arXiv:2111.05113 | 16 | 2021
On the Efficiency of Integrating Self-supervised Learning and Meta-learning for User-defined Few-shot Keyword Spotting | WT Kao, YK Wu, CP Chen, ZS Chen, YP Tsai, HY Lee | arXiv preprint arXiv:2204.00352 | 8 | 2022
Further boosting BERT-based models by duplicating existing layers: Some intriguing phenomena inside BERT | WT Kao, TH Wu, PH Chi, CC Hsieh, HY Lee | arXiv preprint arXiv:2001.09309 | 6 | 2020
BERT's output layer recognizes all hidden layers? Some Intriguing Phenomena and a simple way to boost BERT | WT Kao, TH Wu, PH Chi, CC Hsieh, HY Lee | arXiv preprint arXiv:2001.09309 | 4 | 2020
J-ReCoVer: Java Reducer Commutativity Verifier | YF Chen, CY Chiang, L Holík, WT Kao, HH Lin, T Vojnar, YF Wen, WC Wu | Asian Symposium on Programming Languages and Systems, 357–366 | | 2019