Wei-Tsung Kao
Title · Cited by · Year
Utilizing Self-supervised Representations for MOS Prediction
WC Tseng, C Huang, WT Kao, YY Lin, H Lee
arXiv preprint arXiv:2104.03017, 2021
Cited by 63 · 2021
DDOS: A MOS Prediction Framework utilizing Domain Adaptive Pre-training and Distribution of Opinion Scores
WC Tseng, WT Kao, H Lee
arXiv preprint arXiv:2204.03219, 2022
Cited by 20 · 2022
Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models' Transferability
WT Kao, HY Lee
Findings of the Association for Computational Linguistics: EMNLP 2021, 2195–2208, 2021
Cited by 18 · 2021
Membership Inference Attacks Against Self-supervised Speech Models
WC Tseng, WT Kao, H Lee
arXiv preprint arXiv:2111.05113, 2021
Cited by 16 · 2021
On the Efficiency of Integrating Self-supervised Learning and Meta-learning for User-defined Few-shot Keyword Spotting
WT Kao, YK Wu, CP Chen, ZS Chen, YP Tsai, HY Lee
arXiv preprint arXiv:2204.00352, 2022
Cited by 8 · 2022
Further boosting BERT-based models by duplicating existing layers: Some intriguing phenomena inside BERT
WT Kao, TH Wu, PH Chi, CC Hsieh, HY Lee
arXiv preprint arXiv:2001.09309, 2020
Cited by 6 · 2020
BERT's output layer recognizes all hidden layers? Some Intriguing Phenomena and a simple way to boost BERT
WT Kao, TH Wu, PH Chi, CC Hsieh, HY Lee
arXiv preprint arXiv:2001.09309, 2020
Cited by 4 · 2020
J-ReCoVer: Java Reducer Commutativity Verifier
YF Chen, CY Chiang, L Holík, WT Kao, HH Lin, T Vojnar, YF Wen, WC Wu
Asian Symposium on Programming Languages and Systems, 357–366, 2019
2019