Tzu-Quan Lin
Verified email at ntu.edu.tw
Title · Cited by · Year
SUPERB @ SLT 2022: Challenge on generalization and efficiency of self-supervised speech representation learning
T Feng, A Dong, CF Yeh, S Yang, TQ Lin, J Shi, KW Chang, Z Huang, ...
IEEE SLT 2022, 1096-1103, 2023
Cited by 34 · 2023
MelHuBERT: A simplified HuBERT on Mel spectrograms
TQ Lin, H Lee, H Tang
IEEE ASRU 2023, 1-8, 2023
Cited by 20 · 2023
Compressing Transformer-based self-supervised models for speech processing
TQ Lin, TH Yang, CY Chang, KM Chen, T Feng, H Lee, H Tang
Submitted to TASLP, 2022
Cited by 6 · 2022
Dynamic-SUPERB Phase-2: A collaboratively expanding benchmark for measuring the capabilities of spoken language models with 180 tasks
C Huang, WC Chen, S Yang, AT Liu, CA Li, YX Lin, WC Tseng, A Diwan, ...
arXiv preprint arXiv:2411.05361, 2024
Cited by 3 · 2024
On the social bias of speech self-supervised models
YC Lin, TQ Lin, HC Lin, AT Liu, H Lee
INTERSPEECH 2024, 2024
Cited by 3 · 2024
Building a Taiwanese Mandarin spoken language model: A first attempt
CK Yang, YK Fu, CA Li, YC Lin, YX Lin, WC Chen, HL Chung, CY Kuan, ...
arXiv preprint arXiv:2411.07111, 2024
Cited by 1 · 2024
Property Neurons in Self-Supervised Speech Transformers
TQ Lin, GT Lin, H Lee, H Tang
IEEE SLT 2024, 2024
2024
Listen and Speak Fairly: A Study on Semantic Gender Bias in Speech Integrated Large Language Models
YC Lin, TQ Lin, CK Yang, KH Lu, WC Chen, CY Kuan, H Lee
IEEE SLT 2024, 2024
2024
DAISY: Data Adaptive Self-Supervised Early Exit for Speech Representation Models
TQ Lin, H Lee, H Tang
INTERSPEECH 2024, 2024
2024
Articles 1–9