Niklas Muennighoff
Verified email at stanford.edu - Homepage
Title
Cited by
Year
Bloom: A 176b-parameter open-access multilingual language model
BS Workshop, TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, ...
JMLR 2023, 2022
1626*  2022
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR 2023, 2022
1126  2022
StarCoder: may the source be with you!
R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, ...
TMLR 2023, 2023
818*  2023
Crosslingual generalization through multitask finetuning
N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ...
ACL 2023, 2022
618  2022
A framework for few-shot language model evaluation
L Gao, J Tow, S Biderman, S Black, A DiPofi, C Foster, L Golding, J Hsu, ...
GitHub, 2021
582*  2021
MTEB: Massive text embedding benchmark
N Muennighoff, N Tazi, L Magne, N Reimers
EACL 2023, 2022
414  2022
C-pack: Packaged resources to advance general chinese embedding
S Xiao, Z Liu, P Zhang, N Muennighoff
SIGIR 2024, 2023
269  2023
SantaCoder: don't reach for the stars!
LB Allal, R Li, D Kocetkov, C Mou, C Akiki, CM Ferrandis, N Muennighoff, ...
ICLR 2023 DL4C Workshop, Best Paper Award, 2023
216*  2023
Kto: Model alignment as prospect theoretic optimization
K Ethayarajh, W Xu, N Muennighoff, D Jurafsky, D Kiela
ICML 2024 Spotlight, 2024
203  2024
SGPT: GPT sentence embeddings for semantic search
N Muennighoff
arXiv, 2022
174  2022
Scaling Data-Constrained Language Models
N Muennighoff, AM Rush, B Barak, TL Scao, A Piktus, N Tazi, S Pyysalo, ...
NeurIPS 2023 Oral, Outstanding Paper Runner-Up Award, 2023
173  2023
Olmo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
ACL 2024, Best Theme Paper Award, 2024
141*  2024
Octopack: Instruction tuning code large language models
N Muennighoff, Q Liu, A Zebaze, Q Zheng, B Hui, TY Zhuo, S Singh, ...
ICLR 2024 Spotlight, NeurIPS 2023 Instruction Workshop, 2023
136  2023
Starcoder 2 and the stack v2: The next generation
A Lozhkov, R Li, LB Allal, F Cassano, J Lamy-Poirier, N Tazi, A Tang, ...
arXiv, 2024
108  2024
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
ACL 2024, Best Resource Paper Award, 2024
107*  2024
What Language Model to Train if You Have One Million GPU Hours?
TL Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, S Biderman, ...
EMNLP 2022 Findings, 2022
100  2022
Aya model: An instruction finetuned open-access multilingual language model
A Üstün, V Aryabumi, ZX Yong, WY Ko, D D'souza, G Onilude, N Bhandari, ...
ACL 2024, Best Paper Award, 2024
76  2024
Nl-augmenter: A framework for task-sensitive natural language augmentation
KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, ...
NEJLT 2023, 2021
72  2021
Vilio: state-of-the-art Visio-Linguistic models applied to hateful memes
N Muennighoff
NeurIPS 2020 Competitions, 2020
68  2020
The hateful memes challenge: Competition report
D Kiela, H Firooz, A Mohan, V Goswami, A Singh, CA Fitzpatrick, P Bull, ...
NeurIPS 2020 Competitions, 2021
67  2021