Ananya Harsh Jha
Allen Institute for AI
Verified email at allenai.org
Title · Cited by · Year
Disentangling factors of variation with cycle-consistent variational auto-encoders
AH Jha, S Anand, M Singh, VSR Veeravasarapu
Proceedings of the European Conference on Computer Vision (ECCV), 805-820, 2018
Cited by 146 · 2018
TorchMetrics - Measuring Reproducibility in PyTorch
N Detlefsen, J Borovec, J Schock, A Jha, T Koker, L Di Liello
Cited by 86* · 2022
OLMo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
arXiv preprint arXiv:2402.00838, 2024
Cited by 22 · 2024
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
arXiv preprint arXiv:2402.00159, 2024
Cited by 15 · 2024
AASAE: Augmentation-Augmented Stochastic Autoencoders
W Falcon, AH Jha, T Koker, K Cho
arXiv preprint arXiv:2107.12329, 2021
Cited by 6* · 2021
Large Language Model Distillation Doesn't Need a Teacher
AH Jha, D Groeneveld, E Strubell, I Beltagy
arXiv preprint arXiv:2305.14864, 2023
Cited by 3 · 2023
Paloma: A Benchmark for Evaluating Language Model Fit
I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, ...
arXiv preprint arXiv:2312.10523, 2023
Cited by 1 · 2023
Robust Tooling and New Resources for Large Language Model Evaluation via Catwalk
K Richardson, I Magnusson, O Tafjord, A Bhagia, I Beltagy, A Cohan, ...
Articles 1–8