Niladri S. Chatterji
Postdoctoral Researcher, Department of Computer Science, Stanford University
Verified email at cs.stanford.edu
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 4799 · 2021
The llama 3 herd of models
A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, ...
arXiv preprint arXiv:2407.21783, 2024
Cited by 2275 · 2024
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Cited by 1188 · 2022
Underdamped Langevin MCMC: A non-asymptotic analysis
X Cheng, NS Chatterji, PL Bartlett, MI Jordan
Conference on Learning Theory 75, 300--323, 2018
Cited by 362 · 2018
Convergence rates for Langevin Monte Carlo in the nonconvex setting
X Cheng, NS Chatterji, Y Abbasi-Yadkori, PL Bartlett, MI Jordan
arXiv preprint arXiv:1805.01648, 2018
Cited by 192* · 2018
Is there an analog of Nesterov acceleration for gradient-based MCMC?
YA Ma, NS Chatterji, X Cheng, N Flammarion, PL Bartlett, MI Jordan
Bernoulli 27 (3), 1942--1992, 2021
Cited by 177 · 2021
Finite-sample analysis of interpolating linear classifiers in the overparameterized regime
NS Chatterji, PM Long
Journal of Machine Learning Research 22 (129), 1--30, 2021
Cited by 142 · 2021
On the theory of variance reduction for stochastic gradient Monte Carlo
NS Chatterji, N Flammarion, YA Ma, PL Bartlett, MI Jordan
International Conference on Machine Learning 80, 764--773, 2018
Cited by 108 · 2018
Proving test set contamination in black box language models
Y Oren, N Meister, N Chatterji, F Ladhak, TB Hashimoto
arXiv preprint arXiv:2310.17623, 2023
Cited by 104 · 2023
Benign overfitting without linearity: Neural network classifiers trained by gradient descent for noisy linear data
S Frei, NS Chatterji, PL Bartlett
Conference on Learning Theory 178, 2668--2703, 2022
Cited by 97 · 2022
On the opportunities and risks of foundation models (2021)
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258
Cited by 89
On the opportunities and risks of foundation models (arXiv:2108.07258)
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
Cited by 88 · 2022
On the opportunities and risks of foundation models (2021)
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2022
Cited by 74 · 2022
The llama 3 herd of models
A Grattafiori, A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, ...
arXiv e-prints, arXiv: 2407.21783, 2024
Cited by 69 · 2024
The intriguing role of module criticality in the generalization of deep networks
NS Chatterji, B Neyshabur, H Sedghi
International Conference on Learning Representations, 2020
Cited by 69 · 2020
Langevin Monte Carlo without smoothness
NS Chatterji, J Diakonikolas, MI Jordan, PL Bartlett
International Conference on Artificial Intelligence and Statistics 108, 1716 …, 2020
Cited by 53 · 2020
OSOM: A simultaneously optimal algorithm for multi-armed and linear contextual bandits
NS Chatterji, V Muthukumar, PL Bartlett
International Conference on Artificial Intelligence and Statistics 108, 1844 …, 2020
Cited by 45 · 2020
Alternating minimization for dictionary learning with random initialization
NS Chatterji, PL Bartlett
Advances in Neural Information Processing Systems 30, 2017
Cited by 43* · 2017
Random feature amplification: Feature learning and generalization in neural networks
S Frei, NS Chatterji, PL Bartlett
arXiv preprint arXiv:2202.07626, 2022
Cited by 33 · 2022
On the theory of reinforcement learning with once-per-episode feedback
NS Chatterji, A Pacchiano, PL Bartlett, MI Jordan
Advances in Neural Information Processing Systems 34, 3401--3412, 2021
Cited by 33 · 2021