Jeffrey Wu
OpenAI
Verified email at openai.com
Title · Cited by · Year
Language models are few-shot learners
TB Brown
arXiv preprint arXiv:2005.14165, 2020
Cited by 35552 · 2020
Language models are unsupervised multitask learners
A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever
OpenAI blog 1 (8), 9, 2019
Cited by 24457* · 2019
Training language models to follow instructions with human feedback
L Ouyang, J Wu, X Jiang, D Almeida, C Wainwright, P Mishkin, C Zhang, ...
Advances in neural information processing systems 35, 27730-27744, 2022
Cited by 10397 · 2022
GPT-4 technical report
J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ...
arXiv preprint arXiv:2303.08774, 2023
Cited by 6558* · 2023
Scaling laws for neural language models
J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, ...
arXiv preprint arXiv:2001.08361, 2020
Cited by 2389 · 2020
Generative pretraining from pixels
M Chen, A Radford, R Child, J Wu, H Jun, D Luan, I Sutskever
International conference on machine learning, 1691-1703, 2020
Cited by 1741 · 2020
Learning to summarize with human feedback
N Stiennon, L Ouyang, J Wu, D Ziegler, R Lowe, C Voss, A Radford, ...
Advances in Neural Information Processing Systems 33, 3008-3021, 2020
Cited by 1667 · 2020
Fine-tuning language models from human preferences
DM Ziegler, N Stiennon, J Wu, TB Brown, A Radford, D Amodei, ...
arXiv preprint arXiv:1909.08593, 2019
Cited by 1345 · 2019
WebGPT: Browser-assisted question-answering with human feedback
R Nakano, J Hilton, S Balaji, J Wu, L Ouyang, C Kim, C Hesse, S Jain, ...
arXiv preprint arXiv:2112.09332, 2021
Cited by 1057 · 2021
Release strategies and the social impacts of language models
I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ...
arXiv preprint arXiv:1908.09203, 2019
Cited by 592 · 2019
Open X-Embodiment: Robotic learning datasets and RT-X models
A O'Neill, A Rehman, A Gupta, A Maddukuri, A Gupta, A Padalkar, A Lee, ...
arXiv preprint arXiv:2310.08864, 2023
Cited by 267 · 2023
Recursively summarizing books with human feedback
J Wu, L Ouyang, DM Ziegler, N Stiennon, R Lowe, J Leike, P Christiano
arXiv preprint arXiv:2109.10862, 2021
Cited by 250 · 2021
Language models can explain neurons in language models
S Bills, N Cammarata, D Mossing, H Tillman, L Gao, G Goh, I Sutskever, ...
URL https://openaipublic.blob.core.windows.net/neuron-explainer/paper …, 2023
Cited by 198 · 2023
Self-critiquing models for assisting human evaluators
W Saunders, C Yeh, J Wu, S Bills, L Ouyang, J Ward, J Leike
arXiv preprint arXiv:2206.05802, 2022
Cited by 198 · 2022
Weak-to-strong generalization: Eliciting strong capabilities with weak supervision
C Burns, P Izmailov, JH Kirchner, B Baker, L Gao, L Aschenbrenner, ...
arXiv preprint arXiv:2312.09390, 2023
Cited by 158 · 2023
Scaling and evaluating sparse autoencoders
L Gao, TD la Tour, H Tillman, G Goh, R Troll, A Radford, I Sutskever, ...
arXiv preprint arXiv:2406.04093, 2024
Cited by 35 · 2024
FMB: A functional manipulation benchmark for generalizable robotic learning
J Luo, C Xu, F Liu, L Tan, Z Lin, J Wu, P Abbeel, S Levine
The International Journal of Robotics Research, 02783649241276017, 2023
Cited by 16 · 2023
Action-quantized offline reinforcement learning for robotic skill learning
J Luo, P Dong, J Wu, A Kumar, X Geng, S Levine
Conference on Robot Learning, 1348-1361, 2023
Cited by 14 · 2023
Sufficient statistics for team decision problems
J Wu
Stanford University, 2013
Cited by 11 · 2013
A theory of sufficient statistics for teams
J Wu, S Lall
53rd IEEE conference on decision and control, 2628-2635, 2014
Cited by 10 · 2014
Articles 1–20