Takateru Yamakoshi
Verified email at g.ecc.u-tokyo.ac.jp - Homepage
Title
Cited by
Year
Reconstructing the cascade of language processing in the brain using the internal computations of a transformer-based language model
S Kumar, TR Sumers, T Yamakoshi, A Goldstein, U Hasson, KA Norman, ...
bioRxiv, 2022.06.08.495348, 2022
Cited by: 36 · Year: 2022
Investigating representations of verb bias in neural language models
RD Hawkins, T Yamakoshi, TL Griffiths, AE Goldberg
arXiv preprint arXiv:2010.02375, 2020
Cited by: 22 · Year: 2020
Probing BERT's priors with serial reproduction chains
T Yamakoshi, TL Griffiths, RD Hawkins
arXiv preprint arXiv:2202.12226, 2022
Cited by: 12 · Year: 2022
Causal interventions expose implicit situation models for commonsense language understanding
T Yamakoshi, JL McClelland, AE Goldberg, RD Hawkins
arXiv preprint arXiv:2306.03882, 2023
Cited by: 4 · Year: 2023
Shared functional specialization in transformer-based language models and the human brain
S Kumar, TR Sumers, T Yamakoshi, A Goldstein, U Hasson, KA Norman, ...
Cited by: 2*
Neural Constructions
T Yamakoshi, R Hawkins
OSF, 2020
Year: 2020
Reconstructing the cascade of language processing in the brain using the internal computations of transformer language models
S Kumar, TR Sumers, T Yamakoshi, A Goldstein, U Hasson, KA Norman, ...
Articles 1–7