Generalization from newly learned words reveals structural properties of the human reading system.

Journal of Experimental Psychology: General (2017)

Abstract
Connectionist accounts of quasiregular domains, such as spelling-sound correspondences in English, represent exception words (e.g., pint) amid regular words (e.g., mint) via a graded "warping" mechanism. Warping allows the model to extend the dominant pronunciation to nonwords (regularization) with minimal interference (spillover) from the exceptions. We tested for a behavioral marker of warping by investigating the degree to which participants generalized from newly learned made-up words, whose pronunciations ranged from the dominant pattern (regular), through a subordinate pattern (ambiguous), to a previously nonexistent pattern (exception). The new words were learned over 2 days, and generalization was assessed 48 hr later using nonword neighbors of the new words in a tempo naming task. The frequency of regularization (a measure of generalization) was directly related to the degree of warping required to learn the pronunciation of the new word. Simulations using the Plaut, McClelland, Seidenberg, and Patterson (1996) model further support a warping interpretation. These findings highlight the need to develop theories of representation that are integrally tied to how those representations are learned and generalized.
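The regularization-with-spillover behavior the abstract describes can be illustrated with a toy network. The sketch below is not the Plaut et al. (1996) implementation; all words, input codings, and hyperparameters are invented for this example. It trains a small backpropagation network on spelling-sound pairs in which the "-int" body takes the regular vowel for most onsets (as in mint) and an exceptional vowel for one (pint), then probes generalization with a nonword neighbor (bint).

```python
# Toy illustration of "warping" in a quasiregular domain (hypothetical
# setup, not the Plaut et al., 1996, model): most "-int" words take the
# regular vowel /I/; "pint" alone takes the exceptional /aI/. A nonword
# neighbor ("bint") then probes whether the dominant pronunciation
# generalizes (regularization) with only mild spillover from the exception.

import numpy as np

rng = np.random.default_rng(0)

# Training vocabulary; output is a 2-way one-hot choice between the
# regular vowel /I/ and the exceptional vowel /aI/.
words = ["mint", "hint", "tint", "lint", "pint"]          # pint is the exception
targets = np.array([[1, 0], [1, 0], [1, 0], [1, 0], [0, 1]], dtype=float)

letters = sorted(set("".join(words + ["bint"])))
idx = {c: i for i, c in enumerate(letters)}

def encode(word):
    """Position-specific one-hot letter code (slot-based orthography)."""
    x = np.zeros(len(word) * len(letters))
    for pos, c in enumerate(word):
        x[pos * len(letters) + idx[c]] = 1.0
    return x

X = np.stack([encode(w) for w in words])

# One hidden layer of sigmoid units, trained by plain batch backprop.
n_in, n_hid, n_out = X.shape[1], 10, 2
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for epoch in range(3000):
    h = sigmoid(X @ W1)
    y = sigmoid(h @ W2)
    err = y - targets
    # Gradient of summed squared error through the sigmoids.
    d_out = err * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_hid

# Probe generalization with a nonword neighbor of the training items.
h = sigmoid(encode("bint") @ W1)
p_regular, p_exception = sigmoid(h @ W2)
print(f"bint -> regular /I/: {p_regular:.2f}, exception /aI/: {p_exception:.2f}")
```

Run as-is, the probe typically favors the regular vowel while still showing some activation of the exceptional one, a rough analogue of the graded warping that the paper's tempo naming experiment tests behaviorally.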
Keywords
quasiregularity, connectionist models, word learning, tempo naming