Children in their early school years have a relatively good understanding of objects in the world and their labels, but are still learning to associate abstract word shapes with these familiar meanings. Embodiment theories of semantics (Barsalou, 2008; Fischer & Zwaan, 2008; Pulvermüller et al., 2005; Simmons et al., 2008) suggest that word meaning is at least partially stored in distributed sensorimotor networks across the brain, and there is now substantial neuropsychological evidence supporting these theories in adults.

To investigate how printed words become associated with word meaning as children learn to read, we therefore examined when and how printed word categories begin to engage the sensorimotor networks in the cortical areas activated by those categories. In proficiently reading adults, reading a word activates the same brain regions as viewing the picture or action described by that word. For example, written tool, animal and building names engage regions in the occipito-temporal and parietal cortices of the mature brain that are also activated by pictures of tools, animals and buildings (Boronat et al., 2005; Chao et al., 1999; Devlin et al., 2005; Shinkareva et al., 2011; but see Gerlach, 2007; Tyler et al., 2003). In a seminal study, Pulvermüller et al. (2005) showed that TMS stimulation of hand and leg areas of the left motor cortex facilitates adults' lexical decisions about printed arm- and leg-related words in a somatotopic manner (see also Buccino et al., 2005). Similarly, Lindemann, Stenneken, van Schie, and Bekkering (2006) showed that preparing an action involving the eyes or the mouth led to faster lexical decisions when subjects read the words "eye" or "mouth", respectively. This demonstrates that sensorimotor cortex activation in mature readers plays a role in extracting meaning from printed words.

Sensorimotor activations can occur rapidly and automatically in response to printed words, even when attention is distracted (Hauk et al., 2008; Kiefer et al., 2008; Shtyrov et al., 2004). They are also, however, modulated by task context (Hoenig et al., 2008; Simmons et al., 2008). For example, BOLD responses in the adult brain are more pronounced during tasks involving deliberate retrieval of category-specific object features than during tasks that do not, such as purely perceptual tasks (e.g., size discrimination) or name or function retrieval (Boronat et al., 2005; Devlin et al., 2005; Kellenbach et al., 2003; Noppeney et al., 2006; Tomasino et al., 2007). Sato, Mengarelli, Riggio, Gallese, and Buccino (2008) found that reading hand-action verbs only interfered with manual button presses during an explicit semantic judgment task, and not during lexical decision-making.
