How are you reading this? This is not a question of environment or media. Whether you are reading this on a tablet, laptop or smartphone makes no difference to how your brain is processing pixels into words with meanings.
As you are reading, try to become aware of what is going on as you process pixels. How fast are you reading? Are you reading whole words, or are there some, such as perspicacious, where you look again at the cluster of letters to work out the sounds and get them clear in your mind?
Another question: ask yourself,
Can I hear my own voice saying the words in my head as I read this?
I’m willing to wager that you can hear your own voice. In fact, I’m willing to wager that you cannot read without hearing the words. Unless you have had a severe hearing impairment since birth, it is unlikely that you are capable of reading without hearing.
Why is that? There is a short answer and a longer answer. Historians can provide us with the short answer: it’s because we’ve been speaking for far longer. Reading is a subordinate activity to speaking. Although some people argue that Homo sapiens has always been able to ‘communicate using sounds’, which would date the capacity to speak to around a million years ago, most historians and linguists put the age of syntactic speech at between forty and sixty thousand years.
And what about writing? You can argue that the process of putting pen to paper, or reed to rock, goes back 30,000 years to the cave paintings of Southern Europe. But ‘true writing’, a system of coding utterances so that they can be reconstructed as speech by someone else, first appeared around 5,000 years ago in Mesopotamia, modern-day Iraq. It also appeared independently in China and Mesoamerica, modern-day Mexico.
On the boulder you can see an example of ‘proto-writing’ – a step short of ‘true writing’, in which logographs are used to convey words. The next step is to translate the speech sounds, rather than words, into symbols. The earliest form of this is cuneiform, which was originally written by pressing reeds into wet clay to make small angular marks. In this example these are scratches in stone:
When symbols represent speech sounds, they can of course be used to code and reconstruct different languages. The example above shows writing in Akkadian. Below you can see an early bilingual Akkadian–Sumerian dictionary, with both languages written in the cuneiform script:
Cuneiform uses a syllabary: the symbols represent syllables – ba, bo, bi; ta, to, ti; and so on. Some modern languages still use a syllabary, including Japanese, which uses two. The English language, of course, does not use a syllabary; it uses an alphabet. The next step towards what you are reading here can be seen in the Phoenician alphabet, where the symbols represent consonants rather than syllables.
It’s easy for us to see the links between Phoenician characters and ours. What looks like a 4 in the middle of this piece of writing will be transformed into A – try looking sideways.
The addition of vowels is the last step to an alphabet. Even if you don’t read Ancient Greek, it’s hard not to pick out the name of the Greek god at the end of this inscription:
This brings us back to the present day, to here and now, to the fact that you are ‘hearing’ what you read. For this is what reading is: the reconstruction of scratches, squiggles or pixels into speech sounds, to form words that we understand through hearing them inside our head.
The other development in the ‘hear’ and now is that neuroscientists are able to demonstrate this quite clearly. The technology exists to look inside your head and see which parts of the brain are involved in the reading process. And what it shows is that whether you read aloud or silently, you always activate the part of your brain that produces articulated language. It is as if you are preparing to say the words that you are reading.
But this has been known for far longer than functional MRI scanning has existed. The area that lights up at the front of the brain is known as Broca’s area, after the work of Paul Broca, the French scientist active in the middle of the 19th century.
The other areas that are active are Wernicke’s area, which is involved in the decoding of language, and the angular gyrus, which connects sounds to letters, as well as performing many other functions related to language, memory, numbers and so on.
The last area is at the bottom, known as the visual word form area. It has been researched by many, including Stanislas Dehaene, and is believed to relate the form of words to their sound. What is fascinating is that this area is very active when reading in English, a language that cannot be ‘spoken as it is seen’. For those words that do not sound as they look, the shape and memory of the whole word matter more than the processing of the letters into sounds.
The area lights up
By contrast, in languages where you do pronounce words as you see them, such as Italian, the ‘sound to letter translation’ area is more active. This experiment was conducted by a team including Uta Frith. It, along with the results of other experiments, is described in her book, ‘The Learning Brain’.
So what consequences does this have for language teaching? In my last post, which sprang from an ELTChat summary, I looked at what Krashen has to say about the role of reading in second language learning. What, if anything, does this information about how the brain reads tell us about our learners and their reading? And what about the ‘reading aloud’ taboo?
Over the next few posts I plan to share some ways in which I’ve tried to adjust my approach to reading with different types of learners based on this information. If you’d like to comment or share any of your own experiences, I’d be more than happy to read and, of course, to ‘hear’ them.