Putting The “Neuro” Back Into NLP

by Richard Bolstad

[Photo: NLP Training in Japan - teaching brain regions]

The Use Of Neurology

Increasingly, those of us working with human beings have come to terms with the fact that we are communicating with and through the human nervous system. Of course, what happens between human beings cannot be reduced to neurology, any more than the beauty of a Rembrandt painting can be reduced to the chemistry of oil paints. However, if we want to paint like Rembrandt, a knowledge of that chemistry can be crucial. If we want to understand human communication, a knowledge of how the brain functions (neurology) is similarly crucial. This is the starting point of the discipline called Neuro-Linguistic Programming.

It was also the starting point for most of western psychotherapy. Sigmund Freud’s declared aim was “to furnish a psychology that shall be a natural science: that is to represent psychical processes as quantitatively determined states of specifiable material particles, thus making these processes perspicuous and free from contradiction.” (Freud, 1966).

Everything we experience of the world comes to us through the neurological channels of our sensory systems. The greatest spiritual transcendence and the most tender interpersonal moments are “experienced” (transformed into internal experiences) as images (visual), sounds (auditory), body sensations (kinesthetic), tastes (gustatory), smells (olfactory) and learned symbols such as these words (digital). Those experiences, furthermore, can be re-membered (put together again) by use of the same sensory information. Let’s take a simple example.

Think of a fresh lemon. Imagine one in front of you now, and feel what it feels like as you pick it up. Take a knife and cut a slice off the lemon, and hear the slight sound as the juice squirts out. Smell the lemon as you lift the slice to your mouth and take a bite of the slice. Taste the sharp taste of the fruit.

If you actually imagined doing that, your mouth is now salivating. Why? Because your brain followed your instructions and thought about, saw, heard, felt, smelled and tasted the lemon. By recalling sensory information, you recreated the entire experience of the lemon, so that your body responded to the lemon you created. Your brain treated the imaginary lemon as if it was real, and prepared saliva to digest it. Seeing, hearing, feeling, smelling and tasting are the natural “languages” of your brain. Each of them has a specialised area of the brain which processes that sense. Another NLP term for these senses is “Modalities”. When you use these modalities, you access the same neurological circuits that you use to experience a real lemon. As a result, your brain treats what you’re thinking about as “real”.

Understanding this process immediately illuminates the way in which a number of psychotherapeutic problems occur. The person with Post Traumatic Stress Disorder uses the same process to recreate vivid and terrifying flashbacks to a traumatic event. And knowing how these brain circuits allow them to do that also shows us a number of ways to solve the problem.

Perception Is Not A Direct Process

Perception is a complex process by which we interact with the information delivered from our senses. Biochemist Graham Cairns-Smith points out that there are areas of the neural cortex (the outer layer of the brain) which specialise in information from each of the senses (he lists the modalities as olfactory, gustatory, somatosensory, auditory and visual). The visual cortex, for example, is at the back of the brain. However, there is no direct connection between the sense organ (the retina of the eyes, for example) and the specialised area of cortex which handles that sense. A great deal of reprocessing has to happen at other places before the raw sensory data gets to the areas of the cortex where we can “perceive” it.

Consider the case of vision, for example. Impulses from the retina of the eye go first to the lateral geniculate body (see second diagram below), where they interact with data from a number of other brain systems. The results are then sent on to the visual cortex, where “seeing” is organised. Only 20% of the flow of information into the lateral geniculate body comes from the eyes. Most of the data that will be organised as seeing comes from areas such as the hypothalamus, a mid-brain centre which has a key role in the creation of emotion (Maturana and Varela, 1992, p 162). What we “see” is as much a result of the emotional state we are in as of what is in front of our eyes. In NLP terminology, this understanding is encapsulated in the statement “The map is not the territory”. The map your brain makes of the world is never the same as the real world.

Because the brain is a system with feedback loops, this process goes both ways. What we see is affected by our emotions, and it also shapes those emotions. Depression, anxiety, confusion, and anger are all related to certain “maps” of the world; certain types of perceptual distortion. So are joy, excitement, understanding and love. For example, the person who is depressed often actually takes their visual memories of the day’s experiences and darkens them, creating a gloomy world. Notice what that does. Take a memory of a recent experience you enjoyed, and imagine seeing it dull and grey. Usually, this doesn’t feel as good, so make sure you change it back to colour afterwards.

Colouring The World

To get a sense of how “creative” the perception of sensory information is, consider the example of colour vision. Tiny cells in the retina of the eye, called rods and cones, actually receive the first visual information from the outside world. There are three types of “cones”, each sensitive to light at particular places on the spectrum (the rainbow of colours we can see, ranging from violet through blue, green, yellow and orange to red). When a cone receives light from a part of the spectrum it is sensitive to, it sends a message to the brain. The cone does not know exactly which “colour” it just saw; it only knows whether the light was within its range. The first type of cone picks up light at wavelengths from violet to blue-green, and is most sensitive to violet light. The second type picks up light from violet to yellow, and is most sensitive at green. The third type picks up light from violet to red, and is most sensitive to yellow. The most overlap in the sensitivity of these three types of cone happens in the middle colours (green and yellow), and as a result these colours appear “brighter” than red and blue, when independent tests verify that they are not (Gordon, 1978, p 228).

If the brain only gets information from three overlapping types of cone, how does the brain tell which colour was “actually there”? The answer is that it makes an estimate. In a specific “colour” area of the visual cortex, the brain compares the results from several cones next to each other, taking a sample of the three different kinds, in order to guess which colour was actually present (Cairns-Smith, 1998, p 163-164). The colour scheme that we “see” is a very complex guess. In fact, you’ve probably noticed that colours seem to change when placed next to other colours. A blue that looks quite “pleasant” next to a green may look “strong” when seen next to a red, or vice versa. Placing a dark border around a colour makes it seem less “saturated” or pure (Gordon, 1978, p 228). Furthermore, what colours we see will also be affected by our emotional state. In everyday speech, we talk about “having a blue day” and about “seeing the world through rose tinted glasses”. Emotional information altering the perception of colour is actually fed into the visual system at the lateral geniculate body, as mentioned above.
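To make the estimation idea concrete, here is a minimal toy model in Python. It is a sketch only: the three cone sensitivity curves are invented Gaussian approximations (the peak wavelengths and widths are illustrative, not measured physiology), and the “brain” simply searches for the wavelength that best explains the three responses, much as the colour area compares samples from the three kinds of cone.

```python
import numpy as np

# Toy model: three cone types with invented Gaussian sensitivity curves.
# Peak wavelengths (in nanometres) and width are illustrative only.
PEAKS = {"short": 420.0, "medium": 530.0, "long": 560.0}
WIDTH = 60.0

def cone_response(peak, wavelength):
    """One cone type's response: strong near its peak, fading with distance."""
    return np.exp(-((wavelength - peak) ** 2) / (2 * WIDTH ** 2))

def estimate_wavelength(responses):
    """Guess which wavelength best explains the three cone responses."""
    candidates = np.arange(380, 701)  # the visible spectrum, nm
    errors = [sum((cone_response(PEAKS[c], w) - responses[c]) ** 2
                  for c in PEAKS)
              for w in candidates]
    return int(candidates[int(np.argmin(errors))])

# A blue-green light strikes the retina; the brain only ever receives
# three numbers, yet it recovers a good estimate of the colour.
true_wavelength = 505.0
responses = {cone: cone_response(peak, true_wavelength)
             for cone, peak in PEAKS.items()}
print(estimate_wavelength(responses))  # ~505
```

The point of the sketch is the one made in the text: the colour we “see” is a reconstruction from indirect evidence, not a direct reading of the world.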

The area of the visual cortex which makes final colour decisions is very precisely located. If this area of the brain is damaged in a stroke, then the person will suddenly see everything in black and white (acquired cerebral achromatopsia). At times a person will find that damage results in one side of their vision being coloured and one side being “black and white” (Sacks, 1995, p 152). This phenomenon was first reported in 1888, but between 1899 and 1974 there was no discussion of it in the medical literature. The neurologist Oliver Sacks suggests that this resulted from a cultural discomfort with facts that showed how “manufactured” our vision is.

In 1957, Edwin Land, inventor of the Polaroid instant camera, produced a startling demonstration of the way our brain “makes up” colour schemes. He took a photo of a still life, using a yellow light filter. He then made a black and white transparency of this image. When he shone a yellow light through this transparency, viewers saw an image of the still life, showing only those areas that had emitted yellow light. Next he took a photo of the same still life, using an orange filter. Again he made a black and white transparency, and shone orange light through it. This time, viewers saw all the areas that had emitted orange light. Finally, Land turned on both transparencies at once, shining both yellow and orange light onto the screen. Viewers expected to see a picture in orange and yellow. But what they actually saw was full colour; reds, blues, greens, purples -every colour that was there in the original! The difference between the yellow and orange images had been enough to enable the viewers’ brains to calculate what colours might have been there in the “original scene”. The full colour experience was an illusion; but it is the same illusion that our brain performs at every moment (Sacks, 1995, p 156). That is to say, the colours you are seeing right now are not the colours out there in the world; they are the colours your brain makes up.

While we are on the subject of colour, it’s worth noting how fully our social and psychological experience shapes our colour vision. Dr T.F. Pettigrew and colleagues in South Africa were studying “dichoptic vision”. Their subjects wore a mask through which one picture could be shown to the right eye and another to the left eye. A picture of a white face was sent to one eye, and that of a black face to the other eye, at the same time. Both English-speaking South Africans and “coloured” South Africans reported seeing a face. But the Afrikaners tested could not see the face. They saw nothing! At a level deeper than the conscious mind, they could not fuse a black face and a white face (Pettigrew et alia, 1976).

[Figure: brain diagram showing the visual pathway and the lateral geniculate body]

Modalities And Submodalities

Inside the visual cortex, there are several areas which process “qualities” such as colour. In NLP these qualities are known as visual “submodalities” (because they are produced in small sub-sections of the visual modality). Colour is one of the first fourteen visual submodalities listed by Richard Bandler (1985, p 24). The others are distance, depth, duration, clarity, contrast, scope, movement, speed, hue, transparency, aspect ratio, orientation, and foreground/background. Colour also appears in a list described by psychology pioneer William James as early as 1890:

“The first group of the rather long series of queries related to the illumination, definition and colouring of the mental image, and were framed thus: Before addressing yourself to any of the questions on the opposite page, think of some definite object -suppose it is your breakfast table as you sat down to it this morning- and consider carefully the picture that rises before your mind’s eye.

1. Illumination.- Is the image dim or fairly clear? Is its brightness comparable to that of the actual scene?
2. Definition.- Are all the objects pretty well defined at the same time, or is the place of sharpest definition at any one moment more contracted than it is in a real scene?
3. Colouring.- Are the colours of the china, of the toast, bread-crust, mustard, meat, parsley, or whatever may have been on the table, quite distinct and natural?” (James, 1950, Volume 2, p 51)

Since 1950, another such list has been constructed by research on the physiology of vision. Within the visual cortex, certain areas of cells are specialised to respond to specific visual structures. The function of such cells can be found in two ways. Firstly, in a rather inhumane way, their function can be identified by connecting an electrode to the cells in a monkey’s brain and finding out which visual objects result in those cells being activated. Secondly, the cells’ function can be identified by studying people who have accidentally suffered damage to them. When a group of such cells are damaged, a very specific visual problem results.

For example, there are cells which respond only to the submodality of motion. These cells were found in the prestriate visual cortex of monkeys’ brains in the early 1970s. When the monkey watched a moving object, the motion cells were activated as soon as movement began. In 1983, the first clinical cases were found of people with these specific cells damaged, resulting in central motion blindness (akinetopsia). A person with akinetopsia can see a car while it is still, but once the car moves, they see it disappear and reappear somewhere else. They see life as a series of still photos (Sacks, 1995, p 181).

Neurologically speaking, size, motion and colour are specialised functions, deserving of the name “submodalities”. Many other such functions have been neurologically identified, including brightness, orientation (the tilt of the picture), and binocular disparity (depth and distance).

The first research on the neurological basis of visual submodalities was done by David Hubel and Torsten Wiesel in the 1950s and 1960s (Kalat, 1988, p 191-194). They showed that even these core submodality distinctions are a learned result of interaction with the environment. We are not born able to discriminate colour, for example. If we lived in a world with no blues, it is possible that the ability to “see” blue would not develop. If this seems unbelievable, consider the following experiment on the submodality of orientation, done by Colin Blakemore and Grant Cooper (1970).

Newborn cats were brought up in an environment where they could only see horizontal lines. The area of the cortex which discriminates vertical lines simply did not develop in these cats, as demonstrated by checking with electrodes, and by the cats’ tendency to walk straight into chair legs. Similarly, cats raised where they could only see vertical lines were unable to see horizontal objects, and would walk straight into a horizontal bar. These inabilities were still present months later, suggesting that a critical phase for the development of those particular areas of the brain may have passed.

Higher Levels Of Analysis

The story of seeing is not yet complete with submodalities, however. From the visual cortex, messages go on to areas where even more complex meta-analysis occurs, in the temporal cortex and parietal cortex.

In the temporal cortex there are clusters of cells which respond only to images of a face, and other cells which respond only to images of a hand. In fact, there seem to be cells here which store 3-D images of these and other common shapes, so that those shapes can be “recognised” from any angle. Damage to these areas does not cause “blindness”, but it does cause an inability to recognise the objects presented (Kalat, 1988, p 196-197). There is a specific area which puts names to faces, and damage here means that, while a photo of the person’s partner may look familiar, the person is unable to name them. There is also an area of the temporal cortex which creates a sense of “familiarity” or “strangeness”. When a person is looking at a picture, and has the “familiarity” area stimulated, they will report that they have suddenly “understood” or reinterpreted the experience. When they have the “strangeness” area stimulated, they report that something puzzling has occurred to them about the image. If you then explain to them “rationally” that the object is no more or less familiar than it was, they will argue for their new way of experiencing it. They will tell you that it really has changed! It feels changed! It looks different.

The analysis done in the parietal cortex is even more curious. This area seems to decide whether what is seen is worth paying conscious attention to. For example, there are cells here which assess whether an apparent movement in the visual image is a result of the eyes themselves moving, or a result of the object moving. If it decides that the “movement” was just a result of your eyes moving, it ignores the movement (like the electronic image stabiliser on a video camera). Occasionally, this malfunctions; most people have had the experience of scanning their eyes quickly across a still scene and then wondering if something moved or if it was just their own eye scanning.

Interestingly, if one of these meta-analysis areas is stimulated electrically, the person will report that there have been changes in their basic submodalities. Researchers have found that if they stimulate the “familiarity” area, not only do people report that they get the feeling of familiarity, but they also see objects coming nearer or receding and other changes in the basic level submodalities (Cairns-Smith, 1998, p 168).

This relationship between submodalities and the “feeling” of an experience is the basis of some important NLP processes, called submodality shifts. If we ask someone to deliberately alter the submodalities of something they are thinking about, for example by moving the imagined picture away from them and brightening it up, they may suddenly get the “feeling” that their response to that thing has changed. And in fact, it will have changed. Remember the lemon I had you imagine at the start of this chapter. As you smell the juiciness of it, imagine it bigger and brighter and the smell getting stronger. Changing these submodalities changes your response right down to the body level.

Remembered and Constructed Images Use The Same Pathways As Current Images

So far, we have talked about research on how people “see” what is actually in front of their eyes. We have shown that raw data from the eyes is relayed through the lateral geniculate body (where it is combined with information from other brain centers including emotional centers), and through the occipital visual cortex (where the submodalities are created in specific areas). From here, messages go on to the temporal and parietal lobes where more complex analysis is done. One more key point explains how this comes to be so significant for personal change and psychotherapy.

Edoardo Bisiach (1978) is an Italian researcher who studied people with localised damage to a specific area of the posterior parietal cortex associated with “paying attention visually”. When this area of the cortex is damaged on one side, a very interesting result occurs. The person will fail to pay attention to objects seen on the affected side of their visual field. This becomes obvious if you ask them to describe all the objects in the room they are sitting in. If the affected side is the left, for example, when they look across the room, they will describe to you all objects on the right of the room, but ignore everything on the left. They will be able to confirm that those objects are there on the left, if asked about them, but will otherwise not report them (Kalat, 1988, p 197; Miller, 1995, p 33-34). Bisiach quickly discovered that this damage affected more than the person’s current perception. For example, he asked one patient to imagine the view of the Piazza del Duomo in Milan, a sight this man had seen every day for some years before his illness. Bisiach had him imagine standing on the Cathedral steps and got him to describe everything that could be seen looking down from there. The man described only one half of what could be seen, while insisting that his recollection was complete. Bisiach then had him imagine the view from the opposite side of the piazza. The man then fluently reported the other half of the details.

The man’s image of this remembered scene clearly used the same neural pathways as were used when he looked out at Dr Bisiach sitting across the room. Because those pathways were damaged, his remembered images were altered in the same way as any current image. In the same way, the depressed person can be asked to remember an enjoyable event from a time before she or he was depressed. However, the visual memory of the events is run through the current state of the person’s brain, and is distorted just as their current experience is distorted.

The successful artist Jonathan I. suffered damage to his colour processing areas at age 65. After this a field of flowers appeared to him as “an unappealing assortment of greys”. Worse, however, was his discovery that when he imagined or remembered flowers, these images were also only grey (Hoffman, 1998, p 108). If we change the functioning of the system for processing visual information, both current and remembered images will change.

Cross-referencing of Modalities

Submodalities occur neurologically in every sense. For example, different kinesthetic receptors and different brain processing occur for pain, temperature, pressure, balance, vibration, movement of the skin, and movement of the skin hairs (Kalat, 1988, p 154-157).

Even in what NLP has called the auditory digital sense modality (language), there are structures similar to submodalities. For example, the class of linguistic structures called presuppositions, conjunctions, helper verbs, quantifiers and tense and number endings (words such as “and”, “but”, “if”, “not”, “being”) are stored separately from nouns, which are stored separately from verbs. Broca’s aphasia (Kalat, 1988, p 134) is a condition where specific brain damage results in an ability to talk, but without the ability to use the first class of words (presuppositions etc). The person with this damage will be able to read “Two bee oar knot two bee” but unable to read the identical sounding “To be or not to be”. If the person speaks sign language, their ability to make hand signs for these words will be similarly impaired.

I have talked as if each modality could be considered on its own, separate from the other senses. The opposite is true. Changes in the visual submodalities are inseparable from changes in other modalities, and vice versa.

When we change a person’s experience in a visual submodality, submodalities in all the other senses are also changed. This process is known technically as “synesthesia”. Office workers in a room repainted blue will complain of the cold, even though the thermostat has not been touched. When the room is repainted yellow, they will believe it has warmed up, and will not complain even when the thermostat is actually set lower! (Podolsky, 1938). A very thorough review of such interrelationships was made by NLP developer David Gordon (1978, p 213-261). These cross-modality responses are neurologically based, and not simply a result of conscious belief patterns. Sounds of about 80 decibels produce a 37% decrease in stomach contractions, without any belief that this will happen – a response similar to the result of “fear”, and likely to be perceived as such, as the writers of scores for thriller movies know (Smith and Laird, 1930). These cross-modality changes generally occur out of conscious awareness and control, just as submodality shifts within a modality do.

Synesthesias as the Basis of Metaphor

In the brain, the area which processes the colour submodality is right next to an area that represents visual (ie written) numbers (Ramachandran, 2004, p 65). Colour-number synesthesias are the most common of the “abnormal” synesthesias studied by neurologists. In these “disorders” the person uncontrollably and automatically sees each numeral as a different colour. The neurological closeness of the two areas in the brain obviously makes it easier for a person to connect these two very specific types of information. But synesthesia has wider implications. Ramachandran explains, “One of the odd facts about [abnormal] synesthesia, which has been known, and ignored, for a long time, is that it is seven times more common among artists, poets, novelists – in other words, flaky types!… What artists, poets and novelists all have in common is their skill at forming metaphors, linking seemingly unrelated concepts in their brain, as when Macbeth said “Out, out brief candle,” talking about life.” (Ramachandran, 2004, p 71).

By identifying which synesthesias are most common, we can trace where in the brain the most common connections are occurring. This leads us to the angular gyrus, strategically located at the crossroads between the parietal lobe (kinesthetic cortex), the temporal lobe (auditory cortex) and the occipital lobe (visual cortex). The angular gyrus and this junction have been getting progressively larger from simple mammals to monkeys and then to great apes. With the development of human beings, Ramachandran says, the change is “an almost explosive development” (Ramachandran, 2004, p 74).

Imagine, says Vilayanur Ramachandran, Professor of Psychology and Neuroscience at the University of California, that you are looking at two simple patterns, one with spiked edges and one with rounded edges. You are told these two figures are letters in the Martian alphabet. One of them is called Kiki, and one is called Booba. Which is which?

You guessed it! Between 95% and 98% of respondents, whatever their native language, say that the rounded figure is Booba and the spiked figure is Kiki (Ramachandran, 2004, p 73). If the Martian story were true, it would tell us that something is very similar between Martian brains and all human brains. We already know of the phenomenon behind this result in NLP. The correlation between an image and a sound is what we call a synesthesia. The specific qualities of the images that evoke the correlation with different sounds are what we call submodalities.

In another (earth-based) study, a list of several dozen words from a South American tribal language was read aloud to English speakers (for whom the words are incomprehensible). Half the words are names of species of fish; half are names of species of birds. The English speakers tend to categorise the words correctly as fish or birds, well above the level we’d expect from statistical chance (Berlin, 1994).

Synesthesia is a very specific example of what cognitive theorists Fauconnier and Turner (2002) call blending. In the example above, you immediately blended the visual quality of roundedness with the auditory sound of “Booba”. That blend has the isomorphic structure of a simple metaphor, but the choice of characteristics to blend is not accidental. It happens not by chance and not by conscious design, but as a result of specific structuring in the brain. Knowing the neurological structure of these events may help us design better metaphors. Our language reflects this type of specific structuring already, which is why we can easily speak of “soft music” (using a kinesthetic metaphor to describe a sound) but cannot so easily speak of a “loud texture” (using an auditory metaphor to describe a kinesthetic sensation).

Ramachandran explains “We have tried the booba/kiki experiment on patients who have a very small lesion in the angular gyrus of the left hemisphere. Unlike you and me, they make random shape-sound associations.” He has also tested a small number of these patients on their ability to understand metaphorical statements and found it equally absent. One, for example, “got fourteen out of fifteen proverbs wrong – usually interpreting them literally rather than metaphorically.” (Ramachandran, 2004, p 140). The temporal-parietal-occipital junction and the angular gyrus are the source of this ability to create abstract concepts by combining different inputs; an ability which creates art, poetry and metaphor. And here we have the missing link between the most abstract NLP processes (metaphor) and the most intricate (submodality shifts). Metaphor and submodality shifts occur through the same precise area of brain tissue, and have a similar neurological structure.

Natural Submodality Shifts

In the years 2007-2010, new research began to reveal ever more ways in which our perception is shaped by our internal state, goals, and beliefs. Psychologists Dennis Proffitt and Jessica Witt demonstrated that when people are carrying a heavy backpack or have just done some running, their estimate of the slope of a hill they are about to go up increases by more than a third. Similarly, fearful subjects on a skateboard at the top of a slope estimate the steepness of the hill as much more dangerous than subjects standing on a stable box. They then showed that after successfully kicking goals, footballers estimate the goal as wider, whereas after failing they estimated it as narrower. Successful baseball hitters estimate the ball as larger and unsuccessful players estimate the ball as smaller (Witt and Proffitt, 2007, 2008).

So far, this research suggests that we adjust the submodalities of what we see externally, based on our experiences, thereby explaining those experiences to ourselves. At New York University, psychologists Emily Balcetis and David Dunning asked subjects to estimate how far away a bottle of water was. Those who were thirsty estimated it to be much closer than those who were not. In this case, wanting the water made it seem closer. People tossing a bean bag at a $25 gift card thought it was closer than it was, and consequently they threw the bean bag an average of nine inches (23 cm) short of the real distance. Those throwing a bean bag at a gift card worth nothing actually overshot the card by an average of an inch (2.5 cm) (Balcetis and Dunning, 2010). In this case, wanting something more makes it seem closer, regardless of whether you have had any practice throwing the bean bag before. This confirms the principle behind the NLP “Visual Swish”, which is that closeness and desirability are strongly associated in our internal experience.

Of course, NLP assumes that this happens not only with actual objects, but also with imagined objects. Stanford University psychologists Hal Ersner-Hershfield and Brian Knutson asked people to imagine their future self, and to consider how similar it seemed visually to their present self. Both images are imaginary images (their present and future self). The researchers discovered that those who imagined their future self as looking visually more similar to their present self had previously saved up more money and assets for their retirement. As they asked people to think of each self, they scanned their brains using functional MRI. For some time brain researchers have known that when a person thinks of themselves, a precise area called the rostral anterior cingulate cortex becomes more active. When thinking about other people, other related brain areas become active instead. The researchers observed people thinking about their future self and were able to predict which subjects would be willing to put off rewards for the future, by noting whether the “self” area of the brain was activated or another area. That is, when people think of their future self as being the same as their present self, the same area of the brain is activated and they feel more at ease “giving money to that future self” by saving.

In these cases, imagining that the future self is similar in appearance to your imagined present self is correlated with a change in level of motivation. The scientists suggested that a similar change in the visual image could be used to deliberately produce a more motivated state of mind to encourage future-oriented action (Ersner-Hershfield et alia, 2009). By creating an image of a future self that is closer and more like your current self, you could increase your motivation to do things for your future benefit.

Research is increasingly revealing that what we imagine affects how our brain actually perceives the world outside. Psychologists Joel Pearson, Colin Clifford and Frank Tong at Vanderbilt University showed that having subjects imagine a certain pattern of striped lines meant that, in a real-life image which would usually be recognised as having two conflicting sets of lines, they saw only the pattern they had imagined. Their imagination shaped what they saw, just as NLP practitioners would have predicted (Pearson, Clifford and Tong, 2008).

Sensory Accessing and Representational Cues

As a person goes through their daily activities, information is processed in all the sensory modalities, continuously. However, the person’s conscious attention tends to be on one modality at a time. It is clear that some people have a strong preference for “thinking” (to use the term generically) in one sensory modality or another.

As early as 1890, William James, the founder of psychology, defined four key types of “imagination” based on this fact. He says “In some individuals the habitual “thought stuff”, if one may so call it, is visual; in others it is auditory, articulatory [to use an NLP term, auditory digital], or motor [kinesthetic, in NLP terms]; in most, perhaps, it is evenly mixed. The auditory type… appears to be rarer than the visual. Persons of this type imagine what they think of in the language of sound. In order to remember a lesson they impress upon their mind, not the look of the page, but the sound of the words…. The motor type remains -perhaps the most interesting of all, and certainly the one of which least is known. Persons who belong to this type make use, in memory, reasoning, and all their intellectual operations, of images derived from movement…. There are persons who remember a drawing better when they have followed its outlines with their finger.” (James, 1950, Volume 2, p 58-61)

Research identifying the neurological bases for these different types of “thought” began to emerge in the mid twentieth century. Much of it was based on the discovery that damage to specific areas of the brain caused specific sensory problems. A. Luria identified the separate areas associated with vision, hearing, sensory-motor activity, and speech (the latter located in the dominant hemisphere of the brain) as early as 1966.

By the time NLP emerged in the 1970s, then, researchers already understood that each sensory system had a specialised brain area, and that people had preferences for using particular sensory systems. In their original 1980 presentation of NLP, Dilts, Grinder, Bandler and DeLozier (1980, p 17) point out that all human experience can be coded as a combination of internal and external vision, audition, kinesthesis and olfaction/gustation. The combination of these senses at any time (VAKO/G) is called by them a 4-tuple. Kinesthetic external is referred to as tactile (somatosensory touch sensations) and kinesthetic internal as visceral (emotional and proprioceptive).

The developers of NLP noticed that we also process information in words and that words too have a specific brain system specialised to process them, as if they were a sensory system. They described this verbal type of information as “auditory digital”, distinguishing it from the auditory input we get, for example, in listening to music or to the sound of the wind. In thinking in words (talking to ourselves) we pay attention specifically to the “meaning” coded into each specific word, rather than to the music of our voice. “The digital portions of our communications belong to a class of experience that we refer to as “secondary experience”. Secondary experience is composed of the representations that we use to code our primary experience -secondary experience (such as words and symbols) are only meaningful in terms of the primary sensory representations that they anchor for us.” (Dilts et alia, 1980, p 75). When we talk to you in words about “music” for example, what we say only has meaning depending on your ability to be triggered by the word music into seeing, hearing or feeling actual sensory representations of an experience of music.

Words (auditory digital) are therefore a meta-sensory system. Apart from words, there are other digital meta-representation systems. One is the visual “digital system” used by many scientists, by composers such as Mozart, and by computer programmers. This system too has a specific area of the brain which manages it (Bolstad and Hamblett, “Visual Digital”, 1999). In visual digital thinking, visual images or symbols take the place of words. Hence, Einstein says (quoted in Dilts, 1994-5, Volume II, p 48-49) “The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be “voluntarily” reproduced and combined.” Digital senses do not just meta-comment on “stable” primary representations of course. They actually alter those representations. By learning the word “foot” and the word “leg”, you actually perceive those areas of your body as visually and kinesthetically distinct units, for example. This distinction does not occur in the New Zealand Maori language, where the leg from the thigh down plus the foot is called the wae-wae, and is considered one unit.

Robert Dilts (1983, section 3, p 1-29) showed that different brain wave (EEG) patterns were associated with visual, auditory, tactile and visceral thought. The developers of NLP claimed to have identified a number of more easily observed cues which let us know which sensory system a person is using (or “accessing”) at any given time. Amongst these cues are a series of largely unconscious eye movements which people exhibit while thinking (1980, p 81). These “eye movement accessing cues” have become the most widely discussed of all the NLP discoveries. Outside of NLP, evidence that eye movements were correlated with the use of different areas of the brain emerged in the 1960s (amongst the earliest being the study by M. Day, 1964). William James referred to the fact that people’s eyes move up and back as they visualise. At one stage he quotes (Volume 2, p 50) Fechner’s “Psychophysique”, 1860, Chapter XLIV: “In imagining, the attention feels as if drawn backwards towards the brain”.

The standard NLP diagram of accessing cues (below) shows that visual thinking draws the eyes up, auditory to the sides and kinesthetic down. Note that auditory digital is placed down on the left side (suggesting that all the accessing cues on that side may correspond to the dominant hemisphere, where verbal abilities are known to be processed). In left-handed subjects, this eye pattern is reversed about 50% of the time.

Eye movements are clues to the area of the brain from which a person is getting (accessing) information. A second aspect of thinking is which sensory modality they then “process” or “re-present” that information in. Accessing and representing are not always done in the same sensory system. A person may look at a beautiful painting (visual accessing) and think about how it feels to them (kinesthetic representation). The sensory system in which a person represents their experience can be identified by the words (predicates) they use to describe their subject. For example, someone might say “I see what you mean” (visual), “I’ve tuned in to you” (auditory), or “Now I grasp that” (kinesthetic). The person who looks at the beautiful painting and represents it to themselves kinesthetically might well say “That painting feels so warm. The colours just flow across it.” They experience the painting, in this case, as temperature and movement.
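The predicate idea lends itself to a simple illustration in code. The following Python sketch is hypothetical: the word lists are small samples chosen to cover the three examples above, not an authoritative NLP predicate inventory, and a real analysis would of course need stemming and context.

```python
# Toy predicate-spotting sketch. The vocabularies are illustrative samples,
# not a standard NLP predicate list.
PREDICATES = {
    "visual": {"see", "look", "picture", "clear", "bright", "show"},
    "auditory": {"hear", "sound", "tune", "tuned", "listen", "loud"},
    "kinesthetic": {"feel", "feels", "grasp", "warm", "solid", "flow"},
}

def guess_system(sentence):
    """Guess which representational system a sentence's predicates suggest."""
    words = sentence.lower().split()
    counts = {system: sum(word in vocabulary for word in words)
              for system, vocabulary in PREDICATES.items()}
    return max(counts, key=counts.get)

print(guess_system("I see what you mean"))   # visual
print(guess_system("I've tuned in to you"))  # auditory
print(guess_system("Now I grasp that"))      # kinesthetic
```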

[Figure: the standard NLP eye accessing cues diagram]

Research On The Eye Movement Phenomenon

Everything in the brain and nervous system works both ways. If place “A” affects place “B”, then place “B” affects place “A”. We saw previously that if changing submodalities affects how familiar a picture feels, then changing the feeling of familiarity will also change the submodalities of your image.

In the same way, if thinking visually causes your eyes to be drawn up more, then placing the eyes up more will help you to visualise. Specifically, looking up to the left (for most people) will help them recall images they have seen before. Dr F. Loiselle at the University of Moncton in New Brunswick, Canada (1985) tested this. He selected 44 average spellers, as determined by their pretest on memorising nonsense words. Instructions in the experiment, where the 44 were required to memorise another set of nonsense words, were given on a computer screen. The 44 were divided into four subgroups for the experiment.

Group One were told to visualise each word in the test, while looking up to the left.
Group Two were told to visualise each word while looking down to the right.
Group Three were told to visualise each word (no reference to eye position).
Group Four were simply told to study the word in order to learn it.

The results on testing immediately afterwards were that Group One (who did actually look up left more than the others, but took the same amount of time) increased their success in spelling by 25%, Group Two worsened their spelling by 15%, Group Three increased their success by 10%, and Group Four scored the same as previously. This strongly suggests that looking up left (Visual Recall in NLP terms) enhances the recall of words for spelling, and is more than twice as effective as simply teaching students to picture the words. Furthermore, looking down right (Kinesthetic in NLP terms) damages the ability to visualise the words. Interestingly, in a final test some time later (testing retention), the scores of Group One remained constant, while the scores of the control group, Group Four, plummeted a further 15%, a drop which was consistent with standard learning studies. The resultant difference in memory of the words for these two groups was 61%.

Thomas Malloy at the University of Utah Department of Psychology completed a study with three groups of spellers, again pretested to find average spellers. One group were taught the NLP “spelling strategy” of looking up and to the left into Visual Recall, one group were taught a strategy of sounding out by phonetics and auditory rules, and one were given no new information. In this study the tests involved actual words. Again, the visual recall spellers improved 25%, and had near 100% retention one week later. The group taught the auditory strategies improved 15% but this score dropped 5% in the following week. The control group showed no improvement.

These studies support the NLP Spelling Strategy specifically, and the NLP notion of Eye Accessing Cues, in general (reported more fully in Dilts and Epstein, 1995). There are many other uses to which we can put this knowledge. Counsellors are frequently aiming to have their clients access a particular area of the brain. For example, a counsellor may ask “How does it feel when you imagine doing that?”. Such an instruction will clearly be more effective if the person is asked to look down right before answering. The English phrase “it’s down right obvious” may have its origins in this kinesthetic feeling of certainty.

The claim that which sensory system you talk in makes a difference to your results with specific clients was tested by Michael Yapko. He worked with 30 graduate students in counselling, and had them listen to three separate taped trance inductions. Each induction used language from one of the three main sensory systems (visual, auditory and kinesthetic). Subjects were assessed beforehand to identify their preference for words from these sensory systems. After each induction, their depth of trance was measured by electromyograph and by asking them how relaxed they felt. On both measures, subjects achieved greater relaxation when their preferred sensory system was used (Yapko, 1981).

Strategies

To achieve any result, such as relaxation, each of us has a preferred sequence of sensory “representations” which we go through. For some people, imagining a beautiful scene is part of their most effective relaxation strategy. For others, the strategy that works best is to listen to soothing music, and for others simply to pay attention to their breathing slowing down as the feeling of comfort increases.

The concept of Strategies was defined in the book Neuro-Linguistic Programming Volume 1 (Dilts et alia, 1980, p 17). Here the developers of NLP say “The basic elements from which the patterns of human behaviour are formed are the perceptual systems through which the members of the species operate on their environment: vision (sight), audition (hearing), kinesthesis (body sensations) and olfaction/gustation (smell/taste). We postulate that all of our ongoing experience can usefully be coded as consisting of some combination of these sensory classes.” Thus, human experience is described in NLP as an ongoing sequence of internal representations in the sensory systems.

These senses were written in NLP notation as V (visual), A (auditory), K (kinesthetic), O (olfactory) and G (gustatory). To be more precise, the visual sense included visual recall, where I remember an image as I have seen it before through my eyes (Vr); visual construct, where I make up an image I’ve never seen before (Vc); and visual external, where I look out at something in the real world (Ve). So if I look up and see a blue sky, and then remember being at the beach, and then feel good, the notation would go: Ve > Vr > K. Notice that, at each step, I did have all my senses functioning (I could still feel my body while I looked up), but my attention shifted from sense to sense in a sequence. The digital senses (thinking in symbols such as words) have also been incorporated into this NLP strategy notation, so that we can describe one of the common strategies people use to create a state of depression as Ki > Ad > Ki > Ad. (Feel some uncomfortable body sensations; tell themselves they should feel better; check how they feel now, having told themselves off; tell themselves off for feeling that way, and repeat ad nauseam!)
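One way to make this notation concrete is to treat a strategy as a simple sequence of (modality, mode) steps. The sketch below assumes nothing beyond the notation just described; note that where the text writes the final step of the beach example simply as K, the sketch marks it Ki (an internal feeling) for uniformity.

```python
# Strategies as sequences of (modality, mode) steps.
# Modalities: V, A, K, O, G. Modes: 'e' external, 'r' recall,
# 'c' construct, 'i' internal, 'd' digital.
BEACH_MEMORY = [("V", "e"), ("V", "r"), ("K", "i")]
DEPRESSION = [("K", "i"), ("A", "d"), ("K", "i"), ("A", "d")]

def notation(strategy):
    """Render a strategy in the arrow notation used in the text."""
    return " > ".join(modality + mode for modality, mode in strategy)

print(notation(BEACH_MEMORY))  # Ve > Vr > Ki
print(notation(DEPRESSION))    # Ki > Ad > Ki > Ad
```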

The TOTE Model

The developers of NLP used the T.O.T.E. model to further explain how we sequence sensory representations. The “TOTE” was developed by neurology researchers George Miller, Eugene Galanter and Karl Pribram (1960), as a model to explain how complex behaviour occurred. Ivan Pavlov’s original studies had shown that simple behaviours can be produced by the stimulus-response cycle. When Pavlov’s dogs heard the tuning fork ring (a stimulus; or in NLP terms an “anchor”), they salivated (response). But there is more to dog behaviour than stimulus-response.

For example, if a dog sees an intruder at the gate of its section (stimulus/anchor), it may bark (response). However, it doesn’t go on barking forever. It actually checks to see if the intruder has run away. If the intruder has run away, the dog stops performing the barking operation and goes back to its kennel. If the intruder is still there, the dog may continue with that strategy, or move on to another response, such as biting the intruder. Miller, Galanter and Pribram felt that this type of sequencing was inadequately explained in Pavlov’s simple stimulus-response model. In their model, the first stimulus (seeing the intruder) is the Trigger (the first T in the “TOTE”; Pavlov called this the “stimulus”, and in NLP we also call this an “anchor”) for the dog’s “scaring-intruders-away” strategy. The barking itself is the Operation (O). Checking to see if the intruder is gone yet is the Test (second T). Going back to the kennel is the Exit from the strategy (E). This might be written as Ve > Ke > Ve/Vc > Ke. Notice that the checking stage (Test) is done by comparing the result of the operation (what the dog can see after barking) with the result that was desired (what the dog imagines seeing - a person running away). In the notation, comparison is written using the slash key “/”.

Let’s take another example. When I hear some music on the radio that I really like (trigger or anchor), I reach over and turn up the radio (operation). Once it sounds as loud as I enjoy it sounding (test), I sit back and listen (exit). The strategy, including the end piece where I listen (another whole strategy really) is Ae > Ke > Ae/Ar > Ke > Ae.
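Because the TOTE is essentially a feedback loop, it maps naturally onto a loop in code. Here is a minimal sketch of the radio example under the assumptions in the text; the numeric volume levels are invented for illustration.

```python
# A TOTE as a feedback loop: Trigger starts the strategy, Operate acts,
# Test compares the current result with the desired one, Exit ends it.
def run_tote(trigger, operate, test, exit_action):
    if trigger():           # Trigger (Ae): hear music I really like
        while not test():   # Test (Ae/Ar): does it sound loud enough yet?
            operate()       # Operate (Ke): turn the radio up a little
        exit_action()       # Exit (Ke > Ae): sit back and listen

volume = {"level": 2}       # invented starting volume
DESIRED_LEVEL = 7           # invented "as loud as I enjoy it" level

run_tote(
    trigger=lambda: True,
    operate=lambda: volume.update(level=volume["level"] + 1),
    test=lambda: volume["level"] >= DESIRED_LEVEL,
    exit_action=lambda: print("Listening at volume", volume["level"]),
)
```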

To revisit the strategy for depression mentioned above, we can now diagram it as Ki > Ad > Ki/Kc > Ad. The first Ki is the trigger, stimulus or anchor which starts the strategy. The person feels a slightly uncomfortable feeling in their body. The next step, the Ad, is where they talk to themselves and tell themselves off for feeling that way. Next, they compare the feeling they get internally now (after telling themselves off) with the feeling they got before (Ki/Kc). Noticing that it feels worse, they tell themselves off some more (the final exit Ad). The feeling of depression can be thought of as the result of repeatedly running this strategy, called “ruminating” by researchers into the problem (Seligman, 1997, p 82-83).

Once we understand that every result a person achieves is a result of a strategy which begins with some trigger and leads them to act and test that action, then we have a number of new choices for changing the way they run their strategy and the results they get. We will discuss these later in the book.

Meta-levels in Strategies

Miller, Galanter and Pribram (1960) had recognised that the simple stimulus-response model of Pavlov could not account for the complexity of brain activity. Of course, neither can their more complex TOTE model. Any map is an inadequate description of the real territory. The TOTE model suggests that each action we take is a result of an orderly sequence A-B-C-D. In fact, as we go to run such a “strategy”, we also respond to that strategy with other strategies; to use another NLP term, we go “meta” (above or beyond) our original strategy.

The developers of NLP noted that “A meta response is defined as a response about the step before it, rather than a continuation or reversal of the representation. These responses are more abstracted and disassociated from the representations preceding them. Getting feelings about the image (feeling that something may have been left out of the picture, for instance) would constitute a meta response in our example.” (Dilts et alia, 1980, p 90). Michael Hall has pointed out that such responses could be more usefully diagrammed using a second dimension (Hall, 1995, p 57) for example:

     Ki

Ve > Vc > Vc

This emphasises that the TOTE model is only a model. Real neurological processes are more network-like (O’Connor and Van der Horst, 1994). Connections are being continuously made across levels, adding “meaning” to experiences. The advantage of the TOTE model is merely that it enables us to discuss the thought process in such a way as to make sense of it and enable someone to change it.

States and Strategies

The NLP term “state” is defined by O’Connor and Seymour (1990, p 232) as “How you feel, your mood. The sum total of all neurological and physical processes within an individual at any moment in time. The state we are in affects our capabilities and interpretation of experience”. Many new NLP Practitioners assume that an emotional state is a purely kinesthetic experience. A simple experiment demonstrates why this is not true. We can inject people with noradrenaline and their kinesthetic sensations will become aroused (their heart will beat faster etc). However, the emotional state they enter will vary depending on a number of other factors in their environment. They may, for example, become “angry”, “frightened” or “euphoric”. It depends on their other primary representations and on their meta-representations -what they tell themselves is happening, for example (Schachter and Singer, 1962). The same kinesthetics do not always result in the same state!

Robert Dilts suggests that a person’s state is a result of the interplay between the primary accessing, secondary representational systems, and other brain systems (1983, Section 1, p 60-69, Section 2, p 39-52, Section 3, p 12 and 49-51). Older theories assumed that this interplay must occur in a particular place in the brain; a sort of control centre for “states”. It was clear by the time of Dilts’ writing that this was not true. A state (such as a certain quality of happiness, curiosity or anxiety) is generated throughout the entire brain, and even removal of large areas of the brain will not prevent the state from being regenerated. The state does involve a chemical basis (neuro-chemicals such as noradrenaline, mentioned above) and this specific chemical mix exists throughout the brain (and body) as we experience a particular state.

Ian Marshall (1989) provides an update of this idea based on the quantum physics of what are called “Bose-Einstein condensates”. The simplest way to understand this idea is to think of an ordinary electric light, which can light up your room, and a laser, which with the same amount of electricity can beam to the moon or burn through solid objects. The difference is that the individual light waves coming off a normal light are organised, in a laser, into a coherent beam. They all move at the same wavelength in the same direction. It seems that states in the brain are a result of a similar process: protein molecules all across the brain vibrate at the same speed and in the same way. This forms what is called a Bose-Einstein condensate (a whole area of tissue which behaves according to quantum principles; see Bolstad, 1996). This vibration results in a coherent state emerging out of the thousands of different impulses processed by the brain at any given time. Instead of being simultaneously aware that your knee needs scratching, the sun is a little bright, the word your friend just said is the same one your mother used to say, the air smells of cinnamon and so on (like the electric light scattering everywhere), you become aware of a “state”. This “state” summarises everything, ready for one basic decision instead of thousands.

States, as Dilts originally hypothesised, are still best considered as “meta” to the representational systems. They are vast, brain-wide commentaries on the entire set of representations and physiological responses present. Our states meta-comment on and alter the representations (from the primary senses as well as from the digital senses) “below them”. For example, when a person is angry, they may actually be physically unable to hear their partner or spouse telling them how much they love them. The interference from the state reduces the volume of the auditory external input. This often results in a completely different strategy being run! Put another way, the “state” determines which strategies we find easy to run and which we are unable to run well.

States That Regulate States

Psychotherapist Virginia Satir noted that times when a person feels sadness, frustration, fear and loneliness are fairly predictable consequences of being human. In most cases, what creates serious problems is not so much the fact that people enter such states. What creates disturbance is how people feel about feeling these states. Satir says “In other words, low self-worth has to do with what the individual communicates to himself about such feelings and the need to conceal rather than acknowledge them.” (Satir and Baldwin, 1983, p 195). The person with high self esteem may feel sad when someone dies, but they also feel acceptance and even esteem for their sadness. The person with low self esteem may feel afraid or ashamed of their sadness.

Such “states about states” are generated by accessing one neural network (eg the network generating the state of acceptance) and “applying it” to the functioning of another neural network (eg the network generating the state of sadness). The result is a neural network which involves the interaction of two previous networks. Dr Michael Hall calls the resulting combinations “meta-states” (Hall, 1995). Our ability to generate meta-states gives richness to our emotional life. Feeling hurt when someone doesn’t want to be with me is a primary level state that most people will experience at some time. If I feel angry about feeling hurt, then I create a meta-state (which we might call “aggrieved”). If I feel sad about feeling hurt, a completely different meta-state occurs (perhaps what we might call “self-pity”). If I feel compassionate about my hurt, the meta-state of “self-nurturing” may occur. Although in each case my initial emotional response is the same, the meta-state dramatically alters and determines the results for my life.
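As a rough way of picturing this combining of networks, a meta-state can be treated as one state applied to another. The little sketch below simply encodes the examples from the paragraph above; the name table is illustrative, not a standard taxonomy.

```python
# Meta-states as one state applied to another. The named combinations are
# taken from the examples in the text; anything else gets a generic label.
META_STATE_NAMES = {
    ("angry", "hurt"): "aggrieved",
    ("sad", "hurt"): "self-pity",
    ("compassionate", "hurt"): "self-nurturing",
}

def meta_state(state_about, primary_state):
    """Apply one state to another and name the resulting meta-state."""
    default = f"{state_about}-about-{primary_state}"
    return META_STATE_NAMES.get((state_about, primary_state), default)

print(meta_state("angry", "hurt"))          # aggrieved
print(meta_state("compassionate", "hurt"))  # self-nurturing
```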

How Emotional States Affect The Brain

To understand the effect of emotional states in the brain, it will be useful for us to clarify exactly what happens when a strategy is run in the brain. Strategies are learned behaviours, triggered by some specific sensory representation (a stimulus). What does “learned” mean? The human brain itself is made up of about one hundred billion nerve cells or neurons. These cells organise themselves into networks to manage specific tasks. When any experience occurs in our life, new neural networks are laid down to record that event and its meaning. To create these networks, the neurons grow an array of new dendrites (connections to other neurons). Each neuron has up to 20,000 dendrites, connecting it simultaneously into perhaps hundreds of different neural networks.

Steven Rose (1992) gives an example from his research with newly hatched chicks. After pecking at silver beads with a bitter coating, the chicks learn to avoid such beads; one peck is enough to cause the learning. Rose demonstrated that the chicks’ brain cells change immediately, growing 60% more dendrites in the next 15 minutes. These new connections occur in very specific areas, in what we might call the “bitter bead neural networks”. These networks now store an important new strategy, triggered each time the chick sees an object of the right shape and size to peck at. This is, of course, a visual strategy. The trigger (seeing a small round object) is Visual external (Ve) and the operation (checking the colour) is also Visual external (Ve). The chick then compares the colour of the object it has found with the colour of the horrible bitter beads in its visual recall (Ve/Vr) and, based on that test, either pecks the object or moves away from it (Ke). We would diagram this strategy: Ve > Ve > Ve/Vr > Ke.
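
The chick’s strategy is simple enough to write out as plain control flow. In the Python sketch below, the bead sizes and colours are invented; only the Ve > Ve > Ve/Vr > Ke sequence comes from Rose’s description.

    from dataclasses import dataclass

    @dataclass
    class Bead:
        size_mm: float
        colour: str

    BITTER_COLOUR = "silver"   # visual recall (Vr): the remembered bad bead

    def chick_strategy(obj):
        if not 1.0 <= obj.size_mm <= 5.0:  # Ve: trigger - peckable shape and size?
            return "ignore"
        seen = obj.colour                  # Ve: operation - check the colour
        if seen == BITTER_COLOUR:          # Ve/Vr: test against visual recall
            return "move away"             # Ke: exit - kinaesthetic response
        return "peck"                      # Ke: exit - kinaesthetic response

    print(chick_strategy(Bead(3.0, "silver")))  # move away
    print(chick_strategy(Bead(3.0, "brown")))   # peck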

Obviously, the more strategies we learn, the more neural networks will be set up in the brain. California researcher Dr Marion Diamond (1988) and her Illinois colleague Dr William Greenough (1992) have demonstrated that rats in “enriched” environments grow 25% more dendrite connections than usual, as they lay down hundreds of new strategies. Autopsy studies on humans confirm the process. Graduate students have 40% more dendrite connections than high school dropouts, and those students who challenged themselves more had even higher scores (Jacobs et alia, 1993).

How do messages get from one neuron to another in the brain? The transmission of impulses between neurons and dendrites occurs via hundreds of precise chemicals called “information substances”: substances such as dopamine, noradrenaline (norepinephrine), and acetylcholine. These chemicals float from one cell to another, transmitting messages across the “synapse” or gap between them. Without these chemicals, the strategy stored in the neural network cannot run. These chemicals are also the basis for what we are calling an emotional state, and they infuse not just the nervous system but the entire body, altering every body system. A considerable amount of research suggests that strong emotional states are useful to learning new strategies. J. O’Keefe and L. Nadel found (Jensen, 1995, p 38) that emotions enhance the brain’s ability to make cognitive maps of (understand and organise) new information. Dr James McGaugh, psychobiologist at UC Irvine, notes that even injecting rats with a blend of emotion-related hormones such as enkephalin and adrenaline means that the rats remember longer and better (Jensen, 1995, p 33-34). He says, “We think these chemicals are memory fixatives. They signal the brain, ‘This is important, keep this!’ Emotions can and do enhance retention.”

Neural Networks Are State Dependent

However, there is another important effect of the emotional state on the strategies we run. The particular mixture of chemicals present when a neural network is laid down must be recreated for the network to be fully re-activated and for the strategy it holds to run as it originally did. If someone is angry when a particular new event happens, for example, they have higher noradrenaline levels. Future events which produce similarly high noradrenaline levels will re-activate this neural network and the strategy it stores. As a result, the new event will be connected by dendrites to the previous one, and there will even be a tendency to confuse the new event with the previous one. If my childhood caregiver yelled at me and told me that I was stupid, I may have entered a state of fear, and stored that memory in a very important neural network. When someone else yells at me as an adult, if I access the same state of fear, I may feel as if I am re-experiencing the original event, and may even hear a voice telling me I’m stupid.

This is called “state dependent memory and learning” or SDML. Our memories and learnings, our strategies, are dependent on the state they are created in. “Neuronal networks may be defined in terms of the activation of specifically localised areas of neurons by information substances that reach them via diffusion through the extracellular fluid. In the simplest case, a 15-square mm neuronal network could be turned on or off by the presence or absence of a specific information substance. That is, the activity of this neuronal network would be “state-dependent” on the presence or absence of that information substance.” (Rossi and Cheek, 1988, p 57). Actually, all learning is state dependent, and examples of this phenomenon have been understood for a long time. When someone is drunk, their body is flooded with alcohol and its by-products. All experiences encoded at that time are encoded in a very different state to normal. If the difference is severe enough, they may not be able to access those memories at all until they get drunk again!
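
The logic of SDML can be sketched computationally. In the toy Python model below, each memory is stored together with the chemical state present at encoding, and can only be retrieved when the current state is similar enough. The chemical levels, the threshold and the stored items are all invented for illustration.

    def similarity(a, b):
        """Crude similarity between two chemical-state dictionaries (0 to 1)."""
        keys = set(a) | set(b)
        return 1.0 - sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys) / len(keys)

    MEMORIES = [
        ({"noradrenaline": 0.9}, "caregiver yelling: 'you are stupid'"),
        ({"noradrenaline": 0.1}, "calm problem-solving skills"),
    ]

    def recall(current_state, threshold=0.8):
        """Return only the memories whose encoding state matches the current one."""
        return [memory for encoded_state, memory in MEMORIES
                if similarity(current_state, encoded_state) >= threshold]

    print(recall({"noradrenaline": 0.9}))  # fear state: the old memory re-runs
    print(recall({"noradrenaline": 0.1}))  # calm state: coping skills available

The point of the sketch is the retrieval rule: nothing is being “hidden”, some networks are simply not activated by the chemical state the person is currently in.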

At times, the neural networks laid down in one experience or set of experiences can be quite “cut off” (due to their different neuro-chemical basis) from the rest of the person’s brain. New brain-scanning techniques are beginning to give us more realistic images of how this actually looks. Psychiatrist Don Condie and neurobiologist Guochuan Tsai used an fMRI scanner to study the brain patterns of a woman with “multiple personality disorder”. In this disorder, the woman switched regularly between her normal personality and an alter ego called “Guardian”. The two personalities had separate memory systems and quite different strategies. The fMRI scan showed that each of these two personalities used different neural networks (different areas of the brain lit up when each personality emerged). If the woman merely pretended to be a separate person, her brain continued to use her usual neural networks, but as soon as the “Guardian” actually took over her consciousness, it activated precise, different areas of the hippocampus and surrounding temporal cortex (brain areas associated with memory and emotion). (Adler, 1999, p 29-30)

Freud based much of his approach to therapy on the idea of “repression” and an internal struggle for control of memory and thinking strategies. This explanation of the existence of “unconscious” memories and motivations (“complexes”) can now be expanded by the state dependent memory hypothesis. No internal struggle is needed to account for any of the previously described phenomena. The “complex” (in Freudian terms) can be considered as simply a series of strategies being run from a neural network which is not activated by the person’s usual chemical states. Rossi and Cheek note “This leads to the provocative insight that the entire history of depth psychology and psychoanalysis now can be understood as a prolonged clinical investigation of how dissociated or state-dependent memories remain active at unconscious levels, giving rise to the “complexes” that are the source of psychological and psychosomatic problems.” (Rossi and Cheek, 1988, p 57).

Dr Lewis Baxter (1994) showed that clients with obsessive compulsive disorder have raised activity in certain specific neural networks in the caudate nucleus of the brain. He could identify these networks on PET scans, and show how, once the OCD was treated, these networks ceased to be active. Research on Post Traumatic Stress Disorder has also shown the state-dependent nature of its symptoms (van der Kolk et alia, 1996, p 291-292). Sudden re-experiencing of a traumatic event (called a flashback) is one of the key problems in PTSD. Medications which stimulate body arousal (such as lactate, a by-product of physiological stress) will produce flashbacks in people with PTSD, but not in people without the problem (Rainey et alia, 1987; Southwick et alia, 1993). Other laboratory studies show that sensory stimuli which recreate some aspect of the original trauma (such as a sudden noise) will also cause full flashbacks in people with PTSD (van der Kolk, 1994). This phenomenon is Pavlov’s “classical conditioning”, known in NLP as “anchoring”; state-dependent learning is the biological process behind classical conditioning. The results of such conditioning can be bizarre. Mice that have been given electric shocks while in a small box will actually voluntarily return to that box when they experience a subsequent physical stress (Mitchell et alia, 1985). This is not a very nice experiment, but it does shed light on some of the more puzzling behaviours that humans with PTSD engage in.

People come to psychotherapists and counsellors to solve a variety of problems. Most of these are due to strategies which are run by state-dependent neural networks that are quite dramatically separated from the rest of the person’s brain. This means that the person has all the skills they need to solve their own problem, but those skills are kept in neural networks which are not able to connect with the networks from which their problems are run. The task of NLP change agents is often to transfer skills from functional networks (networks that do things the person is pleased with) to less functional networks (networks that do things they are not happy about).

Rapport: The Work of The Mirror Neurons

In 1995 a remarkable group of neurons was discovered by researchers working at the University of Parma in Italy (Rizzolatti et alia, 1996; Rizzolatti and Arbib, 1998). The cells, now called “mirror neurons”, are found in the pre-motor cortex of monkeys and apes as well as humans. In humans they form part of Broca’s area, which is also involved in the creation of speech. Although the cells are related to motor activity (ie they are part of the system by which we make kinaesthetic responses such as moving an arm), they seem to be activated by visual input. When a monkey observes another monkey (or even a human) making a body movement, the mirror neurons light up. As they do, the monkey appears to involuntarily copy the same movement it has observed visually. Often this involuntary movement is inhibited by the brain (otherwise the poor monkey would be constantly copying every other monkey), but the resulting mimicry is clearly the source of the saying “monkey see, monkey do”.

In human subjects, when the brain is exposed to the magnetic field of transcranial magnetic stimulation (TMS), reducing conscious control, merely showing a movie of a person picking up an object will cause the subject to involuntarily copy the exact action with their hand (Fadiga et alia, 1995). This ability to copy a fellow creature’s actions as they do them has obviously been very important in the development of primate social intelligence. It enables us to identify with the person we are observing. When this area of the brain is damaged by a stroke, copying another’s actions becomes almost impossible. The development of speech has clearly been a result of this copying skill. Furthermore, there is increasing evidence that autism and Asperger’s syndrome are related to unusual activity of the mirror neurons. This unusual activity results in the difficulty the autistic person has understanding the inner world of others, as well as a tendency to echo speech parrot-fashion and to randomly copy others’ movements (Williams et alia, 2001).

Mirror neurons respond to facial expressions as well, so that they enable the person to directly experience the emotions of those they observe. This results in what researchers call “emotional contagion” – what NLP calls rapport (Hatfield et alia, 1994).

How The Brain Experiences Oneness

The posterior superior (back upper) part of the parietal cortex is called the Orientation Association Area or OAA (Newberg, D’Aquili and Rause, 2002, p 4). It sorts the entire visual image into two categories: self and other. When this area is damaged, the person has difficulty working out where they are in relation to what they see; just trying to lie down on a bed becomes so complicated that the person may fall onto the floor. Like many brain structures, the OAA is paired: there is a left OAA and a right OAA. The left OAA creates the sensation of a physical body, and the right OAA creates a sense of an outside world in which that body moves.

Andrew Newberg and Gene D’Aquili have studied the OAA in both Tibetan Buddhist meditators and Franciscan nuns (Newberg, D’Aquili and Rause, 2002, p 4-7). They used a SPECT (single photon emission computed tomography) camera to observe these people in normal awareness, and then at the moments when they were at a peak of meditation or prayer. At these peak moments, activity in the OAA ceased, as the person’s brain stopped separating their “self” out from the “outside world” and simply experienced life as it is: one undivided experience.

The Buddhist meditators would report, at this time, that they had a sense of timelessness and infinity, of being one with everything that is. The nuns tended to use slightly different language, saying that they were experiencing a closeness and at-oneness with God and a sense of great peace and contentment. The stilling of the sense of separate self creates an emotional state which is described variously as bliss, peace, contentment or ecstasy. Newberg and D’Aquili speculate that the same stilling of the OAA occurs in peak sexual experiences, and that earlier in human history this may have been the main source of such states of oneness (and may be its evolutionary “purpose” in the brain – Newberg, D’Aquili and Rause, 2002, p 126).

What we can be sure of from this research is that the human brain is designed to experience the profound states of oneness, and the resulting bliss, that spiritual teachers have reported throughout history. In fact, in some senses this way of experiencing life is more fundamental to our brain than the categorisation of the world into “me” and “not me” which happens in our ordinary awareness. The experience of oneness is also truer to the nature of the universe as revealed by quantum physics. Spiritual experience is as natural to us humans as seeing or talking. When the categorisation of sensory experience by the VCA and the OAA is stilled, the oneness of the universe is blissfully revealed. This, Newberg, D’Aquili and Rause say, is why God “won’t go away” in our history.

Where Awareness Lives In The Body

The nervous system is the most obvious hardware in which information (such as our memories) can be stored in the human body. This information is stored in electrical circuits, created by the formation of connections between the brain cells or neurons. The storage and electrical activation of these memories (what we might call thoughts and emotions) can be easily monitored by a machine called an electroencephalogram (EEG). To monitor these thought processes (“brain waves”), it is not necessary to drill a hole in the head. This is because brain waves, like all electricity, are a field phenomenon: they can be measured on the skin by electrodes, or even in the air around the person, if a sensitive enough instrument is used. As you read these words, you respond with thoughts and emotions, and electrical impulses travel through your brain, creating an electrical field which can easily be measured outside the body. But this process does not limit itself to the brain.

As electricity flows through the brain in all the new connections you have created, it has to get from the edge of one neuron to receptors on the edge of the next. Messages are carried between the neurons, and across the body at large, on messenger molecules called polypeptides (chemicals such as adrenaline and the endorphins). These molecules are the chemical basis of our emotional states. Adrenaline, for example, is involved in emotional states such as fear and excitement; the endorphins are involved in states such as joy and love. When we hear about these messenger chemicals, we are prone to imagine hundreds of different chemicals, each of which must be separately created by the body in order for you to feel a certain emotion. Researcher Candace Pert, co-discoverer of the receptor at which the opiate-like endorphins act, notes that while there are hundreds of messenger chemicals, they are all built from variants of a few molecules.

Pert explains: “All the evidence from our lab suggests that in fact there is actually only one type of molecule in the opiate receptors, one long polypeptide chain whose formula you can write. This molecule is quite capable of changing its conformation within its membrane so that it can assume a number of shapes. I note in passing that this interconversion can occur at a very rapid pace – so rapid that it is hard to tell whether it is one state or another at a given moment in time. In other words, receptors have both a wave-like and a particulate character, and it is important to note that information can be stored in the form of time spent in different states.” (Pert, 1986, p 14-15). As electricity reaches the end of a nerve cell, it is transferred to messenger molecules waiting there. These molecules change shape in response, sometimes fluctuating between one shape and another. The molecules then link on to the next nerve cell in the chain, or onto the white blood cells which form the immune system.
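
One way to picture “information stored in the form of time spent in different states” is as a duty-cycle code: a two-shape molecule whose stored value is read off as the fraction of time it spends in one of its shapes. The Python sketch below is a deliberately crude illustration of that encoding idea, not of any measured receptor kinetics; all the probabilities are invented.

    import random

    def fraction_in_shape_a(p_stay_a, p_stay_b, steps=100_000):
        """Fraction of time a two-shape molecule spends in shape A."""
        shape, time_in_a = "A", 0
        for _ in range(steps):
            if shape == "A":
                time_in_a += 1
                if random.random() > p_stay_a:   # chance of flipping to shape B
                    shape = "B"
            elif random.random() > p_stay_b:     # chance of flipping back to A
                shape = "A"
        return time_in_a / steps

    random.seed(0)
    print(fraction_in_shape_a(0.9, 0.9))    # about 0.5: one stored value
    print(fraction_in_shape_a(0.99, 0.9))   # about 0.9: a different stored value

Two different sets of flipping rates yield two reliably different time-fractions, so the molecule’s behaviour over time, rather than any single snapshot of its shape, carries the information.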

The Mobile Brain

One of Candace Pert’s most important discoveries was that the white “blood” cells of the immune system have as many receptors for information chemicals as neurons do. These white cells move both through the bloodstream and directly through the body tissue, acting as a mobile brain. The messenger molecules change shape in response to minute electrical shifts, so that your whole body-wide emotional state can change instantly in response to shifts such as thoughts. Your immune response is similarly affected from moment to moment by your emotional state. This enables your body to respond to quantum-level phenomena. As the immune cells transmit information about the quantum-level changes to every part of the body, they enable you as an “observer” to almost magically affect every cell.

The Brain In The Abdomen

So far, we have talked as if the brain in the head were the central co-ordinator of thought and emotion in the body. In fact, it is only one of three key areas; the other two are the gut (abdomen) and the heart. When the nervous system is developing in a human embryo, the original “neural crest” divides into two sections. One remains in the head, and the other migrates down into the abdomen. Only later are the two systems connected, by a two-way highway called the vagus nerve. The abdominal brain, which consists of two networks called the myenteric and submucosal plexuses, has 100 million or so neurons, more than the spinal cord. It is a separate functional brain which plays an important role in emotional responses such as anxiety, and in the processing of information during sleep.

The electrical patterns in the gut can be monitored by a machine called an electrogastrogram (EGG). Studies with this machine show that rapid eye movement (REM) sleep, in which dreams occur in the brain, coincides with a time of rapid muscle movements and thinking in the gut, explaining why indigestion is associated with bad dreams (Tache, Wingate and Burks, 1994). New York professor of anatomy Michael Gershon has collated research showing that the brain in the gut learns separately from the brain in the head, and creates its own daily routines which often override decisions made by the conscious mind (Gershon, 1998).

The Brain In The Heart

It is easy enough to understand that the huge mass of neurons in the gut forms a separate centre for thinking and emotion. We do not, on the other hand, usually think of the heart muscle as a potential thinking organ. However, anywhere that electricity is generated and stored in the body is potentially a storage organ for thought and emotion, as we shall see. The heart muscle uses an electrical system to operate, and this electrical activity can be monitored by placing electrodes on the skin of the chest or even the arms. The measuring instrument in this case is called an electrocardiogram (ECG or EKG).

Much of our understanding of the heart as a memory-storage organ comes from heart transplants. So far, neither the gut nor the brain can be transplanted; however, we would expect that if someone else’s brain were transplanted into your body, their “mind” would now exist in your body. The development of heart transplants over recent decades has afforded us a remarkable way of studying the role the heart plays in memory and emotion. Dr Paul Pearsall (1998), Dr Gary Schwartz and Dr Linda Russek (1999) are researchers in the new field of energy cardiology. They have found that almost all heart transplant recipients report experiencing memories and emotional responses which seem to come from the heart donor’s personality. Generally, these memories are minor. One young man kept saying to people after his transplant that “Everything is copacetic”. He then found out from the wife of his heart donor that “copacetic” was a code word she and her husband had used to reassure each other that they were okay. After her heart transplant, one young woman developed a craving for Kentucky Fried Chicken nuggets (which she had never eaten before). She then found out that her heart donor had died in a motorcycle accident with his pockets full of his favourite food: chicken nuggets.

A psychiatrist recounted to Dr Paul Pearsall a story where the memories were more pervasive: “I have a patient, an eight-year-old little girl who received the heart of a murdered ten-year-old girl. Her mother brought her to me when she started screaming at night about her dreams of the man who had murdered her donor. She said her daughter knew who it was. After several sessions, I just could not deny the reality of what this child was telling me. Her mother and I finally decided to call the police and, using the descriptions from the little girl, they found the murderer. He was easily convicted with evidence my patient provided. The time, the weapon, the place, the clothes he wore, what the little girl he killed had said to him … everything the little heart transplant recipient reported was completely accurate.” (Pearsall, 1998, p 7).

The heart is not merely a pump. It is an organ which stores and processes emotions and memories. When traditional cultures talk about “gut feelings” and “heart-felt truths” they are not speaking metaphorically. They are referring to two of the key organs in which human awareness is seated. In China the three “brains” (the one in the head, the gut and the heart) are called Dan-tiens. They are the main centres for the energy or “chi” which is regulated and directed in Traditional Chinese medicine.

The Brain And NLP: A Summary

A number of the factors I have discussed in this article create choices for an NLP Practitioner wanting to help a client transfer functional skills to the neural networks where they are needed. To summarise what we have said about the brain with this in mind:

  • The brain responds to visual, auditory, kinesthetic, olfactory-gustatory and auditory digital (verbal) cues. Remember the lemon!
  • Each of these modalities is run by a particular area of the cortex (outer brain).
  • The sensory organs are only indirectly connected to the areas of the cortex that analyse their data. On the way, the deeper areas of the brain where emotion and memories are stored influence the results of perception.
  • Within each modality (sensory system) in the cortex, there are specific smaller areas which adjust the qualities of that sensory experience (the ‘submodalities’). Visually, these include qualities such as colour and distance. When these submodalities change, the person’s “feeling state” about the experience will change.
  • Memories and imagined experiences are run through the same sensory areas of the brain as new experiences. The submodalities of our memories and our imaginings are altered by our emotional state as we think of those memories or imagine those possibilities.
  • All the outcomes people generate in their brain are the result of a series of internal sensory “representations”. In NLP such a series is called a strategy.
  • As people run through a strategy, and access information from the different modalities, there are a number of ways we can observe their thinking in these modalities. By watching their eye movements, we can see which area of the brain they are drawing information from. By listening to their words, we can hear which sensory system they are using to re-present the information to themselves.
  • Strategies can be thought of as having a trigger that starts them (also called an “anchor” in NLP), an operation where the person acts and collects information in some sense, a test where the person checks whether the results they got are the results they wanted, and an exit where they act based on this test. This sequence is known by the acronym TOTE (sketched in code after this list).
  • In real life, strategies are not simple sequential operations. The brain is able to meta-respond to a strategy.
  • Each strategy is run by a neural network (a set of neurons connected by dendrites and supported by a chemical mix of neuro-transmitters).
  • This chemical mix which supports a specific neural network is a key ingredient of what we call an “emotional state”, which is a brain-wide experience.
  • When a neural network is dependent on a state which is very different to those usually occurring, then the person’s usual coping skills may not be available while that state is active.
  • Social skills, including language use and empathy, depend on the mirror neurons in the language area of the brain. Mirror neurons result in a tendency to involuntarily copy the gestures and facial expressions of others, and thus to build an internal representation of their experience. This process is known in NLP as rapport.
  • Helping someone change involves helping them access or trigger useful neural networks (running useful strategies) at the times they need them (often times when, in the past, they were triggered into using unresourceful strategies).
  • In the human body, consciousness particularly manifests itself in those areas which generate and store electrical energy: the heart, the abdomen and the brain.
  • An understanding of the brain assists us to make sense of how human beings experience spiritual states. The Orientation Association areas (OAA) divide sensory experience into two categories: self and non-self. When the OAA is relatively still, then we experience the universe in its undifferentiated unity. This undivided experience of the universe occurs in deep meditation and prayer, and is the core experience in the field of spirituality.
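
As promised above, here is the TOTE sequence written out as a feedback loop in Python. The thermostat-style goal and the one-step operations are invented purely to show the loop’s shape; the Test-Operate-Test-Exit structure itself comes from Miller, Galanter and Pribram (1960).

    def tote(current, desired, tolerance=0.5):
        attempts = 0
        while abs(desired - current) > tolerance:  # Test: is the outcome reached?
            if current < desired:                  # Operate: act on the world
                current += 1.0
            else:
                current -= 1.0
            attempts += 1
            if attempts > 100:                     # safety valve for the sketch
                break
        return current                             # Exit: act on the result

    # The trigger (anchor) fires the strategy; the loop then runs
    # until the test passes.
    print(tote(current=18.0, desired=21.0))  # 21.0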

Richard Bolstad PO Box 35111, Browns Bay, New Zealand
E-mail: learn@transformations.org.nz

Bibliography

  • Adler, R. “Crowded Minds” in New Scientist, Vol. 164, No. 2217, p 26-31, December 18, 1999
  • Balcetis, E. and Dunning, D. “Wishful seeing: Desired objects are seen as closer” p 147-152 in Psychological Science, Vol 21, No. 1, January 2010
  • Bandler, R. Using Your Brain For A Change Real People Press, Moab, Utah, 1985
  • Baxter, L.R. “Positron emission tomography studies of cerebral glucose metabolism in obsessive compulsive disorder” p 54-59 in Journal of Clinical Psychiatry, Vol 55, Supplement, 1994
  • Bisiach, E. and Luzzatti, C. “Unilateral Neglect of Representational Space” p 129-133 in Cortex, 14 (4), 1978
  • Blakemore, C. and Cooper, G. “Development of the Brain Depends on The Visual Environment” p 477-478 in Nature, 228, 1970
  • Bolstad, R. and Hamblett, M. “Visual Digital: Modality of the Future?” in NLP World, Vol 6, No. 1, March 1999
  • Bolstad, Richard, “NLP: The Quantum Leap” in NLP World, Vol 3, No. 2, July 1996, p 5-34
  • Cairns-Smith, A.G. Evolving The Mind Cambridge University, Cambridge, 1998
  • Day, M. “An Eye Movement Phenomenon Relating to Attention, Thoughts, and Anxiety” in Perceptual and Motor Skills, 1964
  • Diamond, M. Enriching Heredity: The Impact of the Environment on the Brain Free Press, New York, 1988
  • Dilts, R. Modelling With NLP Meta Publications, Capitola, California, 1998
  • Dilts, R. Roots Of Neuro-Linguistic Programming, Meta Publications, Cupertino, California, 1983
  • Dilts, R., Grinder, J., Bandler, R. and DeLozier, J. Neuro-Linguistic Programming: Volume 1 The Study of the Structure of Subjective Experience, Meta Publications, Cupertino, California, 1980
  • Dilts, R.B. and Epstein, T.A. Dynamic Learning, Meta Publications, Capitola, 1995
  • Dilts, R.B. Strategies of Genius, Volume I, II, and III, Meta Publications, Capitola, 1994-5
  • Ersner-Hershfield, H., Garton, M.T., Ballard, K., Samanez-Larkin, G.R. and Knutson, B. “Don’t stop thinking about tomorrow: Individual differences in future self-continuity account for saving” p 280-286 in Judgment and Decision Making, Vol 4, No. 4, June 2009
  • Fadiga, L., Fogassi, G., Pavesi, G. and Rizzolatti, G. “Motor Facilitation during action observation: a magnetic stimulation study” p 2608-2611 in Journal of Neurophysiology, No. 73, 1995
  • Fauconnier, G. and Turner, M. The Way We Think Basic Books, New York, 2002
  • Freud, S. “Project For A Scientific Psychology” in The Standard Edition Of The Complete Psychological Works Of Sigmund Freud Hogarth Press, London, 1966
  • Gershon, M. The Second Brain Harper Collins, New York, 1998
  • Gordon, D. Therapeutic Metaphors Meta, Cupertino, California, 1978
  • Greenough, W.T., Withers, G. and Anderson, B. “Experience-Dependent Synaptogenesis as a Plausible Memory Mechanism” p 209-229 in Gormezano, I. And Wasserman, E. ed Learning and Memory: The Behavioural and Biological Substrates Erlbaum & Associates, Hillsdale, New Jersey, 1992
  • Hall, M. “The New Domain of Meta-States in the History of NLP” p 53-60 in NLP World, Vol 2, No. 3, November 1995
  • Hatfield, E., Cacioppo, J. and Rapson, R. Emotional Contagion Cambridge University Press, Cambridge, 1994
  • Hoffman, D.D. Visual Intelligence W.W. Norton & Co., New York, 1998
  • James, W. The Principles Of Psychology (Volume 1 and 2), Dover, New York, 1950.
  • Jensen, E. Brain-Based Teaching And Learning Turning Point, Del Mar, California, 1995
  • Kalat, J.W. Biological Psychology Wadsworth Publishing, Belmont, California, 1988
  • Lakoff, G. and Johnson, M. Metaphors We Live By University of Chicago Press, Chicago, 1980
  • Luria, A.R. Higher Cortical Functions In Man, Basic Books, New York, 1966
  • Malloy, T.E. “Cognitive strategies and a classroom procedure for teaching spelling” Department of Psychology, University of Utah, 1989
  • Malloy, T.E., Mitchell, C. and Gordon, O.E. “Training cognitive strategies underlying intelligent problem solving” p 1039-1046 in Perceptual and Motor Skills, No 64, 1987
  • Marshall, I., “Consciousness and Bose-Einstein condensates” in New Ideas in Psychology, 7, 1989, p 73-83
  • Maturana, H.R. and Varela, F.J. The Tree Of Knowledge Shambhala, Boston, 1992
  • Mealey, L., Daood, C., & Krage, M. “Enhanced Memory for Faces Associated with Potential Threat.” p 119-128 in Ethology and Sociobiology, Vol. 17, No. 2, 1996
  • Miller, G., Galanter, E. and Pribram, K. Plans And The Structure Of Behaviour, Henry Holt & Co., 1960
  • Mitchell, D., Osbourne, E.W., and O’Boyle, M.W. “Habituation under stress: Shocked mice show non-associative learning in a T-maze” p 212-217 in Behavioural and Neural Biology, No 43, 1985
  • Newberg, A., D’Aquili, E. and Rause, V. Why God Won’t Go Away Ballantine, New York, 2002
  • O’Connor, J. and Seymour, J. Introducing Neuro-Linguistic Programming, Harper Collins, London, 1990
  • O’Connor, J. and Van der Horst, B. “Neural Networks and NLP Strategies: Part 2” p 30-38 in Anchor Point Vol 8, No. 6, June 1994
  • Pearsall, P. The Heart’s Code Broadway, New York, 1998
  • Pearson, J., Clifford, C.W.G. and Tong, F. “The Functional Impact of Mental Imagery on Conscious Perception” p 982-986 in Current Biology, Vol 18, July 8, 2008
  • Pert, C.B. “The Wisdom of the Receptors: Neuropeptides, The Emotions and Bodymind” in Advances, 3(3), p 8-16, 1986
  • Pert, C.B. Molecules of Emotion Simon & Schuster, New York, 1999
  • Pettigrew, T.F. et alia, “Unconscious Interpretation Precedes Seeing” in Brain/Mind Bulletin, March 15, 1976
  • Podolsky, E. The Doctor Prescribes Colours National Library Press, New York, 1938
  • Rainey, J.M., Aleem, A., Ortiz, A., Yaragani, V., Pohl, R. and Berchow, R. “Laboratory procedure for the inducement of flashbacks” p 1317-1319 in American Journal of Psychiatry, No. 144, 1987
  • Ramachandran, V.S. A Brief Tour of Human Consciousness Pearson Education, New York, 2004
  • Rizzolatti, G., Fadiga, L., Gallese, V. and Fogassi, L. “Premotor cortex and the recognition of motor actions” p 131-141 in Cognitive Brain Research, No. 3, 1996
  • Rizzolatti, G. and Arbib, M.A. “Language within our grasp” p 188-194 in Trends in Neuroscience, No. 21, 1998
  • Rose, S. The Making of Memory, Bantam, New York, 1992.
  • Rossi, E.L. and Cheek, D.B. Mind-Body Therapy, W.W. Norton & Company, New York, 1988
  • Rossi, E.L. The Psychobiology Of Mind-Body Healing, W.W. Norton & Company, New York, 1986
  • Rossi, E.L. The Symptom Path To Enlightenment, Palisades Gateway Publishing, Pacific Palisades, California, 1996
  • Sacks, O. “Scotoma: Forgetting and Neglect in Science” in Silvers, R. ed Hidden Histories of Science Granta, London, 1995
  • Satir, V. and Baldwin, M. Satir Step By Step, Science and Behaviour, Palo Alto, California, 1983
  • Schwartz, G.E.R. and Russek, L.G.S. The Living Energy Universe Hampton, Charlottesville, Virginia, 1999
  • Seligman, M.E.P. Learned Optimism, Random House, Sydney, 1997
  • Smith, E.L. and Laird, D.A., “The Loudness of Auditory Stimuli Which Affect Stomach Contractions In Healthy Human Beings” in Journal of the Acoustical Society of America, 2, p 94-98, 1930
  • Southwick, S.M., Krystal, J.H., Morgan, A., Johnson, D., Nagy, L., Nicolaou, A., Henninger, G.R., and Charney, D.S. “Abnormal noradrenergic function in posttraumatic stress disorder” p 266-274 in Archives of General Psychiatry, No. 50, 1993
  • Tache, Y., Wingate, D.L., and Burks, T.F. Innervation of the Gut CRC Press, Boca Raton, Florida, 1994
  • Van der Kolk, B.A., McFarlane, A.C. and Weisaeth, L. eds Traumatic Stress Guilford, New York, 1996
  • Williams, J.H.G., Whiten, A., Suddendorf, T. and Perrett, D.I. “Imitation, mirror neurons and autism” p 287-295 in Neuroscience and Biobehavioural Review, No 25, 2001
  • Witt, J.K., & Proffitt, D.R. “Perceived slant: A dissociation between perception and action.” Perception, 36, page 249-257. 2007
  • Witt, J.K., Linkenauger, S.A., Bakdash, J.Z., Proffitt, D.R. “Putting to a bigger hole: Golf performance relates to perceived size.” Psychonomic Bulletin and Review, 15(3). 2008
  • Yapko, M., “The Effects of Matching Primary Representational System Predicates on Hypnotic Relaxation” in the American Journal of Clinical Hypnosis, 23, p 169-175, 1981