Visual Digital: Modality Of The Future?

© Richard Bolstad and Margot Hamblett, 1998

Intelligence: The Next Leap

For some years now, NLP Practitioners have been teaching and making use of what we believe is an entirely new sensory system: a system we call Visual Digital. This system is central to "scientific" thinking, and we believe it is an "emergent" system (new to human beings). We will put this system in the context of the history of thinking about different sensory/perceptual/intelligence systems, both inside and outside the field of NLP. In doing so, we will show how Visual Digital explains some puzzling connections and fills in gaps in our thinking. We will then suggest how you can continue the process of confirming the existence of this important new modality.

A. The Development of the Concept of Sensory Systems

Sensory Systems: 4-Tuples and More!

In 1890, psychologist William James had already defined four key types of "thought". He said (1950, Volume 2, p 58) "In some individuals the habitual "thought stuff", if one may so call it, is visual; in others it is auditory, articulatory [verbal], or motor [kinesthetic]; in most, perhaps, it is evenly mixed." Research identifying the neurological bases for these different types of "thought" began to emerge in the mid-twentieth century. Much of it was based on the discovery that damage to specific areas of the brain caused specific sensory problems. A.R. Luria (1966) identified separate areas associated with vision, hearing, sensory-motor activity, and speech (the latter localised to the dominant hemisphere of the brain). Evidence that eye movements were correlated with the use of different areas of the brain emerged in the 1960s (amongst the earliest being the study by M. Day, 1964).

In their 1980 presentation of NLP, Dilts, Grinder, Bandler and DeLozier (1980, p 17) suggest that all human experience can be coded as a combination of internal and external vision, audition, kinesthesis and olfaction/gustation. The combination of these senses at any time (VAKO/G) is called a 4-tuple. Kinesthetic external is referred to as tactile (somatosensory touch sensations) and kinesthetic internal as visceral (emotional and proprioceptive).

The developers of NLP proposed that each sensory system was run by a separate area of the brain, and that cues to its access were given by a particular eye movement pattern (1980, p 81). Which system a person is using to represent their experience can be identified by the words (predicates) they use to describe their subject. For example, someone thinking visually might say "I see what you mean", someone thinking auditorily "I've tuned in to you", and someone thinking kinesthetically "Now I grasp that". Robert Dilts (1983, section 3, p 1-29) showed that different brain wave (EEG) patterns were associated with visual, auditory, tactile and visceral thought.

Auditory Digital

The developers of NLP then suggested (1980, p 75) that auditory representation can be usefully divided into digital (verbal) and tonal. Auditory digital is described as a class of secondary experience, run from the dominant hemisphere of the brain and metacommenting on the other types of experience. The standard NLP diagram of accessing cues places auditory digital on the left side, suggesting that all the accessing cues on that side may correspond to the dominant hemisphere, where verbal abilities are known to be processed.

The NLP eye cues model places auditory digital down, opposite kinesthetic, rather than level, opposite auditory tonal. Howard Gardner argues that verbal thought should not be called "auditory", as it is an entirely separate form of intelligence. He points out that the same area of the brain used by hearing people to generate verbal language is used by deaf people to generate sign language (Gardner, 1993, p 52). Some linguists believe that language evolved out of kinesthetic-gestural communication. Chimpanzee language, for example, is more gestural than vocal (Temerlin, 1976, p 113-117).

Psychologist William James identified the auditory digital or "articulatory" sense in 1890. He considered it a development of kinesthetic, arguing (James, 1950, Volume 2, p 63) "Most persons, on being asked in what sort of terms they imagine words, will say "in terms of hearing." …. Partly open your mouth and then imagine any word with labials or dentals in it, such as "bubble", "toddle". Is your image under these conditions distinct? To most people the image is at first "thick", as the sound of the word would be if they tried to pronounce it with the lips parted…. The experiment proves how dependent our verbal imagination is on actual feelings in lips, tongue, throat, larynx etc."

Certainly, in terms of human development, language begins kinesthetically. From around the seventh month in the womb, the human infant responds with kinesthetic movements to its mother's speech. By birth, every phoneme used by the mother (speech sounds such as "Ma" or "Gu") elicits a unique movement by the baby. Every time the mother says "Yes", the baby may twitch its left shoulder, for example. The baby learns language as a kinesthetic experience. Only later does it add its own imitation sounds to the process (Bernard and Sontag, 1947, and Condon and Sander, 1974).

Refinements to the NLP Model

Since 1980 there have been some proposals from within NLP to add other new fields to the list of sensory systems.

Cecile Carson suggested (1991, p 181-194) adding the vestibular sense (the feeling of where the whole body is positioned in space). Carson points out that this sense has previously been considered a subset of kinesthetic, reviews the characteristics of the previously accepted sensory systems, and argues that similar characteristics can be found for the vestibular system: eye accessing cues (rotations of the eyes counter to body movements), predicate words, submodalities and a neurologically separate basis.

Bay Butler (1998, p 37-40) attempts, by altering the definition from "sensory systems" to "perceptual systems", to add five new systems to the list. These are the external systems of sensing, connection, knowing and presence, and the internal system of dreaming. Butler suggests that there are specific predicate words for each of these systems, usually subsumed under kinesthetic. He does not identify any distinct neurological basis or eye movements for the systems.

Steven DeVore is an educational psychologist who worked with Dr Karl Pribram of Stanford University. DeVore points out (1982, p 13) that the oculomotor, trochlear and abducens nerves, which control eye movement, stem from the reticular formation, an area of the brain which acts as a sensory filter, deciding which messages are most important to pay attention to. This explains the existence of the eye accessing cues, and how movement of the eyes can select which area of sensory experience is being attended to. His studies suggested that there were also eye accessing cues for taste and smell. Working mainly with sports people, he failed to identify an auditory digital area, believing that down left was normally the site of emotional (visceral) recall.

B. Evidence Of A New Visual System

Seven Intelligences and Five Senses

The 1980s saw the development of a new model for categorising ways of thinking. Howard Gardner proposed (1993) the model of Multiple Intelligences, described in Anchor Point by Sarah Norton (1994). This model has become very popular in the field of Accelerated Learning. Gardner's basic idea was that there is not one intelligence, as measured by previous IQ tests, but several. In order to qualify as a separate intelligence (a definition he notes is still somewhat arbitrary), Gardner (1993, p 62-66) proposed eight criteria:

  1. Brain damage to a specific area will selectively disable the intelligence, leaving others intact.
  2. There exist people with exceptional ability in that area alone (idiot savants).
  3. There is an identifiable set of core strategies associated with the intelligence.
  4. There is a recognised “normal” developmental history of the intelligence through the lifespan.
  5. The intelligence has plausible evolutionary value to humanity.
  6. The intelligence can be studied in experimental psychology.
  7. Intelligence tests can reveal the level of the intelligence.
  8. The intelligence can be encoded symbolically and thus the results can be passed on culturally.

As Sarah Norton points out, it is tempting to link the Seven Intelligences with the Sensory systems of NLP. If there is a biological basis for each system, the results should coincide. We believe that they do, although our own opinion about how they do differs slightly from Norton’s. Norton describes logical-mathematical intelligence as corresponding to visual. Our opinion is that this is true, but with an interesting twist. Rather than the standard visual modality, we believe the sensory system which corresponds to logical-mathematical intelligence is a separate system which we will call visual digital. Here is our correlation:

What Connects Left-Right Distinctions and Counting?

Gardner describes logical-mathematical intelligence as characterised by the ability to handle long chains of reasoning, as achieved by Newton or Einstein. He admits (1993, p 159) that this intelligence is not as "pure" or "autonomous" an intelligence as the other six. Gardner also expresses some puzzlement about where logical-mathematical intelligence is centred in the brain. He points out that the evidence of brain damage links logical-mathematical intelligence to some rather different skills (Gardner, 1993, p 156-158). Lesions of the parietal lobe of the dominant hemisphere, and of the temporal and occipital association areas next to this lobe, affect mathematical ability. In particular, damage to the angular gyrus there produces a medical condition known as Gerstmann syndrome. In this, the ability to calculate is impeded, as is the ability to understand complex grammatical structures such as presuppositions. But what is really interesting is that the same damage produces an inability to tell left from right, certain difficulties with drawing, and a difficulty with finger recognition (the latter being related to the counting problem). This suggests that the type of logic Gardner is referring to is somehow linked to the ability to distinguish left and right, and to creative visual tasks such as drawing.

In the field of NLP, the ability to distinguish left from right has been of great interest because it is central to the condition "dyslexia". The NLP solution to dyslexia has been twofold (Blackerby, 1996, p 153-155): firstly, to retrain the person's recognition of left and right; secondly, to teach the person a Visual Recall spelling strategy (looking up left and picturing the word as it has been seen before). This visual recall strategy has been well researched (Dilts and Epstein, 1995, p 409-416) as a skill for both spelling and mathematics. In our experience, one session practising the visual recall strategy will often result in the person labelled "dyslexic" being able to spell perfectly.

What NLP calls the "Visual Recall strategy" seems, by itself, to improve left-right distinctions. Is it appropriate, though, to call this skill "Visual Recall"? Often the spelling strategy involves making the picture of the word brighter, bigger, or closer than it was in real life, or even changing its colour or breaking the word into smaller pieces. And yet the strategy still works. The strategy being taught is in fact less the recall of an image actually seen, and more a visual way of handling words as symbols. We consider it to be a Visual form of Digital thinking.

How Scientists And Mathematicians Think

It is often assumed that, since the representational system most valued in scientific circles is digital, the thinking processes most used in scientific study must be mainly auditory digital. This is certainly not so for the developers of science. Albert Einstein, for example (quoted in Dilts, 1994-5, Volume II, p 48-49), says "I very rarely think in words at all. A thought comes, and I may try to express it in words afterward." and "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be "voluntarily" reproduced and combined." The inventor Nikola Tesla described his own visual thinking in rich detail. He says (Dilts, 1994-5, Volume III, p 369) that his visual field "becomes animated with innumerable scintillating flakes of green, arranged in several layers and advancing towards me. Then, there appears, to the right, a beautiful pattern of two systems of parallel and closely spaced lines, at right angles to one another, in all sorts of colours with yellow green and gold predominating…."

Physicist David Bohm describes his own experience of breakthroughs in a similar style (Bohm, 1983, p 51): "For example, one may be working on a puzzling problem for a long time. Suddenly, in a flash of understanding, one may see the irrelevance of one's whole way of thinking about the problem, along with a different approach in which all the elements fit in a new order and in a new structure." Werner Heisenberg describes his co-developer of quantum physics, Niels Bohr, thus (in Gardner's discussion of logical-mathematical intelligence, 1993, p 148): "Bohr uses classical mechanics or quantum theory that way just as a painter uses his brush and colours…. If he keeps the picture before his mind's eye, the artist can use his brush to convey, however inadequately, his own mental picture to others. Bohr knows precisely how atoms behave during light emission, in chemical processes and in many other phenomena, and this has helped him to form an intuitive picture of the structure of different atoms: it is not at all certain that Bohr himself believes that electrons revolve inside the atom. But he is convinced of the correctness of his picture. The fact that he cannot yet express it by adequate linguistic or mathematical techniques is no disaster. On the contrary it is a great challenge."

Auditory Digital and Visual Digital

The type of thought described by these mathematicians and scientists is visual; but it is not the recall of seen images or the construction of unique new images that predominates. It is the use of a series of (individualised) visual symbols, representing established concepts, which can be joined into sentences (Gardner's "long chains of reasoning") and seen as whole "new structures". Using the NLP notions of visual construct and visual recall, we would have assumed that the accessing cue for this process would be up right for most people (visual construct). In fact, repeatedly we have found that the accessing cue is up left, in the visual recall area. This makes sense once you remember that the scientific process is being run by the dominant hemisphere; the same side that runs speech (auditory digital).

Our attention was drawn to this type of processing by a small group of individuals who have attended our NLP Practitioner trainings and access almost entirely in visual recall. These people often have similar "personality" traits to those who specialise in auditory digital accessing. They use a lot of digital predicates, and they have difficulty accessing direct sensory experience (eg using visual sensory acuity during a conversation). They tend to think of their ideas as appearing fully formed in their minds, similar to the "knowing" perceptual system described by Butler (1998). These individuals are most puzzled when asked if they think in words (much as Einstein was). Their overall life functioning is not always successful, however, and one of them had been diagnosed schizophrenic as a result of symptoms arising from visual imagery (the sort of imagery that Nikola Tesla might have been proud of).

The predicates and nouns used by these people as they describe their thinking are both digital and visual, but when asked, the person may deny that they are referring to words or to pictures. Examples from the writings of the scientists above include:

  • signs and images which can be reproduced and combined
  • a beautiful pattern of systems
  • a flash of understanding
  • all the elements fit together in a new order and a new structure
  • an intuitive picture of the structure
  • convinced of the correctness of his picture

Directions For Research

It seems likely that auditory digital thought is a new development in our species. As a meta-system, it has made human society possible. In the brain (and in the accessing cue diagram) it has taken attention away from direct kinesthetic experience, and shifted it to the experience of gestural or "labio-laryngeal" (verbal) symbols. People who specialise in this type of thinking may find that it interferes with their kinesthetic accessing. Others, who use it less (such as DeVore's sports players), may find that the auditory digital accessing cue (down left) still allows them to access kinesthetically (see fig. 3 above).

We suggest that visual digital is an even newer development. It has made advanced science possible. In the brain (and in the accessing cue diagram) it takes attention away from direct visual experience, and shifts it to the experience of visual/conceptual symbols. People who specialise in this type of thinking may find that it interferes with their visual external accessing. For most people currently, it is little used, and the visual digital accessing position (up left) is used mainly for visual recall. Installing the NLP spelling strategy is training a person in the use of visual digital thinking. So is much of what we do when using a computer. In typing this I was frequently manipulating visual symbols at high speed, without ever speaking or reading the verbal symbols I would once have used.

If we are right, the eye accessing cues of inventors and scientists will show a great deal of what would have been called “visual recall” accessing, particularly in situations where “visual construct” or “auditory digital” would have been expected. So far we have observed this in a small number of individuals. We invite NLP Practitioners to investigate this more fully. Furthermore, if we are right, then to enhance conceptual (“scientific”) thought, we would direct the person’s eyes up to “visual recall”, and not “visual construct”. We would encourage them to create visual symbol systems (their own alphabets) or to use visual symbol systems already available.

Thomas Armstrong (1993, p 107-108) lists a variety of ways to develop visual digital (logical-mathematical) thought, should you choose to do so. They include:

  • Play games such as Go, Dominoes and Chess
  • Buy and use a book of logic puzzles and “brain teasers”
  • Practise calculating simple maths problems in your head
  • Read the business section of your daily newspaper (and look up unfamiliar economic concepts)
  • Learn a computer programming language (eg BASIC, PASCAL) or learn to use a new computer program
  • Subscribe to a science magazine

Meta-sensory systems are a mixed blessing. They take us away from our sensory experience, but enable us to manipulate it in new and unexpected ways. Visual digital is much faster than auditory digital, and may be more suited to the computer age. As we do with computers, we can think carefully about how we use the visual digital sense for good. Like computers, though, visual digital is here to stay.

Bibliography/Sources

  • Armstrong, T. 7 Kinds Of Smart, Plume/Penguin, New York, 1993
  • Bernard, J. and Sontag, L. “Fetal Reactions to Sound” in Journal of Genetic Psychology, No 70, 1947, p 209-210
  • Blackerby, D.A. Rediscover The Joy Of Learning, Success Skills, Oklahoma, 1996
  • Bohm, D. Wholeness And The Implicate Order, Ark, London, 1983
  • Bolstad, R. and Hamblett, M. Transforming Communication, Longman, Auckland, 1998
  • Butler, B. “The Lesser Used Perceptual Systems” in Anchor Point, September 1998, p 37-40
  • Carson, C. A. “The Vestibular (VS) System in NLP” p 181-194 in Milliner, C.B., De Lozier, J., Grinder, J. and Topel, S. ed. Leaves Before The Wind, Grinder DeLozier & Associates, Scotts Valley, California, 1991
  • Condon, W. and Sander, L. “Neonate Movement Is Synchronised With Adult Speech: Interactional Participation and Language Acquisition”  in Science, January 1974
  • Day, M. "An Eye Movement Phenomenon Relating to Attention, Thoughts, and Anxiety" in Perceptual and Motor Skills, 1964
  • DeVore, S. The Neuropsychology of Achievement Study Guide, SyberVision Systems, San Francisco, 1982
  • Dilts, R.B. and Epstein, T.A. Dynamic Learning, Meta Publications, Capitola, 1995
  • Dilts, R.B. Strategies of Genius, Volume I, II, and III, Meta Publications, Capitola, 1994-5
  • Dilts, R., Grinder, J., Bandler, R. and DeLozier, J. Neuro-Linguistic Programming: Volume 1 The Study of the Structure of Subjective Experience, Meta Publications, Cupertino, California, 1980
  • Dilts, R. Roots Of Neuro-Linguistic Programming, Meta Publications, Cupertino, California, 1983
  • Gardner, H. Frames Of Mind: The Theory Of Multiple Intelligences, BasicBooks, New York, 1993
  • James, W. The Principles Of Psychology (Volume 1 and 2), Dover, New York, 1950.
  • Luria, A.R. Higher Cortical Functions In Man, Basic Books, New York, 1966
  • Norton, S. “Excellence in Education: NLP and Multiple Intelligences” in Anchor Point, June 1994, p 50-53
  • Temerlin, M.K. Lucy: Growing Up Human, Souvenir Press, London, 1976

Richard Bolstad and Margot Hamblett first wrote this article in 1998.