The ancients gave a lot of emphasis to speech, as opposed to the other senses such as sight or hearing.
For example, in the Chandogya Upanishad, in Sanatkumara's instructions to Narada, one of the phrases on which he asks Narada to meditate is
"Vag-vava namno bhuyasi, vag-va rg-vedam"
Due to speech has come the name; due to speech is the Rig Veda.
Another example, from Japji Sahib:
asaNkh naav asaNkh thaav agamm agamm asaNkh lo-a. asaNkh kehahi sir bhaar ho-ay, akhree naam akhree saalaah. akhree gi-aan geet gun gaah, akhree likhan bolan baan. akhraa sir sanjog vakhaan.
Countless names, countless principles, countless realms, so many that the head spins. Due to words is the name, due to words is the sentence, due to words is intelligence, and songs and tunes to sing. Due to words is the writing and the talking, and due to words is the fate that is written on our heads.
It has got me thinking: why? What is so different about speech compared to the other senses? Speech is the only output faculty; all the others are input sensory organs. Yet, what is so important about speech?
How we sense anything is common knowledge: the sense organ sends nerve impulses to the brain, and the brain has a translation centre that turns these signals into thoughts. So, while we think in a certain language, the underlying operation of the brain is electrical signals. The strange question that arises is: when I say a word such as "forest", does the same set of electrical signals get generated in every human being who hears it? Does every human being perceive speech the same way? Looking at it differently, if I write a program in the C language and compile it on Windows, I get a different binary executable than if I compile it on Linux. The executable also varies with the processor the OS runs on: code compiled for an Intel x86 machine carries an entirely different set of instructions from code compiled for, say, an ARM machine.
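A loose analogy for the point above can be sketched in a few lines of Python: the same word, stored under two different encoding schemes, becomes two entirely different byte sequences, just as the same source code becomes different machine code on different platforms. (The choice of UTF-8 versus UTF-16 here is my illustration, not something from the examples above.)

```python
# The same word, "forest", encoded under two different schemes.
word = "forest"

utf8_bytes = word.encode("utf-8")
utf16_bytes = word.encode("utf-16-le")

print(utf8_bytes)   # b'forest'
print(utf16_bytes)  # b'f\x00o\x00r\x00e\x00s\x00t\x00'

# The "meaning" is identical, but the underlying signals differ:
assert utf8_bytes != utf16_bytes
assert utf8_bytes.decode("utf-8") == utf16_bytes.decode("utf-16-le")
```

The meaning survives only because each byte sequence is read back with the same scheme that produced it, which is exactly the question the next paragraph raises about two human brains.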
In speech we have two sets of electrical signals: one set in the producer, who encodes the signals in their brain, and one set in the receiver, who decodes them. It is like a radio with a transmitter and a receiver, though not as simple. Radio signals pass through pre-determined encoders and decoders, and hence, noise aside, we can be sure that the translation of the signals will be exact. Can we be sure of this in the case of human beings? If even a comparatively simple machine such as a computer shows such variations, can I look at a complex human being and even entertain the idea that the person listening to me has the same set of signals generated in their brain as in mine?
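The radio contrast above can be made concrete with a toy model. The vocabularies below are invented for illustration: a single shared codec stands in for the radio's pre-agreed encoder/decoder pair, while two separate private "mental dictionaries" stand in for two brains that never agreed on a coding scheme.

```python
# Radio: one pre-determined codec, used by both transmitter and receiver.
shared_codec = {"forest": 0b1010, "river": 0b0110}
decoder = {code: word for word, code in shared_codec.items()}

signal = shared_codec["forest"]      # transmit
assert decoder[signal] == "forest"   # exact recovery, noise aside

# Two humans: each holds a private encoding of the very same word.
alice_brain = {"forest": 0b1010}
bob_brain = {"forest": 0b0011}
assert alice_brain["forest"] != bob_brain["forest"]  # same word, different internal signals
```

The roundtrip is guaranteed only in the first case, because encoder and decoder were fixed in advance; nothing in the second case forces the two private codes to coincide.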
Language has definitely created a huge barrier for us. What mechanism of communication could prevent this from happening? Can brain signals be perceived at all by other brains? A very interesting question, isn't it?
How can I, as a person, transfer the vibrations that occur within my brain, in their exact form, to the next human being? There have been experiments where brain signals were recorded from a person at one location in the world while they were thinking, and then transmitted to another human being at another location, using the internet as the communication mechanism. Now, I wonder what would happen if the transmitting person thought about something that the receiving person has never heard of at all. Would the transmitted vibrations simply be missed? How would explanation occur here? Again, does this kind of communication imply that there could be no misunderstanding between two people, if it can be achieved?
Are thoughts really vibrations and electrical signals within our brains? Then why do we need an external translation unit such as speech to convey their meaning? Why can we not just do something similar to a radio's transmit/receive to communicate? Why have we as people developed such a convoluted mechanism of communication? Is it because only then is the required entropy introduced to take this evolution further?