Ancient Science: Real-time vs Offline Analysis
It is strange that the more I get into understanding how everything around me works, the more I get into data analytics and AI concepts. This is another of those topics that reads much like a data analytics topic, but is very much related to the formation of reality as explained in the various Sanskrit literature. I find myself more and more questioning: "Should we really be studying the creation of reality, the origins of this world around us and everything else using physics, maths, chemistry and the like, which are considered to be real science, or should we study it using logic and data, as an algorithm with which this universe is created and sustained?" It definitely seems that using logic and data gives us a much better understanding and a clearer path to knowing the truth. It helps us streamline our thoughts, so that we know which types of experiments to conduct to check whether something is true, rather than approaching it purely from the perspective of the observable universe. Logic and data also help us go into the world of the non-observable, thus not excluding it from the circle of study.
In fact, it should not be surprising to us that this is the case. If we take data from the various history books, the various archaeological studies and the various experimental studies, and start looking at how these have progressed, we find that data-analytical algorithms seem to have been applied to accumulate and precipitate various characteristics, various observables and so on, as if certain types of events have been sieved over time to give prominence to some and take away the progress of others, hence forming this world around us. While many names have been given to this character, such as evolution or survival of the fittest, in the end they are all just algorithms that have played out in real-time over the years and centuries to form what is around us.
A very key difference we need to understand, as we see this playing out over the timeline, is that all of this happens in real-time, as opposed to the offline analysis on pre-set data that is done in our current state of data analytics and AI algorithms. We tend to train an algorithm to learn parameters, such as weights and constant values, using a pre-set function such as linear regression, or to learn the nodes of a neural net with multiple learned states that either pass data through or not, creating the illusion of variety in the output. This is then applied to other datasets to give us a predictable output based on what was learnt. This is offline analysis: analysis done on an offline, pre-set dataset, with an inflexible logic forced to be learnt from feedback against the expected output of the training set. Once learnt, this cannot be changed; all that can be done is to use it to predict. Another very important problem this introduces is that, given this inflexibility, the learnt nodes can become very unwieldy and unmanageable, making it hard to arrive at a good neural network by the time it has learnt a specified set of cases, and increasing the need for higher and higher processing power to learn more cases. Further, the more cases that are added to the same neural net, the more the nodes and levels increase, causing it to become slower and slower in the decision process. What needs to be understood here is that irrespective of whether a computer learns a bunch of processing nodes or some human hand-codes the same bunch of processing nodes, the underlying principle on which the program is built is the same, and hence all the drawbacks present in hand-coding apply even to the machine-learnt algorithms. We have not gone away from the age-old pre-coded offline logic being applied to various sets of data. To overcome the problems we find in these offline data-and-logic based systems, we need to rethink the basic principles and build truly real-time AI systems.
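The offline paradigm described above can be made concrete with a minimal sketch: parameters are learnt once from a pre-set dataset and then frozen. The toy dataset and the 1-D least-squares fit here are my own illustrative assumptions, not from the original text.

```python
# Minimal sketch of offline learning: fit once on a pre-set dataset,
# then the learnt parameters are frozen forever.

def fit_offline(xs, ys):
    """Learn slope w and intercept b by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b

# "Training" phase: runs once, on data collected in advance.
w, b = fit_offline([0, 1, 2, 3], [1, 3, 5, 7])   # underlying law: y = 2x + 1

# "Deployment" phase: the model can only predict; it never updates itself,
# no matter what new data it later sees.
predict = lambda x: w * x + b
```

Once `fit_offline` has run, nothing the model encounters afterwards changes `w` or `b`; this is the inflexibility the paragraph above points at.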
We have to learn from the way reality works around us, and by looking into ourselves to see how we work.
When we look at reality around us, we find this is not how it works. It starts with practically no data and builds its own logic in real-time, adjusting the algorithm as more data and events occur, decisions are taken and models change over time, giving us a very flexible multi-model output which can easily adapt to the situation occurring around us. What is even more important is that, in real-time, the situation to which the models have adapted becomes an input and a learning for the model, along with the output. The algorithms of such real-time systems cannot be the inflexible logic-oriented algorithms that we have currently. The rules that govern such a system, and the functions that drive it, must be a different set of basic principles and theorems. When we start searching for such a set of principles, we find it in the Sanskrit literature, easily transferred in the form of various mantras.
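As a contrast to the offline sketch, here is a hedged illustration of the real-time style described above: the model starts with no data and adjusts its parameters a little with every single event as it arrives. The online-gradient update rule and the learning rate are my own illustrative choices, not a prescription from the text.

```python
# Sketch of real-time (online) learning: no pre-set dataset, no frozen
# parameters; every arriving event nudges the model immediately.

class OnlineLinearModel:
    def __init__(self, lr=0.05):
        self.w, self.b, self.lr = 0.0, 0.0, lr   # starts knowing nothing

    def observe(self, x, y):
        """One event arrives: predict, measure the error, adjust at once."""
        err = (self.w * x + self.b) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

    def predict(self, x):
        return self.w * x + self.b

model = OnlineLinearModel()
for i in range(1000):               # an endless stream, consumed one by one
    x = i % 5
    model.observe(x, 2 * x + 1)     # underlying situation: y = 2x + 1
```

After enough events the model tracks the relationship in the stream, yet it never needed a stored training set, and it would re-adapt on its own if the stream's behaviour changed mid-way.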
One very interesting part of this whole play, which I think is the underlying basic principle, is that each and every "movable/immovable, or jagat as the Sanskrit literature calls it", from the largest to the smallest that we know, has followed these principles, and each and every accumulation has retained them. This is exactly what is told in the various Upanishads (Brhadaranyaka) in the mantra:
Om PUrNam-Adah PUrNam-Idam PUrNAt-PUrNam-Udacyate |
PUrNasya PUrNa-mAdAya PUrNam-Eva-AvashiShyate ||
Om Shaantih Shaantih Shaantih ||
This mantra is really very beautiful. The beauty of it is not revealed immediately after reading it. But, as we keep understanding more and more of the working of this reality, we find this mantra applying at every step, and then you are stunned. I have typically translated this w.r.t. "tatvAmasi", which in itself translates to "that is the same as this", or "I am the same as that", or many such translations that imply the same thing. It does not matter what you take in this observable world: it is the same underlying principle that has formed and is sustaining it, the same principle that even forms and sustains this "I" of me. From this context, if we pick the above mantra and translate pUrNa to "whole", then we end up with a translation that says: the OM of that whole is a whole by itself, and that whole of the whole renders; if you take that whole and split it, that which remains is also whole and what is split off is also whole; and this is when OM is in shAnti. This translation applies to so many things. If we consider "infinity", we find the translation applies to it: take infinity and render it, it is still infinity; take infinity and split it, you end up with two infinities; take two infinities and join them, you still have infinity. This is always the translation I understood of this mantra.
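The "split infinity and both halves are still infinite" reading can be illustrated playfully with Python's lazy infinite streams; the even/odd split and the interleaving join below are my own toy construction, not anything from the original text.

```python
# Toy illustration: an endless stream split into two endless halves,
# which join back into the same endless whole.
from itertools import count, islice

naturals = count(0)                            # 0, 1, 2, 3, ... with no end

evens = (n for n in count(0) if n % 2 == 0)    # one "half": still endless
odds  = (n for n in count(0) if n % 2 == 1)    # the other "half": also endless

def interleave(a, b):
    """Join the two halves back; the result is the same whole stream."""
    for x, y in zip(a, b):
        yield x
        yield y
```

Taking any finite window of the joined stream recovers the original whole, while each half on its own never runs out, matching the split/join property described above.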
Also note, I have left the translation of "shAnti" as is here. I used to think shAnti was peace; then I thought it was silence, and so the translations kept going. But finally I have come to the conclusion that I will leave it as shAnti, because it does not have a single-word translation in English. In fact, shAnti is also a very basic principle that applies to every flow of events in this universe, when we consider it as a real-time analysis flow. It is a state where there is a peaceful, silent, unstressed flow or progress along a path, for an event to go from that which has started becoming to the "become" state. And as much of the Sanskrit literature has said, once this flow has started, nothing can be done about it. It is going to finish in the state of "become" because of the continuous work done that is triggered, or what is simply termed karma, which as I have said before is highly misinterpreted. Again, this is another basic principle of a real-time system: work, once started, completes its path irrespective of what is done.
But, as I was translating and understanding the analysis and control of events around us, I found that this above mantra can be applied in a different sense, and in this context it is much more useful and powerful. I have always wondered why this mantra does not have an object applied to it. I now realise that it can be applied to just about anything that has similar properties, and it would work really well. In fact, this mantra is actually describing recursive logic, which is the only way infinite can be achieved from finite and finite from infinite, interchangeably. So, instead of applying this to "infinity", if we apply it to "equilibrium", then it becomes really very interesting. This is now telling me: if I take something in equilibrium and render it, it should still be in equilibrium; take something in equilibrium and split it, and the resulting two both have to be in equilibrium; take any two in equilibrium and join them, and the result is still in equilibrium! This then is the basis of how the reality around us is formed. Only that which is in equilibrium is rendered, and the equilibrium is such that it is composed of components that are all also in equilibrium. If this is true, then what does being in equilibrium mean?
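A toy model can make the split/join property concrete. Here I *assume* one possible definition of equilibrium ("deviations sum to zero") purely to demonstrate the property the mantra describes; the definition itself is mine, not the text's.

```python
# Toy model: "equilibrium" = deviations cancel out (sum to ~zero).
# The point is the recursive property: the whole, each split part,
# and any join of parts all satisfy the same definition.

def in_equilibrium(xs, tol=1e-9):
    return abs(sum(xs)) < tol

whole = [3, -3, 5, -5, 2, -2]          # balanced as a whole
assert in_equilibrium(whole)

# Split: each balanced pair is itself in equilibrium ...
parts = [[3, -3], [5, -5], [2, -2]]
assert all(in_equilibrium(p) for p in parts)

# ... and joining any two equilibria yields an equilibrium again.
joined = parts[0] + parts[1]
assert in_equilibrium(joined)
```

The design point is that the *same* predicate applies at every scale; any definition of equilibrium with this closure-under-split-and-join property would exhibit the recursive structure described above.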
Thus, in a real-time analytics system, if we apply this principle: if we take any dataset and can "define equilibrium for it", such that the same definition of equilibrium allows us to split the data into smaller and smaller pieces, and the equilibrium applies in the manner the above mantra states, then we should be able, in real-time, to learn algorithms on pieces that are out of equilibrium so as to bring them back to equilibrium, and the output of the whole stream should then be a stream in equilibrium. The learning of such an algorithm becomes the deduced knowledge, and this should also apply correctly to bigger and bigger pieces. For example, as I have described in the blog "Derivatives and integrals in real-time", by computing derivatives and integrals through depositing data continuously one over the other, we were able to get the integral of derivatives in real-time, without having an offline dataset on which to learn the parameters involved in the function definition. Such a logic can be applied from any small dataset to any large dataset, and we should be able to get the model represented in the form of data. Such should be the algorithms that are applied to real-time systems to get the correct AI logic. Also note that, in such an algorithm, models can get linked in a sequence, can be disparate models, and many such combinations can occur, allowing the system to scale horizontally rather than vertically, which is the reason for most problems of the offline data-analysis system.
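A minimal sketch of the streaming idea referenced here: derive a derivative and re-integrate it sample by sample, in a single pass, with no offline dataset. The finite-difference scheme below is my own assumption; the exact construction in the "Derivatives and integrals in real-time" blog may differ.

```python
# One-pass streaming sketch: each incoming sample yields a real-time
# derivative (difference from the previous sample) and the running
# integral of those derivatives. No dataset is ever stored or revisited.

def stream_derivative_integral(samples):
    """For each incoming sample, emit (derivative, running integral of it)."""
    prev = None
    integral = 0.0
    for x in samples:
        d = 0.0 if prev is None else x - prev   # real-time derivative
        integral += d                            # real-time integral of it
        prev = x
        yield d, integral

signal = [x * x for x in range(6)]               # 0, 1, 4, 9, 16, 25
out = list(stream_derivative_integral(signal))
```

The running integral of the derivatives reconstructs the stream up to its starting value, and the same tiny piece of logic works unchanged whether the stream is six samples or six billion, which is the horizontal-scaling property the paragraph above points at.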