Big Data

Elegant Reasonism Based Deep Learning

Collect enough information from enough disparate sources and you can build a picture from the emerging patterns. That, in essence, is the premise of big data. Stare at the image above. Do you know what its Achilles heel is? The short answer is context. These systems are utterly ignorant of the implications stemming from commission of Langer Epistemology Errors (LEEs). What all those folks using big data are presuming is that fundamental interpretative context will not change, and they could not be more wrong. They probably ought to read LEEs Empiricism Trap. Deep Learning systems will follow a similar path if they too commit Langer Epistemology Errors (LEEs) and fail to grasp the implications of Elegant Reasonism. Global enterprise would do well to fully comprehend In Unification’s Wake, Part 05: Business Impact.

Rhetorically, stare at the above image and ask yourself: if nothing in those databases closes to unification, exactly what are the implications for the conclusions I am drawing based on those analytics? The answer is that you are being misled if you do not have the proper process and framework guiding your analytics, supporting an epistemology which seeks truth as a function of the unified Universe. Do you even comprehend how to establish anchor points to evidence chains manifesting relationship patterns? If you cannot mode shift your patterns, what makes you think you will understand insights produced from beyond encapsulation boundaries? For all intents and purposes you are in a round room looking for a virtual object in a corner, and you have no clue where the exit is, but you think if you run faster you will ultimately find your prize. Newsflash: no, you won’t. To reconcile such issues you must be ready to integrate everything real, and most have no clue what that means.

Here’s another rhetorical question. Why didn’t the best Artificial Intelligence (AI) systems on Earth invent Elegant Reasonism? The answer is rooted in philosophical interpretative context. Humans might be able to program AI systems. They might be able to exploit big data and deep learning. AI did not invent Elegant Reasonism for the same reason it took humans 2,000 years to accomplish unification. The problem is simultaneity of truth. If you have two theories whose consequences are all identical, and both agree with experiment, how does science determine which is the correct theory? The answer is that, historically, this is not something we were prepared to deal with. Here’s a quick presentation of Richard P. Feynman discussing exactly this point from his 1950s lecture on knowing vs understanding.


Elegant Reasonism brings a new set of questions and forces discussions about fundamental interpretative contexts and whether or not they close to unification as a philosophical predicate priority consideration entering science. Quite suddenly we have more than one type of tool in the toolbox. Strategically at issue here is which set of insights is enabled by the EIM establishing fundamental context. Another rhetorical question is whether such systems instantiate a self-reinforcing delusion or are aligned with truth as a function of the unified Universe.

#ElegantReasonism #EmergenceModel #Unification #UnifiedUniverse #BigData #DeepLearning #ArtificialIntelligence #AI


By Charles McGowen

Charles C. McGowen is a strategic business consultant. He studied Aerospace Engineering at Auburn University '76-'78. IBM hired him early in '79, and he worked there until 2003. He is now Chairman & CEO of SolREI, Inc. ORCID: