Enabling, Enhancing, Engaging

One cannot engage what one cannot perceive, and in order to perceive, one must first be enabled. That ability to perceive must then be enhanced in order to bring illumination to illustration.

An Enabling Utility Process

Elegant Reasonism is an enabling utility process employing an analytical framework. It represents more than simple learning: owing to the recursive nature of systemic paradigm stacks, it works to identify and eliminate Langer Epistemology Errors (LEEs), which can ensnare learners within obfuscating logic traps. It is for these reasons that the Process Decision Checkpoint Flowchart is designed in the recursive nature that it is. Each paradigm must be mode shifted in order to illuminate many, if not most, if not all, higher-ordered ideas to illustration in treatise relative to the unified Universe. The more restful a given domain of discourse or its constituent detail set is, the more true this becomes. Two examples are art appreciation and economics, neither of which would traditionally be entertained in discussions about the unified Universe. Unification must vitally integrate (i.e., credibly empower) everything real, no matter how restful. We argue that Susanne K. Langer’s body of work enables the former example, whereas Ludwig von Mises’ body of work enables the latter. Both are listed on our extensive list of acknowledgements.

It is necessary but insufficient to recognize the process and methods which employ the analytic framework satisfying the requirements of unification. We must also understand why the traditional Encapsulated Interpretative Models (EIMs) fail to accomplish the same. The traditional EIMs M0, M1, M2, and M3 do not close to unification because they do not reconcile the criteria necessary to do so. For simplification, those criteria distill down to two fundamental issues:

  1. the ability to employ a common geometric basis point for all real objects in all reference frames, and
  2. fully coupling all real objects and forces within the same reference frames relative to one another.

The common thread throughout the traditional EIMs is the set of common constructs on which they are based. Those constructs were envisioned by Albert Einstein beginning about 1905, the year his papers began being accepted for publication. We pause in this retelling of history to point out that we are analyzing these events from the precipice of the present day, not with the awareness of that era; it is therefore not fair, proper, nor correct to pass judgement on anyone of that period using modern criteria. Setting aside for a moment all of the successes those insights provided civilization, we must also ask what other major domains of discourse were created, developed, and matured over the intervening years. One relevant here is information science, in all its aspects. Others include more insightful reviews of the philosophical arts. Greater understanding came with the recognition that unification demands the ability to integrate everything real along the entire entanglement gradient, and to reflect aspects of unification from both the emergence and convergence vectors of that gradient. We all know that the traditional EIMs listed above do not close to unification. Many hope that one day insights will be developed such that one or more of them do close, but that hope is in vain, exactly because the core constructs on which they are based preclude accomplishing even the two points listed above. When we speak of the traditionally envisioned EIMs we often describe the core constructs on which they are based. What is rarely discussed is the necessary interface between those constructs. For example, nothing real can transition the spacetime-mass interface without first being converted into energy, a fact governed by a formula needing no characterization here. Analysis of this scenario requires comprehension of the abstractions involved. We must then comprehend modern concepts and insights, notably that something which is logically correct may nevertheless remain physically different.
We must also recognize that mistaking abstractions for actual reality is a fatal epistemological error. Taking these insights, we realize that it was necessary to develop a net new inventory of abstractions which would allow the criteria associated with unification to be reconciled. We also need to remember that unification was never Einstein’s objective or goal when he created Relativity. He was, at the time, working on a completely different problem, which he solved through the logic of his thought experiments with brilliant execution.

Enabling The Path Forward


The situation then required a solution which had evaded all of civilization for over a century. More than that, it required an understanding of why civilization had missed the clues necessary to recognize the answer. Why had so many very smart people not seen it? That question was actually answered in the previous paragraph. Susanne K. Langer, in 1948, was the first person to codify the symbology of abstractions and to make the insightful point that mistaking abstractions for actual reality is a fatal epistemological error. We now call these types of mistakes Langer Epistemology Errors (LEEs) in her honor for exactly that reason. Because we all thought those abstractions were real, we stopped asking questions that probed beyond those barriers, and deeper insights eluded us all for essentially those reasons. The more successful we became using the science that ensued, the tighter that trap became. We were in essence blinded by our own successes, which had been instantiated by commission of Langer Epistemology Errors by essentially every human on the planet. The only one who recognized the issue well enough to write about it was Susanne K. Langer. The irony is that the book in which she made that insight appeared in no physics journal: it was a book about art appreciation, and yet it holds keys to accomplishing unification. If there was ever an example of the need for a multidisciplinary approach to education, this is it.

McGowen (assigned inventor of Elegant Reasonism) was by sheer coincidence ideally suited to reconcile these issues. He studied aerospace engineering in school but was hired into the information technology industry before he finished formal academic training. Over the next three to four decades he worked on many key projects with global enterprises. He was trained as a business process re-engineering professional with tenure in education, knowledge management, business planning, global market intelligence, and more. Certainly the technologies developed during those times were no strangers to him. It was then no great surprise that he was able to draw on that breadth of experience to tackle the issues the 2004 systems review required. Trained in systems engineering, McGowen was well versed in logical and physical views of systems. He was also familiar with high energy physics from studies into exotic space propulsion systems. Everything clicked into place for him one fateful day: standing up from his office chair holding one of Einstein’s papers, he muttered ‘that makes sense’. Originally he intended simply to refill his coffee cup, but he kept repeating that same phrase over and over again until it became ‘that makes logical sense’. In that instant he recognized the logical nature of what Einstein had created, and the implication relative to mistaking abstractions for reality. In that instant he also recognized the associated freedom to pursue an alternative means to reconcile the fundamental issues. That alternative, however, could not divorce itself from any previous successes. Having spent about eight years in corporate education roles, and being an open water scuba instructor as an avocation, he was well versed in instructional systems design.
His corporate tenure had at one time found him one of three people responsible for implementing Baldrige quality methods across that organization, an effort which resulted in a Bronze-level national award. Teaching such programs brought awareness that failure to comprehend and apply the associated concepts can cause harm or worse. Any new approach had to:

  • simultaneously preserve what has already been accomplished {preserved through P1.0’s historical review},
  • recognize all physical phenomena existing, new and even as yet undiscovered {Concept Sieves: EMCS01, EMCS02, & EMCS03, …},
  • translate all experimental data viewed as empirical evidence {Mode Shifting considering PDCF},
  • comprehend and explain epistemologically how all this is reconciled {TBA – in progress – recovering from a catastrophic computer failure},
  • conform to industry accepted standards in order to justify strategic business planning value instantiation, and
  • create education road maps so that our progeny could follow this path into an enduring future {Elegant Reasonism employs Bayesian Analytics for this purpose}.

What was known at the time was that traditional thinking did not close to unification. The missing pieces were completely and utterly obfuscated, and the question everyone wanted answered was why and how that happened. Implications of all that came later, while working the six listed issues. Two of the main technologies ultimately integrated came into being almost simultaneously out of McGowen’s tenure in information sciences. The ISO 9001 Unification Tool is a relational database employed to codify quantified abstractions employed in modeling. Translation Matrices were a derivative of the translation tables employed by Internet service provider connection point servers. When you sit at home and connect your computer to the Internet, it connects through these types of systems, each of which holds one of these tables. Those computers use the tables to translate your human-readable URL into something the machines can use to connect clients and servers to each other. McGowen, having worked with such systems, knew immediately that similar methods could be employed on this problem. The initial juxtaposition of traditional thinking relative to various investigative paradigms became what we now call the 2D Articulation Layer of Translation Matrices, which ultimately became the backbone of the analytical framework employed by the processes and methods of Elegant Reasonism.
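The translation-table analogy above can be sketched in a few lines. This is only an illustrative toy, not the actual Translation Matrix technology: the hostnames and addresses below are hypothetical placeholders, and real Internet resolution consults distributed DNS records rather than a single in-memory table.

```python
# Illustrative sketch of a name-to-address translation table, analogous to
# the lookup tables described above. Entries here are placeholders.
translation_table = {
    "localhost": "127.0.0.1",       # standard loopback entry
    "example.invalid": "192.0.2.1",  # hypothetical documentation address
}

def resolve(hostname: str) -> str:
    """Translate a human-readable name into a machine-usable address."""
    try:
        return translation_table[hostname]
    except KeyError:
        # A real resolver would query upstream servers; the toy just fails.
        raise LookupError(f"no translation for {hostname!r}")

print(resolve("localhost"))  # → 127.0.0.1
```

The point of the analogy is structural: one well-indexed table juxtaposes two vocabularies (human-readable vs. machine-usable), just as a Translation Matrix is described as juxtaposing paradigms across EIMs.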

The past failures had not recognized Langer Epistemology Errors (LEEs) because they had never really considered encapsulation/iteration rules. Everyone, including McGowen, was so familiar with commission of Langer Epistemology Errors (LEEs) that “tweaking theories” was never given a second thought. Here we see a short clip of one of the most notable scientists in recent history, Richard Feynman, discussing exactly that in a 1950s lecture about knowing vs understanding:


Illuminating Non-Obvious Insights

Insights associated with unification are more than ‘non-obvious’; they are utterly obfuscated by traditional epistemologies. Part of the reason for that obfuscation is the commission of Langer Epistemology Errors (LEEs), not only by scientists but by liberal arts philosophers as well. ‘Tweaking’ of models instantiating the unified Universe should only be allowed in iterated, enumerated EIM instances consistent with the industry standard ISO 9001 QMS. The nature of tenets in belief systems changes completely when EIMs change. Only when such changes are properly quantified and codified through such approaches are the systemic implications illuminated and subsequently made available for illustrative purposes. That is why the Process Decision Checkpoint Flowchart is designed for recursive review. Insight development is anything but linear. There are considerable insights that are not made available until well into the process. Only when holistic review has been enabled by the process can some aspects even be perceived, much less entertained. It therefore became more than necessary to consider unification criteria as a predicate priority entering science, not one taken up after immersion. EIMs establish fundamental foundational context for all interpretative considerations relative to and respective of models. Even Quality Management Systems standards had to be mode shifted in order to properly assess model effectiveness beyond encapsulation boundaries. That is why the Elegant Reasonism rules associated with execution of Translation Matrices vary analytical layer by analytical layer. Only in the lower analytical layers are we allowed to view insights that become available because we stand on a precipice capable of perceiving a plurality of EIM manifestations, all instantiating exactly the same Paradigm of Interest/Nature (POI/N). That is not something which can be taught in rote fashion.
We must understand the full implications not just academically but through cognitive application of the process, in order to conversationally employ critical, situationally aware thinking. This investigative environment is highly dynamic and requires practitioners to recognize the systemic nature of core constituent constructs in discussions which are far removed from, and very much downstream of, those foundations. For these reasons rote tactics will not prevail; the process and methods must be exercised through the analytical framework in order to properly assess investigative requirements across encapsulation boundaries.

For example, take any of the traditional epistemologies together with the collective set of traditional Encapsulated Interpretative Models (EIMs) and try to write a single cogent paragraph from which everything real may be made manifest, either directly or indirectly, no matter how restful. This task is impossible for those traditional epistemologies employing traditional EIMs, exactly because they not only cannot close to unification but never will. They will never close to unification because their core constituent constructs preclude accomplishing that task. The Emergence Model’s M5 logical view, on the other hand, does have a single cogent description.

Evidence Chain Linkage


Evidence chains link concepts reflecting reality across the entire entanglement gradient, from the smallest scales to the largest, in both emergence and convergence vectors, and across the full analytics involved, Encapsulated Interpretative Model (EIM) to EIM. Because encapsulation precludes EIMs from directly referencing one another, we must perform such comparative analytics in juxtaposition within the analytic framework exploited by the utility process. Each encapsulated EIM establishes fundamental interpretative context for 100% of the variables it compartmentalizes and otherwise intrinsically constrains. These EIMs, relative to and respective of Paradigms of Interest/Nature (POI/N), will each manifest or instantiate a given POI/N, very likely employing a unique pattern. Studying those pattern differences as part of the process and methods is likely to engage investigative teams for quite some time to come. Because many of the aspects lie beyond the threshold of human perception due to issues of scale, they are not necessarily aspects which can be ‘taught’. They must be developed through investigative process and conform extensively to established standards.

Enabling Mode Shifting


Exactly because the core constructs of any given EIM manifest context that constrains and compartmentalizes perception, the utility process of Elegant Reasonism enables perceptions beyond the threshold of the human sensory systems exploited by empiricism. Empiricism and empirical evidence are necessary but insufficient due to the potential for commission of Langer Epistemology Errors (LEEs). The article LEEs Empiricism Trap discusses many of these factors. Without the process one lacks the opposing view of other EIMs. Without the process, the evidence chain patterns of a plurality of EIMs are completely absent. Without the process one lacks analytical justification for assertions. Without the process, compliance is meaningless. Without the process there is no way to mode shift metrics illuminating or illustrating checkpoints in the process flow.

Teaching Absent The Process Is A Mistake

Teaching absent the process is a mistake of arrogant proportions. It presumes that 100% of all factors are already known. What it ignores altogether is how the answers to standard root cause analysis questions shift EIM to EIM. If we ask students entrenched in M1 thinking, for example, “why are Newton’s laws true?”, they might answer with the mathematical formulas for the various laws. If we ask that same question of someone immersed in M5 thinking, we will get references back to the cogent description of M5, which illuminates those fundamental reasons relative to the constituent core constructs of that EIM. The difference in depth of understanding between the two is profound. Over time humanity may learn to teach this better; civilization will become replete with cognition of the process, and these issues may then seem trivial or trite, but we are very far removed from such critically situationally aware thinking. The Elegant Reasonism utility process demands that a plurality of EIMs be employed for these and many more reasons. It is necessary not just to illustrate closure, but to illuminate why other EIMs miss that mark for whatever reason. It is necessary to statistically weight those failing concepts in order that effective education road maps might be created across curricula spanning all educational levels.
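The statistical weighting of concepts mentioned above, and the Bayesian analytics cited earlier for education road maps, can be illustrated with a generic Bayes-rule update. This is only a sketch of the general technique; the text does not specify the actual analytics Elegant Reasonism employs, and the likelihood numbers below are hypothetical.

```python
def bayes_update(prior: float,
                 p_evidence_given_h: float,
                 p_evidence_given_not_h: float) -> float:
    """Return P(H | evidence) via Bayes' rule for a binary hypothesis H."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / marginal

# Hypothetical example: a concept starts at 50% weight; evidence that is
# four times as likely under the concept as against it raises that weight.
posterior = bayes_update(prior=0.5,
                         p_evidence_given_h=0.8,
                         p_evidence_given_not_h=0.2)
print(round(posterior, 2))  # → 0.8
```

Repeating such updates as evidence accumulates is what lets failing concepts be down-weighted over time, which is the general idea behind using weighted concepts to order curricula.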

Mode Shifting Einstein

What Einstein created with the body of his work, together with 100% of all subsequent empirical experiments, was absolutely logically correct, and therein lies the strategic clue necessary to accomplish unification using the utility process that is Elegant Reasonism. Read “Mode Shifting Einstein 01” to begin your journey in understanding the process and its framework. Transformational leadership techniques often illustrate that a large portion of paradigm shifts occur through the simple comprehension that a goal can be obtained, even if by competitors. Working to enhance what has been accomplished by others often leads to greater internal comprehension and individual cognition. Unification was never a goal or objective of Einstein’s original work. He was working on the reasons the Michelson-Morley interferometer experiment results held true, and he did that brilliantly. Everything we do here is in celebration of that genius and works to extend it into an enduring legacy.

Paradigm Shifts Are Individualistic

Each individual has different life experiences and must effect the necessary paradigm shifts in order to gain the precipice of unification, and each is accountable only to themselves for the effectiveness they attain. We can provide you the fodder for contemplative consideration, but only you can integrate it into your thinking. Ironically, as you accomplish the necessary paradigm shifts you will be changing your thought patterns. That is to say, you will be changing the critical situational awareness thinking employed by your brain, which will in turn shift the interpretation of the sensory inputs provided by the central nervous system (CNS) across all Brodmann areas. This ‘realignment’ of paradigm patterns is something we call Neural Network Reconfiguration by Programming (NNRP). NNRP is nature’s way of providing automatic, enhanced recognition of phenomena and self-clarifying investigative processes.

Historical Review Of These Issues

These issues are not new. Plato discussed the ramifications of similar issues in Book 7 of The Republic, in his Allegory of the Cave.


Elegant Reasonism practitioners are encouraged to understand that those with whom they engage on these topics will likely have similar reactions, for all the same reasons. Expect your audiences to transition through the industry accepted stages of grief as they work to cope with these new insights. Many people are vested in status quo nests quite familiar to them. Elegant Reasonism constitutes a disruptive technology to those people, and anyone wielding this technology should do so transformationally, with great empathy and compassion.

Executive Summary

Transformative insights have been known to philosophers and influencers for thousands of years, as have their implications. In that regard Elegant Reasonism is anything but new. However, what we now realize is that it takes this process, not just the insights it made available to us, to mode shift what we think we know into alignment with the unified Universe. We must remain vigilant and diligent, and execute these processes and methods with adherence to compliance rules and industry accepted standards. The implications could not be greater for shareholders, investors, or global enterprise, for all the reasons discussed in In Unification’s Wake, Part 05: Business Impact, and likely a great deal more than we thought of at the time.




#ElegantReasonism #EmergenceModel #Unification #Empower #Enable #Enhance #Engage #Perceive #Interpretative #Encapsulation #EIM



By Charles McGowen

Charles C McGowen is a strategic business consultant. He studied Aerospace Engineering at Auburn University '76-'78. IBM hired him early in '79 where he worked until 2003. He is now Chairman & CEO of SolREI, Inc. ORCID: https://orcid.org/0000-0003-2439-1707