What Constitutes Proof In Context of Elegant Reasonism?

The process of developing an Elegant Reasonism proof employs credible evidence chains that can be mode shifted in fully compliant context of Elegant Reasonism, reflect the unified Universe, and remain consistent with scientific testing methods that might constitute evidentiary proof. The challenge at the moment is whether those methods are up to that challenge. We have discussed Mode Shifting Evidence in the past.



This very rapidly brings us to the basic communication of investigative insights and awareness. Our very strong recommendation to all investigators and students is not to attempt to convey insights absent the process and technological framework used to develop them. Only when others understand how those insights were developed will they respect derivatives of that process.



This leads us to something of a treacherous situation for global enterprise, best characterized by caveat emptor, except in this case it is not the buyer who must beware but the apathetic. The Latin phrase segnis caveant doctiores, meaning ‘let the apathetic beware those more knowledgeable,’ comes to mind. The only defense anyone, anywhere, has is wielding Elegant Reasonism to greater effect through greater effectiveness.



Here we would like to present a hypothetical debate on the subject topic.

Implied in this question is that the proof in question reflects the unified Universe, since that is the source of truth Elegant Reasonism seeks. Today, circa 2022 as this article is published, most if not all of those concerned with what constitutes evidence require proof, and that is generally construed as empirical truth. Part of the problem there is LEEs Empiricism Traps, but we must explore both the process developing proof and the result constituting evidence. Another way of saying all that is that we have to epistemologically justify the evidence such that everyone construes it as proof. These things are as important to those in the judicial professions as they are to scientists, business enterprise, and industry.

We need only point to history to show that traditional approaches to science failed to epistemologically accomplish unification. The reason for that failure has been essentially obfuscated by Langer Epistemology Errors (LEEs). Sociologists will likely be fascinated watching the answers to the standard root cause analysis questions mode shift underlying meanings in debates at all sorts of trials. Trials here do not mean only legal courtroom proceedings but also scientific experimentation and other proceedings essential to establishing effective metrics and standards, all of which must be effectively mode shifted. The implication is that everyone must be able to effectively navigate the Process Decision Checkpoint Flowchart in order to recognize value derivation in context of the unified Universe. Philosophically we must also mode shift axiology, the study of value derivation.

Unification demands an ability to integrate everything real. Philosophically that means an integration of philosophy with itself and with science, in addition to simultaneous reconciliation of everything real. That is a pretty tall order. Many will emotionally desire to cite successes from the historical manner of thinking as they struggle with their own paradigm shifts. They may very well transition through the industry standard stages of grief while coping with those shifts. The underlying infrastructure of mathematics does not change with Elegant Reasonism, as discussed in In Unification’s Wake, Part 02: Mathematical Proofs. What does change are the parameter relationships being operated upon. Those relationships change because fundamental foundational context changes EIM to EIM. The process and framework are designed to epistemologically exploit those distinctions.

Another aspect of proofs and evidence is justification and value derivation. Typically we do not justify what we do not care about, and items of no value carry little reason to care. Because EIMs establish basic interpretive context, EIMs which do not close to unification cannot perceive beyond the boundaries of that context. Such EIMs are self encapsulating. All EIMs are considered encapsulated, by definition and rule. No mathematics within one encapsulation boundary can operate on parameters established by a different EIM; therefore a different type of framework had to be developed for that purpose. We call that framework Translation Matrices. Strategically at issue here is communicating insights EIM to EIM in order to axiologically convey value derivation relative to a given investigation’s goals and objectives. If some party in a conversation is not familiar with, or does not believe in (e.g. is in denial about), Elegant Reasonism, then that person will never penetrate LEEs Gate on the Process Decision Checkpoint Flowchart, and you will never convince them the insights being presented hold evidentiary value. The reason is that their assumed EIM has no basis to establish context in which value relative to the unified Universe can be recognized. What must happen is the simultaneous recognition of the relative value of all EIMs, statistically weighted relative to and respective of the unified Universe, and to do that requires epistemological comprehension of the utility process and the framework it employs.
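The encapsulation point above can be caricatured in code. The sketch below is purely illustrative: the class names `EIM` and `TranslationMatrix`, and the example interpretations of the parameter ‘c’, are hypothetical stand-ins of this author’s devising, not the proprietary framework itself. What it shows is only the structural idea: no model can interpret a parameter it has no context for, so comparison must happen in a structure sitting outside every encapsulation boundary.

```python
# Illustrative sketch only: these names are hypothetical stand-ins,
# not the actual Translation Matrices framework.

class EIM:
    """An encapsulated interpretive model: a parameter only has meaning
    inside the interpretive context that defines it."""
    def __init__(self, name, context):
        self.name = name
        self.context = context  # parameter -> interpretation within this EIM

    def interpret(self, parameter):
        # An EIM cannot perceive beyond its own encapsulation boundary.
        if parameter not in self.context:
            raise KeyError(f"{self.name} has no context for {parameter!r}")
        return self.context[parameter]

class TranslationMatrix:
    """Sits outside all EIM encapsulation boundaries and juxtaposes each
    EIM's interpretation of the same parameter, side by side."""
    def __init__(self, eims):
        self.eims = eims

    def mode_shift(self, parameter):
        # Collect every EIM's reading of the parameter in one structure.
        return {eim.name: eim.context.get(parameter, "<no basis in this EIM>")
                for eim in self.eims}

# Example interpretations paraphrased from this article's discussion of 'c':
m1 = EIM("M1", {"c": "speed-of-light limit imposed by spacetime"})
m5 = EIM("M5", {"c": "Severance: a property of the system producing light"})

matrix = TranslationMatrix([m1, m5])
print(matrix.mode_shift("c"))
```

The design choice worth noticing is that `mode_shift` lives on the matrix, not on either model: neither EIM can perform the comparison from inside its own boundary.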

Yet another issue is posed by those who want to tweak some aspect of existing models believing that when they do so everything will fall into place. What these people fail to realize is that the totality of context established by the constructs of a given EIM must philosophically close to unification as a predicate priority consideration. For example, status quo thinking does not just fail to close to unification; it cannot close, exactly because its core constituent constructs preclude closure. Specifically, unification requires two key capabilities: the ability to employ a common geometric basis point for all real objects in every (i.e. all) reference frames, and the ability to fully couple all forces relative to and respective of all of those same real objects. Nothing real can transition the spacetime-mass interface without first being converted to energy. That hard, cold fact is governed by an equation needing no introduction here, and it precludes accomplishing the two objectives required by unification; therefore, no EIM employing those constructs in that manner will ever close to unification, and it does not matter how much money, time, or resources are thrown at a problem immersed in that EIM. What must happen is that those teams mode shift their problem using an Elegant Reasonism based investigation, and they must do it effectively. Formal investigations will likely employ ISO 9001 Quality Management Systems (QMS) standards and other updated industry standards to accomplish this task so others may duplicate their efforts consistent with scientific methods.

Elegant Reasonism based evidence constituting proof must be presented consistent with these principles.

For Example

These bullet point insights were developed through the process and mode shifted to ultimately reconcile unification; isolated here for you the reader, and likely out of context, they may make no sense until they are engaged by the process and framework and then subjected to analytical scrutiny.

  • Isolated material particles are abstractions, their properties being definable and observable only through their interaction with other systems. ~ Niels Bohr
  • Mistaking abstractions for actual reality is a fatal epistemological error. ~ Susanne K Langer
  • Nothing real can transition the spacetime-mass interface without conversion to energy thus precluding a single, common, geometric basis point for all real objects in every reference frame
  • Reference frames, as historically defined, can not fully couple all forces with all real objects across the different types of reference frames used by tradition
  • Schrödinger’s Cat is dead
  • You think unification is exclusively about theoretical astrophysics? What does art appreciation have to do with unification? What does global economics have to do with unification?
  • What is consciousness? How do unification concepts link to central nervous systems (CNS) and our brains?
  • Rhetorically we ask if you know whether Einstein considered mass as variant or invariant? What are the implications and why?
  • The inability to reconcile black hole growth with rapid expansion of the Big Bang ultimately unravels and eliminates the inflationary theory
  • The speed of light is constant because of the system producing it, not because of an external dimensional limitation. Eliminate the limitation and Hubble is vindicated. Furthermore, mode shifting the constant ‘c’ shifts its definition from the speed of light to that of Severance. Defined in this way, the term Rapidity is also reshaped and becomes the cosmological factor. That in turn redefines both the age and size of our part of the unified Universe. Architectures of mass then provide structure at quantum levels. Quite suddenly quantum velocity vectors are understood
  • Inertial reference frames mode shift into Event Frames describing interacting systems and local frames describing isolated systems. Both may be nested.
  • Galaxies are flying away from one another because at distance gravitons suffer pole reversal. When this occurs galaxies present like poles to one another. When that happens they are repelled from each other. This sets up a condition where the cycle repeats incessantly.
  • Those statistical circles in the WMAP data are a curious artifact that can’t be reconciled? We think we reconciled them. Did you see Shoemaker-Levy 9 on its inbound vector to Jupiter? Same thing, just mode shifted.
  • Want to know why quantum computing delivers the results that it does? How many billions have been spent on that little endeavor?
  • Can you reconcile the Drake Equation with the Fermi Paradox?
  • Life is likely replete and common throughout the unified Universe. We are so not alone that it begs the question of why they are so hard to communicate with. Ah… Severance.
  • Is interstellar travel possible?
  • Can you explain why the arrow of time is always positive?
  • Holistically, taking a compendium of data, experiments, and insights, the unified Universe is suddenly characterized Bang to Bang
  • The fundamental constituents of the EIM closing to unification no longer pose an inhibitor to accomplishing unification
  • Does this mean we have to start completely over or can we mode shift what it is we think we already know? The Emergence Model closes and enables mode shifting through Elegant Reasonism. The process does not dictate you must use that EIM. The process only demands an EIM that closes must be included in your thinking. If you can come up with a better EIM that closes than The Emergence Model – knock yourself out and go for it. In the interim, that EIM is available to investigators to mode shift investigative parameters into alignment with the unified Universe.
  • Every experiment tested – mode shifts into alignment with the unified Universe
  • Bell Inequality Tests are no longer viewed as having succumbed to poorly understood glitches requiring elaborate explanations. What this means to average people not versed in science is that real objects can travel faster than the speed of light; those limitations are relegated to the realm of virtual constructs
  • Structure equals properties, and properties imply intrinsic structure; structure equals architectures {of mass}
  • Can you ‘tweak’ models or do you have to systemically gang switch paradigms? When you do that to an interpretative model what do you also have to do regarding your thinking? Do you as an individual have to conduct paradigm shifts along with such changes in order to recognize pattern changes? You bet you do. The question is how to epistemologically integrate all this holistically.
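One bullet above asks whether the Drake Equation can be reconciled with the Fermi Paradox. For readers unfamiliar with it, the Drake Equation itself is standard and easy to state in code. The parameter values below are illustrative placeholders only, not endorsed estimates; every term is highly uncertain, which is precisely what makes the reconciliation question interesting.

```python
def drake_equation(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* · fp · ne · fl · fi · fc · L — the estimated number of
    communicative civilizations in our galaxy (standard Drake form)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative placeholder inputs (not endorsed estimates):
N = drake_equation(R_star=1.0,   # star formation rate (stars/year)
                   f_p=0.5,      # fraction of stars with planets
                   n_e=2,        # habitable planets per such system
                   f_l=1.0,      # fraction on which life arises
                   f_i=0.1,      # fraction developing intelligence
                   f_c=0.1,      # fraction that release detectable signals
                   L=10_000)     # years such signals persist

print(N)  # roughly 100 under these placeholder inputs, yet the sky is quiet
```

The tension the bullet points at: modest inputs still yield a large N, while observation gives us silence, and that gap is the Fermi Paradox.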

Now if you think any single bullet point above is hard to reconcile, imagine reconciling 100% of them simultaneously. Accomplishing unification was no small feat.

What happens in science when two theories are simultaneously found congruent with experiment and all of their consequences are the same? How do you know which theory is more correct? Here Richard P Feynman discusses knowing vs understanding. The observation he makes about science demands holistic comprehension within the requirements handed to us by the unified Universe.



What must be remembered is that before the term ‘science’ was invented and employed, those studies, efforts, and experiments were referred to as the philosophy of nature. Understanding that unification demands complete integration of everything real, we are forced, when science cannot determine a course of action, to swim upstream (or back to our roots) and ask some very hard questions. There is a very long list of bullets we could put here, and those vested in status quo thinking will not accept them until they are confronted epistemologically with the utility process and framework; astoundingly, when we do that, all of these various concepts and constructs dovetail into alignment with the unified Universe. Expect those strongly vested in status quo thinking to transition through the standard stages of grief while coping with all of this. Consequently, considered in this context, we have very high confidence that unification has been accomplished.

In hindsight, we are suddenly confronted with something else more stunning. Many, if not most, if not all, expected these insights to be developed by theoretical astrophysicists. They were not. They were developed by an entrepreneurial small business, and the reason for that is just as simple. We were not vested in institutionalized ideas. Out of the box thinking was our normal environment. It did not bother us in the slightest to ask hard questions. We have been performing systems engineering functions for decades, and with some of the most complex systems on the planet. That is why we saw the solution when so many others failed. We understood QMS standards and quality metrics. We understood processes linking out to the manufacture of real hardware, which then had to run software, which often dealt with virtualized concepts and realms. If one had to pick why we saw this evidence when others did not, these would be the reasons.

Communicating all this to those unfamiliar with the issues is akin to the story Plato told in book seven of The Republic in his allegory, The Cave. He wrote that over 2,000 years ago. These are not new problems by any stretch. What is new are the tools, methods, and processes to minimize their obfuscating tendencies. We now have an approach, consistent with the scientific method, to improve communications, justify results and insights, and then build on those insights. Even more astounding is that when we understand the nature of paradigm pattern shifts in neural networks we are confronted with something we call NNRP. The self-clarifying nature of Elegant Reasonism also happens to begin bringing our very neural paths into alignment. What does that mean? It means that as you comprehend these various factors you will naturally improve your ability to integrate and employ them conversationally and dynamically in your discussions and debates. How’s that for cool?! It’s almost like the unified Universe wanted you to have these capabilities all along.

Unification demands an integration of everything real. Mode shifting evidence through the utility process and framework to fully compliant treatise will present civilization the capability and opportunity to perceive and engage the unified Universe. You will be able to see as you look. Penetratingly so and across all scales of the entanglement gradient. Understood in this way our mathematics are more effective. We understand why and how EIM encapsulation boundaries must be dealt with in order to enable effective analysis. Logic traps disappear. Seemingly incongruous scenarios evaporate. Paradoxes vanish and evidence we already have in hand in many cases is rendered with clarity. Elegant Reasonism is the path forward.

The Original Systems Review

The notes from the original systems review are available through this network presence. First time readers here should know that unification was not the original objective, but once certain factors were realized it became a prime motivator to keep going. The discussion here, though, is how that motivation manifested itself and why we took the actions we did. Perimeters were the initial motivators, and what we originally tried to understand dealt with analysis procedures and how to mode shift what we found in the field. Much of that can be found in the notes from the original systems review. To us those field reviews only strengthened what we were learning at the time. Expectations were validated. We stopped looking down for answers and started looking up and out.

Penetrating to Industry

There are many aspects of this topic we consider to be proprietary, but at the end of the day it is really what we are all after. That is to say: ok, ‘so what?’ What difference does this really make? Answering that question will likely take the fullness of time and the curiosity of our species. What it means immediately is that because of Elegant Reasonism we have new tools in the toolbox. Those new tools, however, also mean that the normal aspects of business process reengineering and metrics must now be mode shifted along with the evidence we think we are looking at. Our experience is that those activities almost always result in a shift not only in the thinking behind them but in what is necessary to engage reality with the new insights. Pretty much everything is turned inside out when we stop looking at the medium and focus on the real architectures in the reference frame manifesting objects.

Do we now have equations representing evidence chain linkage between all fundamental forces? In a word, yes we do, but that is no longer the whole story either. The answer to why that is true has to do with understanding what constitutes the architectures of mass representing real objects in their reference frames. Right now we have no clue what those architectures actually look like. We can look at the cogent description of M5, for example, and see the high level relationships created there, and while necessary in first recognition it lacks sufficient detail to experimentally explore those realms previously obfuscated by LEEs Empiricism Traps. What is now needed are detailed understandings of the architectures manifesting not just real everyday objects but the subatomic particles comprising them. In fact we need detailed architectures for every real particle. Ironically we also need to increase our understanding of the words and language we use to engage those explorations, and we must be careful in those proceedings. Terms like virtual do not mean particles are not real, only that they are very short lived because of their destructive resonance. These, though, are just examples of the work now needed. We have asked the National Science Foundation to allow and help us build the most powerful R&D computing platform on Earth for exactly these types of explorations, in order to fold the resulting insights back across industry and global enterprise. We believe doing so will revolutionize industry and revitalize stagnant markets.




#ElegantReasonism #EmergenceModel #Unification #Evidence #Proof #Mathematics #EvidenceChains #Logic #Philosophy #Science #CNS #Brain #Epistemology

By Charles McGowen

Charles C McGowen is a strategic business consultant. He studied Aerospace Engineering at Auburn University '76-'78. IBM hired him early in '79 where he worked until 2003. He is now Chairman & CEO of SolREI, Inc. ORCID: https://orcid.org/0000-0003-2439-1707