Argument Structure

Mode Shifting Arguments

Langer Epistemology Errors (LEEs)

This article attempts to begin the discussion needed to mode shift standard information as presented, for example, on the Stanford Encyclopedia of Philosophy. Others will need to complete this journey and story. The single common thread throughout every argument that has ever taken place in human history is exactly that: human physiology and the constituent systems involved in the processes. This is a primary reason why, epistemologically, Empiricism has ruled science for centuries, and never more so than in the last century or so. Part of the reason for the growing schism between philosophy and science has been the technological advancements associated with experiments and the laboratories in which they were conducted.

The global challenge is making Langer Epistemology Errors and not recognizing the implications and ramifications of that commission. Mistaking abstractions for actual reality is a fatal epistemological error.

Those who simply want to roll their sleeves up and jump in on this would do well to pause and take a breath for a moment. Langer Epistemology Errors (LEEs) are epistemological errors that occur because abstractions were mistaken for actual reality. The implications and ramifications of such errors are, according to Susanne K Langer, fatal. We must then remember that Elegant Reasonism is specifically designed to minimize and hopefully eliminate LEEs. Does this mean abstractions can’t be used? No, it doesn’t mean that. It means you must not ever forget that abstractions have a tendency to insulate and isolate higher ordered ideas from lower ordered detail. When related abstractions are stacked, and LEEs are committed, fade-of-replication issues begin creeping into our thinking, experiments, and instrumentality.

Elegant Reasonism is a utility process employing a technological framework supporting an epistemology whose goal and objective is to reflect truth as a function of the unified Universe, taken as a philosophical predicate priority consideration entering science. It produced the first fully compliant Encapsulated Interpretative Model (EIM) closing to unification: The Emergence Model, whose logical view M5 is instantiated by its real view M6. For reasons beyond the scope of this page, our focus and attention at this moment in history is on M5 and not on M6 until such time as civilization proves its maturity. Those reasons will become clear in the fullness of time. For right here, right now, the discussion is how what we did philosophically impacts the science of argument.

There is another factor emerging on the scene of late, driven by artificial intelligence systems development: a metric of the amount of time between synaptic patterns and concept cognition. The metric helps both hardware and software development but misses the point context implies. Such discussions center on the concept of Cognitive Velocity. Skills revolving around critical, situationally aware, and conversational Elegant Reasonism are being addressed in purposeful curricula design.

Traditional Philosophical Assumption

True vs Truth
True perspective of truth.

The traditional philosophical assumption has been that humans are capable of directly perceiving reality, and that assumption is patently false. If it were true, rules for pilots would not distinguish between visual flight rules and instrument flight rules; there would be only one set of rules. There is not one set of rules for pilots, there are two, and those two sets exist for exactly the same reasons being illuminated here even though some of the details are different. Langer Epistemology Errors (LEEs) occur when humans mistake abstractions for actual reality. That’s what those types of errors are, but those errors are not the only reason to back up and reconsider the traditional assumptions. Human physiology is such that our biological sensors are wired to our brains via our Central Nervous Systems (CNS), where inputs signaling the brain automatically manifest abstractions in order for us all to deal and cope with the world around us. To say that another way, we humans are inside the test tube whose contents we are working to characterize. The implications and ramifications of that situation are such that commission of these errors leads directly into a logic trap of epic proportions we call: LEEs Empiricism Trap. To be clear, empiricism is necessary but insufficient to gain the precipice of unification. Another factor in all of this is made manifest by how we bring everything together for consideration, and by everything, we mean it all. We humans are intrinsic to what is, and that reality must be held self-evident. Then there is yet another aspect going on, perhaps even a bit more subtle than even those insights. Encapsulated Interpretative Models (EIMs) establish intrinsic interpretative context. What is insidious about EIM context is that it can absolutely be logically correct yet reality can remain physically different.
Real systems can have multiple logical views that are closed loop on logically correct interpretations which do not reflect the entirety of that real system. The only requirement there is that the real system instantiate the logical view of it. Logical correctness of a real system does not imply completeness, and the hard cold fact is that if it does not close to unification, then it is not complete. Period. Susanne K Langer once said something to the effect that new knowledge demands a whole new world of questions. One new question is: “are you sure that any previous traditional experiment or paper was tied to its interpretative context or to the actual real unified Universe?”. If your interpretation can not close to unification, then you have your answer, whether or not you like that answer. That absolutely does not mean those previous efforts were wrong in any sense, but it does mean that they were only logically correct in the context in which they were conducted. Strategically at issue is what happens when that context changes EIM to EIM. It is for these and other reasons that the concept of unification must be a philosophical predicate priority consideration entering science, not after you get there, because by then it is too late: one is already staring at the bark on the trees and has missed the forest. The concept of unification demands the credible manifestation of absolutely everything real, including philosophy and science. Relevant here, again (because we keep having to point this out), is a partial list of areas within the domain of discourse called philosophy; the point is that 100% of philosophy, inclusive of what is listed below and what is not, must be Mode Shifted in order to seek truth as a function of the unified Universe. It should be noted that epistemologies distinguish one another essentially by what each considers its essential source of truth.
At the outset here we would like to remind all readers, and those registered here, that our User Library holds some of the best thinking from all of history, and we acknowledge their contributions, most especially Einstein, Langer, and Okun. Holistically, what is here is only here because of all those who came before us. We just connected the dots between what they did.

  • Philosophy (study of everything)
    • Axiology – philosophy of value derivation
    • Epistemology – philosophy of knowledge
      • Constructivism holds that truth is not passively perceived within a direct process of knowledge transmission; rather, learners construct new understandings and knowledge through experience and social discourse, integrating new information with what they already know.
      • Elegant Reasonism holds the source of truth as a function of the unified Universe
      • Empiricism holds the source of truth relative to the ability to repeat, duplicate, and share observed insights and experimental data
      • Rationalism holds the source of truth as a function of rational logic
      • Religious epistemology sources its truth as a function of individual religions from around the world.
      • and this list goes on from here….
    • Ontology – philosophy of being
    • Science – philosophy of nature
    • Supervenience – philosophy of order and priority

The traditional philosophical assumptions have also historically committed Langer Epistemology Errors (LEEs). Everyone who has ever lived on Earth has fallen headlong into LEEs Empiricism Trap. The issue is not whether or not we are in that trap. The issue is how to get out into a realm where the unified Universe can be addressed objectively and dispassionately. The answer to that question is: Elegant Reasonism. There is a caveat and catch, and it is made manifest by a new term: Mode Shifting. LEEs create a philosophical situation where we can not touch reality, but we can look at it from all sorts of directions. We can inspect, all we like, how reality instantiates logically correct models of it. We can note and document those behaviors to standard, and we can share that amongst ourselves for scientific purposes. The basic approach Elegant Reasonism takes is to surround that instantiation with different EIMs and then to subject the holistic effort to rigorous analytics to better understand the unified Universe.

This preamble is not arbitrary, nor trite, exactly because it encapsulates some critical philosophical concepts needed in order to mode shift the various facets of debate and argument. You can argue yourself blue, and be absolutely 100% logically correct, but if your argument does not close into alignment with the actual real unified Universe, then what’s the point? It is imperative that we ask the proper questions, and that we do so from the precipice of the unified Universe. If you are not on that precipice, then you ask questions arising from a limited subset of the realm of c’s, and one might argue oblivious to all that is. Remember, one of the c’s in that realm is the term: Close (e.g., close to unification).

Debating Within Science

There are many, Blinded by Past Successes, who believe we must keep pressing forward with status quo thinking modeling reality. Frequent users here are probably quite tired of seeing this video, but the message Feynman delivers can not be considered enough, and from more than several points of view. Had Feynman read Susanne K Langer‘s book, integrated it into his thinking, and been aware of the growing knowledge base on Systems Engineering (whose formal organization, INCOSE, was established only two years after his passing), it is quite possible he would have come up with Elegant Reasonism many decades ago. He was ever so close to the answer. Rhetorically, consider the circumstance where theories A and B discussed here have all consequences exactly the same, and both agree with experiment, with the single exception that one does not close to unification and the other does. Which EIM derived theory are you going to work with?



Rhetorically the question then becomes what happens to the philosophical structure of argument (and debate) if the foundational context is different between participants? How do they gain common ground in order to truly understand the factors involved in the debate? What happens if the one side does not even recognize the other party’s contextual worldview is different from their own? Misunderstanding and incongruent action will ensue. Wars have begun for less.



Argument is a central concept for philosophy. Philosophers rely heavily on arguments to justify claims, and these practices have been motivating reflections on what arguments and argumentation are for millennia. Moreover, argumentative practices are also pervasive elsewhere; they permeate scientific inquiry, legal procedures, education, and political institutions. The study of argumentation is an inter-disciplinary field of inquiry, involving philosophers, language theorists, legal scholars, cognitive scientists, computer scientists, and political scientists, among many others. This entry provides an overview of the literature on argumentation drawing primarily on philosophical sources, but also engaging extensively with relevant sources from other disciplines.


Arguments come in many kinds. In some of them, the truth of the premises is supposed to guarantee the truth of the conclusion, and these are known as deductive arguments. In others, the truth of the premises should make the truth of the conclusion more likely while not ensuring complete certainty; two well-known classes of such arguments are inductive and abductive arguments (a distinction introduced by Peirce, see entry on C.S. Peirce). Unlike deduction, induction and abduction are thought to be ampliative: the conclusion goes beyond what is (logically) contained in the premises. Moreover, a type of argument that features prominently across different philosophical traditions, and yet does not fit neatly into any of the categories so far discussed, are analogical arguments. These four kinds of arguments are presented below. The section closes with a discussion of fallacious arguments, that is, arguments that seem legitimate and “good”, but in fact are not.

What happens to arguments when the foundational source of truth shifts as a function of interpretative context? The answer is that so do the conclusions you may or may not have drawn. The entire argument, along with those previous conclusions, must be mode shifted in order for them to be in context of truth as a function of the unified Universe. Patterns and relationships change as a function of the EIM making them manifest. Those changes alter the perception of what constitutes reality. For example, individuals whose worldview is fundamentally based on M1 or M2 may tell you that humans are just energy, and they may cite Einstein’s famous equation as if they knew how to connect that to what makes them up physiologically. This is where mode shifting context illuminates proper use of nouns, verbs, adjectives, and other parts of speech. It demands full clarification of terms like “fields” by answering “of what, exactly” questions. A prime example of these arguments is presented by


Valid deductive arguments are those where the truth of the premises necessitates the truth of the conclusion: the conclusion cannot but be true if the premises are true. Arguments having this property are said to be deductively valid. A valid argument whose premises are also true is said to be sound. Examples of valid deductive arguments are the familiar syllogisms, such as:

All humans are living beings. All living beings are mortal. Therefore, all humans are mortal.

In a deductively valid argument, the conclusion will be true in all situations where the premises are true, with no exceptions. A slightly more technical gloss of this idea goes as follows: in all possible worlds where the premises hold, the conclusion will also hold. This means that, if I know the premises of a deductively valid argument to be true of a given situation, then I can conclude with absolute certainty that the conclusion is also true of that situation. An important property typically associated with deductive arguments (but with exceptions, such as in relevant logic), and which differentiates them from inductive and abductive arguments, is the property of monotonicity: if premises A and B deductively imply conclusion C, then the addition of any arbitrary premise D will not invalidate the argument. In other words, if the argument “A and B; therefore C” is deductively valid, then the argument “A, B and D; therefore C” is equally deductively valid.
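The monotonicity property just described can be sketched in a few lines. The set-based encoding below is our own illustrative assumption (treating each category as a set and "all A are B" as set inclusion), not part of the entry's formal apparatus:

```python
# Toy model of the syllogism: categories as sets, "all A are B" as A <= B.
# The names and encoding are illustrative assumptions only.

humans = {"socrates", "plato"}
living_beings = humans | {"oak_tree"}
mortals = living_beings | {"mayfly"}

# Premise A: all humans are living beings; Premise B: all living beings are mortal.
premise_a = humans <= living_beings
premise_b = living_beings <= mortals
conclusion = humans <= mortals            # follows deductively

assert premise_a and premise_b and conclusion

# Monotonicity: adding an arbitrary extra (true) premise D does not
# invalidate the argument -- the conclusion still holds.
premise_d = "socrates" in humans
assert premise_a and premise_b and premise_d and conclusion
```

In this toy model no choice of additional premise can break the inclusion `humans <= mortals`, which is precisely the monotonicity the entry describes.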

What is perhaps insidious about encapsulation are the various ways in which it manifests interpretative boundaries establishing intrinsic context. You have perhaps read the above deductive reasoning many times, but the traditional assumption ignores how interpretative context mode shifts EIM to EIM. Relationships and patterns are different EIM to EIM, and because of that, so are logical conclusions within each distinct EIM. Said another way, one can not expect something that is true under one EIM to necessarily be true under a different EIM. Proof of that truth must be mode shifted and demonstrated. At a high level, that’s what we tried to do in our original systems review notes. Recognized baseline EIMs are enumerated M0, M1, …, through M7. Iterations of each also exist and carry some designation, relevant to each investigative team, that documents to standard how various aspects, facets, parameters, etc., have been tweaked. The bottom line is you can change anything you want, but you have to declare your iteration when you do, exactly because you can not presume to know how systemic relationships will flow across the entire entanglement gradient in both emergent and convergent vectors. To believe you can make those assumptions constitutes arrogance.

What is perhaps new to many are systems engineering principles from information sciences concerning logical views of real systems in context of encapsulation as it applies to interpretative context. Take the above discussion but then mode shift it across seven different EIMs, where each has an intrinsically different manifestation of interpretative context. What Elegant Reasonism brings to these discussions is juxtaposition of a plurality of EIMs, consequently subjected to intensely rigorous analytics, with a goal and objective to discern truth as a function of the real unified Universe, which therein is the final arbiter as litmus. Historically, humanity has not been situationally aware of the implications nor ramifications of LEEs Empiricism Trap, much less what arms and triggers that trap. Part of the point here is that it can not be presumed that fundamental context does not change in the above arguments. One must prove those contextual dynamics by illuminating mode shifted contextual states across the plurality of EIMs employed. The logical conclusion may be the same, but what you will come away with is why it is true, not just that it is true, and that truth will be relative to the unified Universe, not EIM derived context, though that may also be true.

Einstein on Problems & Thinking
Albert Einstein on problem solving

For example, the core constructs of EIMs M1 and M2 establish relationships and patterns such that they preclude ever accomplishing unification. One might ask why they would do that, but that question is not fair, and it is not fair because that was not the problem they were attempting to solve when those EIMs were created in the first place. Back in the day, Albert Michelson and Edward Morley were conducting an experiment employing a device called an interferometer to detect the interstellar medium, because everyone then believed there was one, and they called it the Luminiferous Aether. Hypothetically, it was this perfectly clear, perfectly viscous material in which everything real was immersed. Rather famously, what they wound up proving was that the medium did not exist. Everyone was shocked. Perhaps most of all a young patent clerk working in Switzerland at the time named Albert Einstein. The problem Einstein sought to reconcile was why the interferometers always reported the same values for the speed of light no matter the circumstance. That situation was the genesis of Special and General Relativity. Unification was not part of that effort in any way, shape, or form. In subsequent years he devoted considerable thought toward pigeonholing unification into relativity when what he should have been doing was the other way around, but he was firmly ensnared inside LEEs Empiricism Trap and was so convinced he wasn’t even aware he needed to be looking for any exit. Nor was anyone during Einstein‘s life aware of advanced information sciences, a few notable potential candidates being Alan Turing or John von Neumann. The professional International Council On Systems Engineering (INCOSE) was not formed until 1990, some 85 years after Einstein published his papers.
Interferometer results, combined with Einstein‘s views on relativity, which were subsequently experimentally validated, created a particular point of view which constituted enough success to blind pretty much everyone to the path which needed to be followed.


Inductive arguments are arguments where observations about past instances and regularities lead to conclusions about future instances and general principles. Part of the analytical design of Elegant Reasonism integrates Bayesian Analytics for exactly these reasons. A traditional example: the observation that the sun has risen in the east every single day until now leads to the conclusion that it will rise in the east tomorrow, and to the general principle “the sun always rises in the east”. Generally speaking, inductive arguments are based on statistical frequencies, which then lead to generalizations beyond the sample of cases initially under consideration: from the observed to the unobserved. In a good, i.e., cogent, inductive argument, the truth of the premises provides some degree of support for the truth of the conclusion. In contrast with a deductively valid argument, in an inductive argument the degree of support will never be maximal, as there is always the possibility of the conclusion being false given the truth of the premises. A gloss in terms of possible worlds might be that, while in a deductively valid argument the conclusion will hold in all possible worlds where the premises hold, in a good inductive argument the conclusion will hold in a significant proportion of the possible worlds where the premises hold. The proportion of such worlds may give a measure of the strength of support of the premises for the conclusion (see entry on inductive logic). The point might be made that the average life span of a human is different than the life span of the star around which our planet orbits, but that star too has a lifespan, making the aforementioned metaphor a problematic function of scale. A fully compliant investigation, done to ISO 9001 QMS standards, would have to mode shift all of that.
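As one hedged sketch of how Bayesian analytics can quantify inductive support, consider Laplace's classic rule of succession applied to the sunrise example above. The snippet below is an illustration under a uniform prior, not part of any cited standard or of Elegant Reasonism's own analytics:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: posterior probability of one more
    success after observing `successes` in `trials` attempts, assuming
    a uniform prior over the unknown underlying frequency."""
    return Fraction(successes + 1, trials + 2)

# After 10,000 consecutive observed sunrises, inductive support is strong
# but never maximal -- the hallmark of induction versus deduction.
p = rule_of_succession(10_000, 10_000)
assert p < 1                   # certainty is never reached
assert float(p) > 0.9999       # yet support is very strong
```

The `p < 1` assertion is the whole point: no finite run of observations drives inductive support to the certainty a deductively valid argument provides.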

Inductive arguments have been recognized and used in science and elsewhere for millennia. The concept of induction (epagoge in Greek) was understood by Aristotle as a progression from particulars to a universal, and figured prominently both in his conception of the scientific method and in dialectical practices (see Stanford’s entry on Aristotle’s logic, section 3.1). However, a deductivist conception of the scientific method remained overall more influential in Aristotelian traditions, inspired by the theory of scientific demonstration of the Posterior Analytics. It is only with the so-called “scientific revolution” of the early modern period that experiments and observation of individual cases became one of the pillars of scientific methodology, a transition that is strongly associated with the figure of Francis Bacon (1561–1626; see entry on Francis Bacon).

Part of the reason the utility processes of Elegant Reasonism require a historical review is exactly for the integration of these types of historical thinking. The Bayesian Analytics will then capture the before, during, and after components of subsequently applied logic, including those encapsulated by various EIMs, in order to document how changes in context affect conclusions in a Treatise that aligns with the unified Universe.


An abductive argument is one where, from the observation of a few relevant facts, a conclusion is drawn as to what could possibly explain the occurrence of these facts (see entry on abduction). Abduction is widely thought to be ubiquitous both in science and in everyday life, as well as in other specific domains of discourse such as the law, medical diagnosis, and explainable artificial intelligence (Josephson & Josephson 1994). Indeed, a good example of abduction is the closing argument by a prosecutor in a court of law who, after summarizing the available evidence, concludes that the most plausible explanation for it is that the defendant must have committed the crime they are accused of. The presumption being that the presented arguments converge on the defendant’s guilt to the satisfaction of the jury and court.

Like induction, and unlike deduction, abduction is not necessarily truth-preserving: in the example above, it is still possible that the defendant is not guilty after all, and that some other, unexpected phenomena caused the evidence to emerge. But abduction is significantly different from induction in that it does not only concern the generalization of prior observation (i.e., the Bayesian ‘before’) for prediction (though it may also involve statistical data): rather, abduction is often backward-looking (e.g., hindsight) in that it seeks to explain something that has already happened (e.g., to rationalize). The key notion is that of bringing together apparently independent phenomena or events as explanatorily and/or causally connected to each other, something that is absent from a purely inductive argument that only appeals to observed frequencies. Cognitively, abduction taps into the well-known human tendency to seek (causal) explanations for phenomena (Keil 2006).
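The prosecutor example above can be loosely sketched as Bayesian comparison of competing explanations. The hypotheses, priors, and likelihoods below are invented purely for illustration and come from no cited source:

```python
# "Inference to the best explanation" sketched as Bayesian model comparison.
# All numbers here are invented for illustration only.

def posterior(prior, likelihood, hypotheses):
    """Normalized posterior over competing explanations of the evidence."""
    unnorm = {h: prior[h] * likelihood[h] for h in hypotheses}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

hypotheses = ["guilty", "not_guilty"]
prior = {"guilty": 0.5, "not_guilty": 0.5}        # assumed equal priors
likelihood = {"guilty": 0.9, "not_guilty": 0.1}   # P(evidence | hypothesis)

post = posterior(prior, likelihood, hypotheses)
best = max(post, key=post.get)                    # the "best explanation"

assert best == "guilty"
assert abs(sum(post.values()) - 1.0) < 1e-9       # posteriors sum to one
```

Note that the sketch exhibits exactly the philosophical worry discussed below: picking the hypothesis with the highest posterior never guarantees its truth, only its relative explanatory standing under the assumed priors and likelihoods.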

We’ll pause briefly from Stanford’s language on abduction because we ran into the term “causally” (e.g., causality). The concept of causality implies a continuum which, under M1 or M2, is generally construed to be a function of time; under those EIMs time can be viewed as manifesting that continuum. The cogent description of M5, however, paints a different tapestry, one that eliminates that continuum in favor of asynchronous action driven Event Frames. See action principles within concept sieve: EMCS01. The abstraction civilization has labeled ‘time’ is defined differently under different EIMs, and consequently those definitions must be recognized. Arguments abductively dependent on time must illuminate to illustration that the definitional distinctions do not change the conclusions being asserted.

As noted, deduction and induction have been recognized as important classes of arguments for millennia; the concept of abduction is by comparison a latecomer. It is important to notice though that explanatory arguments as such are not latecomers; indeed, Aristotle’s very conception of scientific demonstration is based on the concept of explaining causes (see entry on Aristotle). What is recent is the conceptualization of abduction as a special class of arguments, and the term itself. The term was introduced by Peirce as a third class of inferences distinct from deduction and induction: for Peirce, abduction is understood as the process of forming explanatory hypotheses, thus leading to new ideas and concepts (whereas for him deduction and induction could not lead to new ideas or theories; see the entry on Peirce). Thus seen, abduction pertains to contexts of discovery, in which case it is not clear that it corresponds to instances of arguments, properly speaking. In its modern meaning, however, abduction pertains to contexts of justification, and thus to speak of abductive arguments becomes appropriate. An abductive argument is now typically understood as an inference to the best explanation (Lipton 1971 [2003]), although some authors contend that there are good reasons to distinguish the two concepts (Campos 2011).

While the main ideas behind abduction may seem simple enough, cashing out more precisely how exactly abduction works is a complex matter (see Stanford’s entry on abduction). Moreover, it is not clear that abductive arguments are always or even generally reliable and cogent. Humans seem to have a tendency to overshoot in their quest for causal explanations, and often look for simplicity where there is none to be found (Lombrozo 2007; but see Sober 2015 on the significance of parsimony in scientific reasoning). There are also a number of philosophical worries pertaining to the justification of abduction, especially in scientific contexts; one influential critique of abduction/inference to the best explanation is the one articulated by van Fraassen (van Fraassen 1989). A frequent concern pertains to the connection between explanatory superiority and truth: are we entitled to conclude that the conclusion of an abductive argument is true solely on the basis of it being a good (or even the best) explanation for the phenomena in question? The presumption, of course, is that the interpretative context made manifest by details of the argument can only be interpreted in one manner, and that no Langer Epistemology Errors have been made anywhere; something that circa 2023 is not very likely. It seems that no amount of philosophical a priori theorizing will provide justification for the leap from explanatory superiority to truth. Instead, defenders of abduction tend to offer empirical arguments showing that abduction tends to be a reliable rule of inference. In this sense, abduction and induction are comparable: they are widely used, grounded in very basic human cognitive tendencies, but they give rise to a number of difficult philosophical problems.
Where this all gets sticky is when it can be shown that the arguments presented exist exclusively within the domain of LEEs Empiricism Trap, while another argument exists which closes to unification, consistent with the utility process, enabled through the technological framework, and epistemologically supporting truth as a function of the unified Universe as delivered by Elegant Reasonism.


Arguments by analogy are based on the idea that, if two things are similar, what is true of one of them is likely to be true of the other as well (see entry on analogy and analogical reasoning). Analogical arguments are widely used across different domains of human activity, for example in legal contexts (see entry on precedent and analogy in legal reasoning). As an example, take an argument for the wrongness of farming non-human animals for food consumption: if an alien species farmed humans for food, that would be wrong; so, by analogy, it is wrong for us humans to farm non-human animals for food. The general idea is captured in the following schema (adapted from the entry on analogy and analogical reasoning; S is the source domain and T the target domain of the analogy):

  1. S is similar to T in certain (known) respects.
  2. S has some further feature Q.
  3. Therefore, T also has the feature Q, or some feature Q* similar to Q.

The first premise establishes the analogy between two situations, objects, phenomena etc. The second premise states that the source domain has a given property. The conclusion is then that the target domain also has this property, or a suitable counterpart thereof. While informative, this schema does not differentiate between good and bad analogical arguments, and so does not offer much by way of explaining what grounds (good) analogical arguments. Indeed, contentious cases usually pertain to premise 1, and in particular to whether S and T are sufficiently similar in a way that is relevant for having or not having feature Q.

Analogical arguments are widely present in all known philosophical traditions, including three major ancient traditions: Greek, Chinese, and Indian (see Historical Supplement). Analogies abound in ancient Greek philosophical texts, for example in Plato’s dialogues. In the Gorgias, for instance, the knack of rhetoric is compared to pastry-baking—seductive but ultimately unhealthy—whereas philosophy would correspond to medicine—potentially painful and unpleasant but good for the soul/body (Irani 2017). Aristotle discussed analogy extensively in the Prior Analytics and in the Topics (see section 3.2 of the entry on analogy and analogical reasoning). In ancient Chinese philosophy, analogy occupies a very prominent position; indeed, it is perhaps the main form of argumentation for Chinese thinkers. Mohist thinkers were particularly interested in analogical arguments (see entries on logic and language in early Chinese philosophy, Mohism and the Mohist canons). In the Latin medieval tradition too analogy received sustained attention, in particular in the domains of logic, theology and metaphysics (see entry on medieval theories of analogy).

Einstein - Hubble meeting
Einstein looking through Hubble’s instrument
Expanded Stages of Grief

An analogy we often use, relative to the interferometer experiments previously discussed and the conclusions Einstein drew from them, compares those devices to a firearm and then asks whether the questions being asked of the data need updating. This is a simplex analogy, but it proceeds as follows. Interferometers all report the same velocity for the speed of light. A firearm taken to a range with 1,000 rounds of the same ammunition and fired through a chronograph will report essentially the same bullet velocity every time. We do not impose an externally derived dimensional nature on those bullets, so why do we do that for photons emitted from electrons? Edwin P. Hubble presented Albert Einstein with credible data showing that light from cosmological sources is in fact shifted toward either the red (infrared) or the blue (ultraviolet) end of the visible spectrum, depending on whether the source is moving away from or toward Earth. The Andromeda galaxy, for example, is blue shifted, while many distant galaxies are red shifted. Einstein’s response was simple contradiction, backed by interferometer empirical data. Subsequent rationalization by Alan Guth, et al., produced Inflationary Theory, which in effect held that spacetime itself must be expanding at those rates and that Einstein never said that construct could not exceed the speed of light. Never once in all of those various arguments were Langer Epistemology Errors (LEEs) considered, exactly because every single individual was deeply immersed within LEEs’ Empiricism Trap. Mode shifting all of this reconciles these various incongruities: interferometer data and the firearm example data are both products of the system producing the construct being measured, and the astronomical and cosmological distinctions Hubble measured are vindicated.
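For readers who want the standard bookkeeping behind Hubble's shift data (textbook material, not part of the source text), spectral shift is conventionally written in terms of observed and emitted wavelength:

```latex
z \;=\; \frac{\lambda_{\text{obs}} - \lambda_{\text{emit}}}{\lambda_{\text{emit}}},
\qquad z > 0 \ \text{(redshift: source receding)},
\qquad z < 0 \ \text{(blueshift: source approaching, e.g.\ Andromeda)}.
```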
The cogent description of M5 ultimately resulted in the original systems review developing reference frames (e.g. Local Frames and Event Frames) to deal with the distinctions between locally measured constructs and astronomical constructs. The constant ‘c’ so often used across science mode shifts to The Emergence Model term Severance. The velocity does not change, but the reason it is true does. Another aspect that changes is how science considers mass.

Einstein Letter
Einstein stating his belief that mass is invariant.

The Emergence Model then mode shifts the concept of Rapidity (e.g. Concept Sieve EMCS01 Concept 0168) to be read as ‘velocity over Severance’, and it should be noted here that Rapidity is unbounded. Skipping considerable detail, this argument ultimately produced the unified Universe Bang to Bang. Part of the point here is that much of the ensuing argument during the original systems review was based on that analogy between interferometers and firearms: the constancy they made manifest was a function of the systems producing the product, not of the medium through which the product passed. Rhetorically, if nothing can go faster than the speed of light, hence those limits under EIMs M1, M2, and M3, then why is anyone, anywhere, under any circumstances allowed to square that value? Why is that act not a violation of its own rule? Pointing this out is not saying the practice is wrong; rather, it makes a case about the logical nature of the association and its usage. Under Elegant Reasonism those EIMs are logical incarnations (e.g. logical relationships), and such operations are therefore perfectly fine to perform. Under M5 and M6 those limitation rules do not exist, and the term’s definition has changed to better conform to the realm of c’s (no pun intended).

When one finally understands that ‘c’ mode shifts to mean Severance, its usage suddenly makes a great deal more sense, as does why that value appears in so many other places across so many equations. Rapidity, being velocity over Severance, even appears in how Einstein defines mass: the equation pictured here is essentially m0 over the square root of 1 minus Rapidity squared. Under The Emergence Model, everything real is some complex composite configuration of MBPs whose intrinsic nature derives The Fundamental Entanglement Function, limited by Severance, forming dynamic architectural mass generally construed to follow Knot Theory due to the manner in which those configurations manifest.
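Written out, the mass relation described in words above (reconstructed here from the text's own description; the "Rapidity" reading is the author's) is the familiar:

```latex
m \;=\; \frac{m_0}{\sqrt{1 - \left(\dfrac{v}{c}\right)^{2}}}
\qquad\text{with}\qquad
\frac{v}{c} \;\equiv\; \text{Rapidity} \;=\; \frac{\text{velocity}}{\text{Severance}}.
```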

Analogical arguments continue to occupy a central position in philosophical discussions, and a number of the most prominent philosophical arguments of the last decades are analogical arguments, e.g., Jarvis Thomson’s violinist argument purportedly showing the permissibility of abortion (Thomson 1971), and Searle’s Chinese Room argument purportedly showing that computers cannot display real understanding (see entry on the Chinese Room argument). (Notice that these two arguments are often described as thought experiments [see entry on thought experiments], but thought experiments are often based on analogical principles when seeking to make a point that transcends the thought experiment as such.) The Achilles’ heel of analogical arguments can be illustrated by these two examples: both arguments have been criticized on the grounds that the purported similarity between the source and the target domains is not sufficient to extrapolate the property of the source domain (the permissibility of disconnecting from the violinist; the absence of understanding in the Chinese room) to the target domain (abortion; digital computers and artificial intelligence). It is relevant but beyond the scope of this page to discuss how Knowledge Management (KM) immersed in the context of Elegant Reasonism treats data, information, knowledge, people and artificial intelligence.

In sum, while analogical arguments in general perhaps confer a lesser degree of conviction than the other three kinds of arguments discussed, they are widely used both in professional circles and in everyday life. They have rightly attracted a fair amount of attention from scholars in different disciplines, and remain an important object of study (see entry on analogy and analogical reasoning).


One of the most extensively studied types of argument throughout the centuries is, perhaps surprisingly, the argument that appears legitimate but is not: the fallacious argument. From early on, the investigation of such arguments occupied a prominent position in Aristotelian logical traditions, inspired in particular by Aristotle’s book Sophistical Refutations (see Historical Supplement). The thought is that, to argue well, it is not sufficient to be able to produce and recognize good arguments; it is equally (or perhaps even more) important to be able to recognize bad arguments by others, and to avoid producing bad arguments oneself. This is particularly true of the tricky cases, namely arguments that appear legitimate but are not, i.e., fallacies. The wall we immediately run into here is interpretative context criteria and metrics. We would argue that if one can place an assertion into a pool accessible to all of civilization, and it survives that exposure, then perhaps the degree to which it might be considered fallacious can be assessed. To that end we mode shifted what many scientists call The Baloney Detection Kit.

Some well-known types of fallacies include (see the entry on fallacies for a more extensive discussion):

  • The fallacy of equivocation, which occurs when an arguer exploits the ambiguity of a term or phrase which has occurred at least twice in an argument to draw an unwarranted conclusion.
  • The fallacy of begging the question, when one of the premises and the conclusion of an argument are the same proposition, but differently formulated.
  • The fallacy of appeal to authority, when a claim is supported by reference to an authority instead of offering reasons to support it.
  • The ad hominem fallacy, which involves bringing negative aspects of an arguer, or their situation, to argue against the view they are advancing.
  • The fallacy of faulty analogy, when an analogy is used as an argument but there is not sufficient relevant similarity between the source domain and the target domain (as discussed above).

Beyond their (presumed?) usefulness in teaching argumentative skills, the literature on fallacies raises a number of important philosophical discussions, such as: What determines when an argument is fallacious or rather a legitimate argument? (See section 4.3 below on Bayesian accounts of fallacies) What causes certain arguments to be fallacious? Is the focus on fallacies a useful approach to arguments at all? (Massey 1981) Despite the occasional criticism, the concept of fallacies remains central in the study of arguments and argumentation.

Epistemological Source of Truth

Obfuscation of truth occurs often in argument, and a common tactic is to confuse the epistemological source of truth relative to the subjects being discussed. It is advantageous not to forget whence the source of truth came (see the epistemology list earlier on this page). We consider Elegant Reasonism a superset epistemology because it integrates all other epistemologies, but, and it is a rather large consideration, they are statistically weighted relative to and respective of their ability to credibly manifest everything real (e.g. reflect the unified Universe), and it accomplishes that simply, to the point of elegance.

Argumentation and Bias

Stanford points out that just as there are different types of arguments, there are different types of argumentative situations, depending on the communicative goals of the persons involved and background conditions. Argumentation may occur when people are trying to reach consensus in a situation of dissent, but it may also occur when scientists discuss their findings with each other (to name but two examples). Specific rules of argumentative engagement may vary depending on these different types of argumentation. A related point extensively discussed in the recent literature pertains to the function(s) of argumentation. What’s the point of arguing? While it is often recognized that argumentation may have multiple functions, different authors tend to emphasize specific functions for argumentation at the expense of others. This section offers an overview of discussions on types of argumentation and its functions, demonstrating that argumentation is a multifaceted phenomenon that has different applications in different circumstances.

It will be interesting to watch how the science of arguments evolves as critical situationally aware thinking (CSAT) gains cognition across civilization.


Do not assume that everyone looking at exactly the same data will draw the same conclusions. Thomas Jefferson, who drafted the Declaration of Independence, pondered the question of what constituted an appropriate ‘peer’ as the concept of a “jury of your peers” was being developed. He noted that if you randomly pick an individual and assign them an IQ of 1, then a group of similar individuals will average toward that quotient. He further noted that injecting what he called a ‘loud mouthed dullard’ into that group would lower the overall quotient. Consequently there is much debate, even today, on what the framers meant by the term ‘peer’. Rhetorically we point out that the metaphorical phrase “birds of a feather flock together” gathers whole new implications and ramifications if the intrinsic context drawing that flock together changes. If the attractive force is epistemologically based, then the source of truth becomes pertinent.

In modern society, what would you estimate the chances are of a highly technologically advanced aircraft running out of fuel halfway across the Atlantic Ocean? Yet in the dead of night, halfway over the Atlantic, an Airbus A330 carrying 306 passengers and crew suddenly ran out of fuel. The lights went out, the oxygen masks dropped, and the noise of the engines was replaced with an eerie silence. In pitch darkness, the giant aircraft began drifting down toward the ocean below. The stunned pilots began trying to glide the aircraft as far as possible, while the flight attendants prepared the passengers for an emergency ditching in the ocean, something likely to be a death sentence for all on board. This was a living nightmare, but incredibly, it never had to happen in the first place. Many accounts of this incident focus on its technical aspects, but that is only half of the story. At bottom, this is a fascinating story about human psychology: about how people make decisions under pressure when faced with ambiguous information. This is not just the story of Air Transat flight 236, but of the biases we all bring to interpreting the data we experience as humans.


Cognitive Bias

Framing Bias

Executive Summary

The nature of arguments is about to undergo a tectonic shift in terms of the factors required in the dynamic execution of debate. In many ways Elegant Reasonism represents something of a marshmallow that you can’t quite get your arms around, because the source of truth it represents literally encompasses everything real. Not just what you think is real, but everything that is real; and it does not stop there, because it credibly integrates the virtual as well (e.g. what you thought was real). Logically correct views are easily dealt with, as are epistemological differences resulting from different sources of truth. Even what constitutes truth is defined as a function of derivation by the unified Universe. Arguing on a simple epistemological basis just became necessary but insufficient to win the day. It will be interesting to watch how all this unfolds in the coming months and years across the landscape of civilization. That the arguments facilitated by our original systems review span all these various types and situations is interesting, but what is compelling is that their results all dovetail together, ultimately accomplishing unification.

We look forward to your mode shifted insights!!  Sic’em.




#ElegantReasonism #EmergenceModel #Unification #Debate #Mathematics #Proof


By Charles McGowen

Charles C McGowen is a strategic business consultant. He studied Aerospace Engineering at Auburn University '76-'78. IBM hired him early in '79 where he worked until 2003. He is now Chairman & CEO of SolREI, Inc. ORCID: