Are All Experiments Actually Quasi-Experiments?
Valid experiments depend on randomisation and other fundamentals. Are these even theoretically possible, never mind practically achievable?
Randomised Controlled Trials (RCTs) are the gold standard for scientific experimentation. The theory is that a truly random process (one not influenced by any external features of the environment) enables us to produce a representative sample of the population upon which to experiment. If the sample is truly representative, with no bias introduced by the sampling process, we can proceed with the rest of the experimental design and aim for objective results regarding efficacy and causality.
RCTs are fundamentally based upon the ability to generate a truly random sampling process for your target population. This is baked into the definition of an experiment: if samples are not randomly allocated, then a study is, by definition, a quasi-experiment. So when investigating whether experiments are possible, one of the first questions to ask is “is random sampling possible?” Not just whether it is practical and feasible for scientists running experiments, but whether it is even theoretically possible within the universe. The existence of objective randomness has been debated for millennia, underpinning belief in free will or deterministic fate.
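To make this concrete, consider how the randomisation step is typically carried out in practice. The sketch below is a minimal, hypothetical illustration (the function and participant names are mine, using Python’s standard library): it allocates participants to treatment and control arms with a pseudorandom number generator, which is itself a deterministic algorithm. The practical “randomness” of an RCT already rests on something that is not objectively random.

```python
import random

def allocate(participants, seed=42):
    """Assign participants to treatment and control arms.

    Uses Python's Mersenne Twister PRNG, a fully deterministic
    algorithm: given the same seed, the same "random" allocation
    is produced every time.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {"treatment": shuffled[:midpoint], "control": shuffled[midpoint:]}

cohort = [f"participant_{i}" for i in range(10)]
print(allocate(cohort))  # one "random" allocation
print(allocate(cohort))  # identical allocation: the process is deterministic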
Theoretical physics, as the study of our universe at the most fundamental level, has been particularly challenged by this philosophical debate. Many physicists have tried, and failed, to prove either randomness or determinism via logic and experiment.
Classical physics suggests the Computational Universe Theory, under which no process is truly random. Classical physics models the universe as governed by fundamental laws of nature, and the progression of the universe through time as the unfolding application of those laws. These laws are, in principle, discoverable. In this model of the universe, if we knew all the laws of nature, we could systematically determine any outcome. Nothing would be theoretically unknowable, and therefore nothing would be objectively random: randomness is a subjective illusion. Whenever something appears random, it means we simply do not yet have the theoretical or experimental framework to account for the systematic laws generating the observed behaviour.
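A toy illustration of this classical picture (an analogy only, not a claim about physics): a simple pseudorandom number generator has one fixed “law” (its update rule) and one set of “initial conditions” (its seed). To an observer who does not know the rule or the seed, its output looks random; to one who does, every value is fully determined.

```python
# A toy "universe": a linear congruential generator. Its "law of nature"
# is one fixed update rule; its "initial conditions" are the seed.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Yield n values that look random but are completely determined."""
    state = seed
    for _ in range(n):
        state = (a * state + c) % m
        yield state / m  # normalise to [0, 1)

print(list(lcg(seed=7, n=5)))
print(list(lcg(seed=7, n=5)))  # identical: nothing here is objectively random
```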
This provides the basis for the story of scientific progress. Where once the whole world appeared random and ungovernable, or under the governance of supernatural gods, the scientific revolution brought us faith that we could master the universe via the scientific method. As our mastery has increased through successive scientific revolutions, we have gained ever greater technological power, making what was previously observed as random explainable, and therefore predictable and controllable.
The field of quantum mechanics has split the physics community over belief in a stochastic or deterministic universe. In quantum physics, phenomena appear random and probabilistic, but also throw up numerous paradoxes. On top of these paradoxes, we have yet to reconcile quantum mechanics with general relativity, highlighting the incompleteness of our understanding. Physicists have proposed many interpretations of quantum mechanics, speculating on its fundamental basis:
- The Copenhagen interpretation of quantum mechanics states that the universe is intrinsically non-deterministic: truly random and probabilistic. This interpretation also holds that quantum descriptions are truly objective, completely “independent of physicists’ mental arbitrariness”. The Copenhagen interpretation has strong challengers, who point to the subjective need for an observer, and to the reliance on a measuring device premised upon “non-objective”, or at least unaccounted-for, classical mechanics.
- The relational interpretation of quantum mechanics claims that quantum descriptions are not truly objective but observer-dependent. This builds on the idea underlying special relativity: that an observation depends on the reference frame of the observer.
- Hidden-variable theories hold that quantum mechanics is a useful but incomplete model of the universe. Some, like Bohm’s theory, posit that a hidden deterministic principle drives the apparently probabilistic phenomena we observe. Others, like stochastic quantum mechanics, posit that the hidden driver behind quantum behaviour is itself random; in stochastic quantum mechanics, this driver is stochastic fluctuations in spacetime.
- The many-worlds interpretation of quantum mechanics states that all possible outcomes do deterministically occur, just not all within the branch of the universe we inhabit. Some claim this theory is irrelevant because it is untestable: we cannot observe what occurs in other universes. Others claim it could be tested against the Copenhagen interpretation by placing macroscopic objects in a coherent superposition and interfering with them, an experiment currently beyond our capabilities.
- The transactional interpretation of quantum mechanics builds on the Wheeler–Feynman absorber theory, in which electromagnetic waves travel both forwards and backwards in time. This interpretation breaks away from the stochasticism-versus-determinism debate altogether by instead suggesting retrocausality.
It is clear that there is no consensus, nor any logical or empirical proof, as to whether true randomness is possible in the universe. Worse still, the relational interpretation challenges objectivity, and the transactional interpretation challenges causality. More fundamentals of experimental design are under challenge than random allocation alone.
Given the current state of our knowledge of the universe, we must simply take a leap of faith in one direction or another: choose randomness versus determinism, and even causality versus retrocausality. Progress in philosophy, and especially in epistemology, suggests we can never prove either determinism or randomness; any attempt to do so falls under the frame of induction, which has been shown to lack a strong logical foundation.
Our approach to determining the objective status of an RCT’s result falls into the Münchhausen trilemma, as any attempt at objective proof does: it requires us to rest on a dogmatic argument, an infinitely regressive argument, or a circular argument regarding the ability to truly randomly assign samples, among other things. Axiomatic statements cannot themselves be proven; they must simply be accepted.
If we cannot even prove the objectivity of our most isolated fundamentals, we need a large helping of humility when claiming objectivity for our experiments. Our macroscopic experiments involve many more interacting dynamics, and all experiments are conducted within a culturally determined paradigm.
RCTs and the scientific method may not have a proven objective basis, but they are undeniably useful. All models are wrong; some are useful. Humanity is fundamentally limited to working with wrong but increasingly useful models of the universe. Some of these models concern our scientific methodology itself: a model for how to discover new models. We are unable to arrive at the truly objective. We cannot infinitely de-risk our actions. We must take a leap of faith to some degree. Our improving scientific methodologies continually lower the level of faith we consider acceptable to act upon. We choose a preferred frame, but ensure we shift it towards objectivity bit by bit.
Our current status quo is to dogmatically state that true randomness is achievable under practised experimental design. If we wish to improve our scientific methodology, we must recognise that this dogmatic claim has no fundamentally proven basis. We must recognise the quasi-experimental nature of our current scientific methodologies. If we wish to remain open to future scientific revolutions, we must be willing to step outside our current paradigm. I believe we should embrace our fallibility and the joy of venturing into the infinite unknown of scientific methodological progression.