Understanding the Planck constant
How the quantum of action defines physical existence
For centuries, our understanding of the universe has been based on the idea of entities and processes evolving in space and time. However, this framework is beginning to show its limitations, forcing us to reconsider some assumptions we’ve long taken for granted.
But can this be done without having to rely on the wild speculations of modern physics? Yes, there’s an alternative.
It seems that everything we physically experience could arise from a fundamental duality in nature: Every energy or information exchange we perceive as propagating through spacetime at the speed of light would be, at the same time, an instant interaction relating two spacetime events through the light speed ratio. These instant connections would stitch reality from past to present, linking the actual configuration of the universe to all its past states throughout history, forming a consistent picture of reality across all times and places.
The speed of light would be instantaneity in disguise, underlying quantum entanglement, which, in turn, would be the fundamental process that supports the structure and behavior of all physical phenomena.
But while this idea holds intriguing promise, some questions still remain unanswered. For instance, the significance of a persistent quantity that sustains every physical manifestation we perceive:
The Planck constant
h = 6.62607015e−34 (J/Hz) Planck constant
At the turn of the last century, physicists were still trying to understand how light interacts with matter. Something was wrong with the equations used at the time, since they led to the ultraviolet catastrophe, predicting much more energy than was observed at short wavelengths in black-body radiation experiments.
To solve the problem, Max Planck proposed the idea of discrete energy levels, and shortly thereafter Einstein extended this concept to provide a compelling explanation for how materials absorb and release energy, sparking the quantum revolution.
Since then, the Planck constant (also known as the quantum of action) has become so ubiquitous that it’s now considered a universal constant, with units of joules per hertz (J/Hz), meaning energy per cycle of oscillation.
The existence of the Planck constant once again shook the foundations of classical physics, as it clearly pointed out that energy is not continuous but quantized. Every energy or information exchange must take place in discrete steps of h, the smallest possible transaction for an observer.
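As a quick numerical illustration of this energy-per-cycle bookkeeping, here is a minimal Python sketch; the green-light frequency of about 5.6e14 Hz is just an arbitrary example value, not something singled out by the argument:

h = 6.62607015e-34   # Planck constant (J/Hz), exact by the SI definition
f = 5.6e14           # frequency of a green photon (Hz), illustrative value
E = h * f            # energy carried by a single quantum at that frequency
print(E)             # ~3.7e-19 J: the smallest possible exchange at 5.6e14 Hz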
But why the Planck constant? Why does the same value represent the minimum portion of existence or change to all observers? Is it an inherent quality of every interaction that unfolds, as is the case with the speed of light, or does it depend on the subjective parameters that define the observer’s reference frame, like their notions of rest or time flow? Let’s try to find out.
Planck units and the Planck scale
While working on the black-body problem, Planck devised an ingenious system of units that combined the known universal constants in different ways, aiming to uncover the most fundamental representation for each aspect of reality.
Planck units stand out as the most basic building blocks that make physical sense, so they’re often considered the “smallest units” possible, but this isn’t quite right. Rather, they’re expressions that neatly encode some knowledge about reality, telling us in which proportions the universal constants define the physical framework perceived by the observers.
For instance, a clear understanding of the concept of mass can be achieved by expressing it in terms of two other, more fundamental Planck units (Planck density and Planck volume):
mp = dp vp Planck mass is Planck density in Planck volume
mp = dp lp³ (Kg/m³ m³) = (Kg)
Planck mass corresponds to the only value of energy density that allows an observer to perceive matter or energy within a region the size of a Planck volume in its reference frame.
So Planck mass defines the highest concentration of energy an observer can conceive, but always tied to the smallest notion of volume it can grasp. It simultaneously defines the upper limit for density and the lower limit for volume, setting the notion of maximum energy confinement for the observer. This explains why most particles are well below Planck mass, since both their density and volume are far from these limits. Particles are just patterns of interactions that emerge in regions meeting the right size and density conditions; from the interactions we are able to detect, we deduce that those particles are there.
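As a minimal numerical sketch of this relation, we can take the CODATA values of the constants and the standard expressions for Planck length and mass (given a little further down) to recover Planck density, and check that a Planck volume filled at that density contains exactly one Planck mass:

import math

hbar = 1.054571817e-34            # Dirac constant (J·s)
c    = 299792458                  # speed of light (m/s)
G    = 6.67430e-11                # gravitational constant (N·m²/kg²)

lp = math.sqrt(hbar * G / c**3)   # Planck length, ~1.616e-35 m
mp = math.sqrt(hbar * c / G)      # Planck mass, ~2.176e-8 kg
dp = mp / lp**3                   # Planck density, ~5.2e96 kg/m³

print(dp * lp**3, mp)             # dp·vp gives back mp (by construction here)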
Then, more than an intrinsic quality of matter, the concept of mass describes how the observer and the observed relate through spacetime. This fits nicely with our new understanding of reality as the patterns that the fundamental interactions create, since it quantifies how energy density (the conditions set by interactions through spacetime) constrains how other interactions should unfold.
In our new interpretation of reality, there are no intrinsic properties, and units and measurements don’t describe isolated elements, but the ratios and constraints interactions fulfill across spacetime, shaping all physical phenomena. This implies that photons don’t have intrinsic frequencies and particles don’t have intrinsic masses, independent of the paths that information takes to reach the observer. Reference units and the phenomena they measure aren’t set one after the other but at once, and all the internal and external interactions that compose and reach an entity contribute to shape its sense of reality.
Planck units and the universal constants
Planck units, whether absolute or relative, still provide the cleanest way to analyze the universe, since they define the minimum set of relations interactions must fulfill to shape all other concepts observers use to explain reality. Yet, the abstract meaning of the universal constants prevents a deeper understanding of the universe.
So, to explore the meaning of these constants from the perspective of a material observer, we’ll turn Planck units on their head, and express the universal constants in terms of the Planck units, trying to understand why our intuitive notions of length, time and mass (the three aspects of reality we consider truly fundamental) must always relate in the specific ways that yield the universal constants.
To do this, we’ll leave aside other Planck units that mostly describe statistical, emergent or derived quantities, and we’ll use the concept of mass only as a means to describe how energy density constrains the interactions that take place in a given region. We’ll also consider charge (any kind of charge) or spin to be emergent properties of how interactions shape the internal structure of the different particles.
So let’s get started.
In the Planck system of units, length, time and mass are defined by three universal constants: ħ (the reduced Planck constant or Dirac constant, which is Planck constant over 2π to use radians per second instead of Hertz), c (the speed of light) and G (the gravitational constant):
Planck units for length, time and mass
lp = √(ℏG/c³) (m) Planck length equation
tp = √(ℏG/c⁵) (s) Planck time equation
mp = √(ℏc/G) (Kg) Planck mass equation
ℏ = h/2π Because 1 Hz = 2π rad/s
ℏ = 1.054571817e−34 (J·s) Dirac constant (radians are dimensionless)
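To make these scales concrete, here is a minimal Python sketch evaluating the three expressions with the CODATA values of ħ, c and G:

import math

hbar = 1.054571817e-34            # Dirac constant (J·s)
c    = 299792458                  # speed of light (m/s)
G    = 6.67430e-11                # gravitational constant (N·m²/kg²)

lp = math.sqrt(hbar * G / c**3)   # Planck length: ~1.616e-35 m
tp = math.sqrt(hbar * G / c**5)   # Planck time:   ~5.391e-44 s
mp = math.sqrt(hbar * c / G)      # Planck mass:   ~2.176e-8 kg

print(lp, tp, mp)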
Here, we have three expressions that represent three physical units in terms of three fundamental constants. We can now turn things around and obtain the expressions for the universal constants in terms of the Planck units, showing them as the constraints that define how our fundamental concepts must relate to each other at all times.
First, let’s get rid of the square root in the equations:
lp² = ℏG/c³ (m²) Planck length squared
tp² = ℏG/c⁵ (s²) Planck time squared
mp² = ℏc/G (Kg²) Planck mass squared
Now, let’s derive how the speed of light appears when written in Planck units. Considering that speed is distance over time, we only have to divide the expression for Planck length by the expression for Planck time.
While this might seem like a circular argument (since the Planck units were first conceived from the universal constants), the key insight here is that we should stop treating c or any other constant as a postcondition imposed on phenomena or spacetime, and instead see it as the precondition that allows their interplay at once:
The speed of light in Planck units
lp²/tp² = (ℏG/c³) / (ℏG/c⁵) Replace lp² by ℏG/c³ and tp² by ℏG/c⁵
lp²/tp² = (ℏGc⁵) / (ℏGc³) Reorder
lp²/tp² = c⁵ / c³ = c² Simplify
lp/tp = c Square root of both sides
c = lp/tp The speed of light = 299792458 m/s
- The speed of light describes the relation that the concepts of distance and duration must always obey for an observer. At the macroscopic level, this relation can be expressed in many ways depending on the units, but at the fundamental level, a discrete step in length always implies a discrete step in time (and vice versa), so all physical phenomena internally abide by this relation. We already knew c is the only spacetime ratio that allows an observer to understand that an instant interaction took place, and given that no phenomenon can be understood until all its defining interactions are detected, this means that no observer can ever perceive anything physical at rates or speeds greater than c, since entities and processes are nothing more than averages of two or more successive interactions.
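A short numerical check of this ratio (a sketch rather than a proof, since the Planck units were themselves built from c):

import math

hbar, c, G = 1.054571817e-34, 299792458, 6.67430e-11

lp = math.sqrt(hbar * G / c**3)   # Planck length (m)
tp = math.sqrt(hbar * G / c**5)   # Planck time (s)

print(lp / tp)                    # ~299792458 m/s: one discrete step in length per step in time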
Now, deriving G or h in Planck units is a bit more complex. We have to combine Planck length or Planck time with Planck mass in a system of equations. After some simple math, we can isolate ħ and solve for G. This yields two equivalent expressions that ultimately lead to a single expression depicting G only in terms of Planck density and Planck time:
The gravitational constant in Planck units
mp² = ℏc/G Planck mass squared
ℏ = mp²G/c Dirac constant from Planck mass squared
lp² = ℏG/c³ Planck length squared
lp² = (mp²G/c) G / c³ Replace ℏ by mp²G/c
lp² = mp²G² / c⁴ Simplify
lp = mp G / c² Square root of both sides
G = lp c² / mp Reorder and isolate G
Gravitational constant in different equivalent forms
G = lp c² / mp (m m²/s² / Kg) = (m³/Kg/s²) = (N·m²/Kg²)
G = tp c³ / mp (s m³/s³ / Kg) = (m³/Kg/s²) = (N·m²/Kg²)
G = (lp³/tp²) / (dp·lp³) Both expressions simplify to this one
G = 1 / (dp·tp²) Gravitational constant = 6.6743e-11 N·m²/Kg²
- The gravitational constant describes the relation that the concepts of density and duration must always obey for an observer. When we express G in terms of the Planck units, we get a simple equation (similar to c) that relates density and time instead of length and time, but where c showed that length is directly proportional to time, G shows that density is inversely proportional to time squared. Just like c tied spacetime to motion, G links spacetime to density, showing that higher energy density (whether through a reduction of volume or an increased energy content) leads to shorter duration. This shows the deep relationship between mass, density, acceleration, gravity and time dilation, while together c and G modulate the rates and proportions at which different phenomena manifest to observers.
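The same kind of numerical check works here, recovering G from Planck density and Planck time alone:

import math

hbar, c, G = 1.054571817e-34, 299792458, 6.67430e-11

lp = math.sqrt(hbar * G / c**3)   # Planck length (m)
tp = math.sqrt(hbar * G / c**5)   # Planck time (s)
mp = math.sqrt(hbar * c / G)      # Planck mass (kg)
dp = mp / lp**3                   # Planck density (kg/m³)

print(1 / (dp * tp**2))           # ~6.674e-11 N·m²/kg²: G recovered from density and time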
Now, to find out how the Planck constant (or the Dirac constant) appears in Planck units, we solve the same system as before, but isolating G and solving for ħ. Again, we can choose whether to combine the mass equation with the length or time equation, obtaining different expressions that convey the same meaning, which can be further simplified to a single equation expressing the Planck or Dirac constant only in terms of Planck length, time and density:
The Planck constant in Planck units
mp² = ℏc/G Planck mass squared
G = ℏc/mp² Gravitational constant from Planck mass squared
lp² = ℏG/c³ Planck length squared
lp² = ℏ (ℏc/mp²) / c³ Replace G by ℏc/mp²
lp² = ℏ² / (mp²·c²) Simplify
lp = ℏ / (mp·c) Square root of both sides
ℏ = lp mp c Reorder and isolate ℏ
h = 2π lp mp c Replace ℏ by h/2π and isolate h
Dirac constant in different equivalent forms
ℏ = lp mp c (m Kg m/s) = (Kg·m²/s² s) = (Kg·m²/s) = (J·s)
ℏ = tp mp c² (Kg m²/s² s) = (Kg·m²/s² s) = (Kg·m²/s) = (J·s)
ℏ = dp·lp³ lp²/tp Both expressions simplify to this one
ℏ = dp lp⁵ / tp Dirac constant (Kg/m³ m⁵ /s) = (Kg·m²/s) = (J·s)
Planck constant in different equivalent forms
h = 2π lp mp c (rad m Kg m/s) = (rad Kg·m²/s² s/rad) = (J/Hz)
h = 2π tp mp c² (rad Kg m²/s² s) = (rad Kg m²/s² s/rad) = (J/Hz)
h = 2π dp lp⁵ / tp Planck constant (rad Kg/m³/s² m⁵ s/rad) = (J/Hz)
- The Planck constant describes the relation that the concepts of density, distance and duration must always obey for an observer. By controlling the interplay between these three aspects at once, h appears more intricate and relevant than c or G alone, and we could think of these as partial or implicit relations that must also hold within h. The quantum of action seems to be the truly fundamental constant governing all aspects of physical presence and interaction in spacetime (angular orientation, length, density, volume, distance and duration), regulating what each observer perceives at any given time:
h = 2π lp dp vp lp / tp Planck constant contains all aspects of reality
The Planck constant represents the minimum “threshold of existence” any entity, region or process must exceed to become real to another, reflecting the conditions interactions must meet to unfold between them. Since every physical manifestation requires at least one quantum of action, this means it must therefore involve at least one instance of each fundamental aspect implicit in h. So our notions of space, time and quantity aren’t truly independent, as they’re all interrelated within h, the only real quantization that exists.
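A minimal numerical sketch confirming that the different arrangements above all reproduce the same quantum of action:

import math

hbar, c, G = 1.054571817e-34, 299792458, 6.67430e-11

lp = math.sqrt(hbar * G / c**3)   # Planck length (m)
tp = math.sqrt(hbar * G / c**5)   # Planck time (s)
mp = math.sqrt(hbar * c / G)      # Planck mass (kg)
dp = mp / lp**3                   # Planck density (kg/m³)

print(lp * mp * c)                # ~1.0546e-34 J·s: ħ as lp·mp·c
print(dp * lp**5 / tp)            # same value: ħ as dp·lp⁵/tp
print(2 * math.pi * lp * mp * c)  # ~6.626e-34 J/Hz: h = 2π·lp·mp·c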
Planck constant, energy and dynamics
In trying to understand the meaning of the fundamental constants, we’ve uncovered that there’s only one possible quantization, which all other physical relations must fulfill. So we’ll now illustrate how all the relevant energy and dynamical equations we use at the macroscopic level converge to the Planck or Dirac constant, revealing that the quantum of action, like entanglement, underpins everything that exists:
ħ is the relation energy and time must always obey for an observer
ℏ = mp c² tp Dirac constant
ℏ = mp·c² tp Tie c² to mp (mass-energy equivalence)
ℏ = Ep tp Replace mp·c² by Planck energy (Kg·m²/s²) = (J)
ħ is the relation momentum and length must always obey for an observer
ℏ = mp c² tp Dirac constant
ℏ = mp·c c·tp Tie one c to mp and another to tp
ℏ = pp lp/tp tp Replace mp·c by Planck momentum and c by lp/tp
ℏ = pp lp The tp factor multiplies and divides, so it cancels
h is the relation energy and frequency must always obey for an observer
h = 2π mp c² tp Planck constant
h = 2π mp·c² tp Tie c² to mp (mass-energy equivalence)
h = 2π Ep tp Replace mp·c² by Planck energy (Kg·m²/s²) = (J)
Ep = h / (2π·tp) Isolate Ep
Ep = h wp / 2π Replace 1/tp by Planck angular frequency (rad/s)
Ep = h fp Replace wp/2π by Planck frequency (Hz)
Ep = h fp Planck energy-frequency relation
h = Ep / fp Energy depends on frequency by Planck constant
h is the relation energy and wavelength must always obey for an observer
h = 2π lp mp c Planck constant
2π lp = h / (mp·c) Reorder (Compton wavelength for a Planck mass)
2π lp / c = h / (mp·c²) Divide both sides by c
2π lp / c = h / Ep Replace mp·c² by Planck energy (Kg·m²/s²) = (J)
Ep = h c / (2π·lp) Isolate Planck energy
Ep = h / (2π·tp) Replace c/lp by 1/tp
Ep = h wp / 2π Replace 1/tp by Planck angular frequency (rad/s)
Ep = h fp Replace wp/2π by Planck frequency (Hz)
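All of the relations in this block can be checked numerically at once. In the sketch below, fp follows the text’s convention of Planck frequency as wp/2π, that is, 1/(2π·tp):

import math

h = 6.62607015e-34                        # Planck constant (J/Hz)
hbar, c, G = 1.054571817e-34, 299792458, 6.67430e-11

lp = math.sqrt(hbar * G / c**3)           # Planck length (m)
tp = math.sqrt(hbar * G / c**5)           # Planck time (s)
mp = math.sqrt(hbar * c / G)              # Planck mass (kg)

Ep = mp * c**2                            # Planck energy, ~1.956e9 J
pp = mp * c                               # Planck momentum, ~6.52 kg·m/s
fp = 1 / (2 * math.pi * tp)               # Planck frequency per the convention above (Hz)

print(Ep * tp, pp * lp)                   # both ~1.0546e-34 J·s: ħ as energy·time and momentum·length
print(Ep, h * fp)                         # Ep = h·fp, the energy-frequency relation at the Planck scale
print(h / (mp * c), 2 * math.pi * lp)     # Compton wavelength of a Planck mass equals 2π·lp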
ħ (or h) represents the limit for Newton’s law at the Planck scale
ℏ = lp mp c Dirac constant
ℏ = Ep tp Dirac constant as Energy·time
ℏ = ℏ ℏ is equal to itself
mp c lp = Ep tp Replace ℏ on each side with a different expression
mp c / tp = Ep / lp Move tp to one side, and lp to the other
mp ap = Ep / lp Replace c/tp by Planck acceleration (m/s²)
mp ap = Fp Replace Ep/lp by Planck force (J/m)
mp ap = Fp Newton's law of motion for a Planck mass
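A quick numerical check of this limit, taking Planck acceleration as c/tp and Planck force as Ep/lp (equivalently c⁴/G):

import math

hbar, c, G = 1.054571817e-34, 299792458, 6.67430e-11

tp = math.sqrt(hbar * G / c**5)   # Planck time (s)
mp = math.sqrt(hbar * c / G)      # Planck mass (kg)

ap = c / tp                       # Planck acceleration, ~5.6e51 m/s²
Fp = c**4 / G                     # Planck force, ~1.21e44 N

print(mp * ap, Fp)                # both ~1.21e44 N: F = m·a holds at the Planck scale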
ħ (or G) represents the Schwarzschild metric at the Planck scale
G = lp c² / mp Gravitational constant
G mp / lp = c² Isolate c² in Gravitational constant
G mp / 2lp = c²/2 Divide both sides by 2
c²/2 = G mp / 2lp Schwarzschild solution for a Planck mass (m²/s²)
2lp = 2 G mp / c² Schwarzschild radius for a Planck mass (m)
The Dirac constant is the Schwarzschild metric at the Planck scale
mp² = ℏc/G Planck mass squared
G = ℏc/mp² Gravitational constant from Planck mass squared
c²/2 = (ℏc/mp²) mp / 2lp Replace G by ℏc/mp² in the Schwarzschild solution
c = ℏ / (mp lp) Simplify
lp mp c = ℏ Isolate ℏ
ℏ = lp mp c Dirac constant is the Schwarzschild solution
2lp = 2 (ℏc/mp²) mp / c² Replace G by ℏc/mp² in the Schwarzschild radius
lp = ℏ / (mp c) Simplify
lp mp c = ℏ Isolate ℏ
ℏ = lp mp c Dirac constant is the Schwarzschild radius
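Numerically, the Schwarzschild radius of a Planck mass does come out as two Planck lengths:

import math

hbar, c, G = 1.054571817e-34, 299792458, 6.67430e-11

lp = math.sqrt(hbar * G / c**3)   # Planck length (m)
mp = math.sqrt(hbar * c / G)      # Planck mass (kg)

rs = 2 * G * mp / c**2            # Schwarzschild radius of a Planck mass (m)
print(rs, 2 * lp)                 # both ~3.23e-35 m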
We can draw many conclusions from this set of relations at the Planck scale:
- Planck constant as uncertainty: h is a constant factor similar to c, but instead of denoting the spacetime ratio any instant interaction must satisfy to an observer, it accounts for the uncertainty any instant interaction represents to an observer. The Planck constant is the minimum degree of granularity or resolution by which an observer can make sense of a single interaction in its reference frame, and each aspect it includes represents the minimum uncertainty associated with the physical magnitude it embodies.
- Heisenberg Uncertainty Principles: The Dirac constant sets the lower limits for all Heisenberg uncertainty principles, given that we can pair the internal factors within ħ as conjugate variables (like energy and time, or momentum and position) that are Fourier transforms of each other by the way their definitions co-depend. We can arrange the internal components of ħ however we like, but one part will always compensate the other, so no partial aspect can get completely isolated.
- Mass and energy: We get that the Einstein mass-energy equivalence or the Compton wavelength of a Planck mass appear within the Planck constant, depending on how we arrange the internal factors of h, so the equivalence between mass and energy is always implied, no matter the phenomena. Although we always talk about massless photons, the Planck constant makes no distinction, and a photon with Planck energy is equivalent to a matter wave of Planck mass.
- Energy-frequency relation: The Planck constant also sets the upper limit for the Planck energy-frequency relation, which states that the energy of a photon is directly proportional to its frequency. No entity or process we observe could ever exceed this limit, since that would mean one of the interactions that compose it breaks the Planck constraints set for our reference frame.
- Newtonian dynamics: If we set the Planck constant equal to itself (to symbolize the equivalence between matter in motion and energy density in spacetime), we get the upper limit for Newton’s law of motion. At the Planck scale, Newton’s law asserts that only a Planck mass under Planck acceleration (from 0 to c in Planck time) would be equivalent to a photon traversing a Planck length, which means that the impossibility of matter ever quite reaching the speed of light at any other scale is dictated by h and already present within Newton’s law.
- Cosmological implications: Writing ħ as a function of G (or vice versa) gives the Schwarzschild solution or the Schwarzschild radius for a black hole of Planck mass, so both ħ and G encode the location within the gravitational field of a Planck mass where escape velocity is c. Thus, these expressions also pivot around the Dirac constant, relating density to a specific value of gravitational potential by the ratio between mass and distance, and much like the energy-time or momentum-position relations, this density-distance relation suggests an uncertainty principle that should be fulfilled at all times.
Planck constant and cosmology
The fact that all energy and dynamical equations ultimately lead to the Planck constant suggests that all the back and forth between matter, energy and spacetime is related to the constraints ħ imposes on interactions. These constraints are evident at small scales and energetic events, but they shape structure and behavior across the entire range of sizes up to cosmological scales.
By analyzing the frequencies and lengths interactions can take between different environments while fulfilling these conditions, we can find new explanations for phenomena, while also being able to discard some artifacts in our current theories. The unexpected links between the largest scales and the quantum world open up a new perspective for the universe, with far-reaching implications:
The Planck scale:
We’ve seen that ħ regulates the maximum lengths interactions can take in spacetime, depending on the mass or energy content (the number of constraints) present in a given region. As no interaction can ever represent less than a Planck length to an observer, no accumulation of matter or energy, however large or dense, could ever exceed Planck density in its reference frame. So the same constraints that prevent matter from reaching the speed of light ensure it never exceeds Planck acceleration or Planck density. No region or phenomenon, however extreme, can ever break the observer’s Planck scale if the observer exists to detect it.
So, in line with the principles of Special and General Relativity, the kinetic and potential energy densities within and around the observer set a specific granularity for its perception, defining its frame-dependent units, all while adhering to the universal constants. The unfolding interactions can have any length, rate or proximity in spacetime, yet as they’re bound by the universal constants, they always define a minimum scale that perfectly matches h, the observer’s minimal notion of uncertainty.
When the observer changes location, the conditions impacting its Planck units may vary, yet their ratios remain constant, so the observer consistently perceives its units and the constants they display as invariant. That’s why the observer feels its proper time always flows at the same rate, despite any time dilation effects, or its Planck scale is always the same size, despite the actual scale it takes in spacetime. The Planck scale isn’t universal, but subjective.
According to relativity, there can be no preferred scale set in spacetime because that would break Lorentz invariance, but the Planck scale is not set directly in spacetime, only within the observer’s subjective notion of spacetime. As the observer builds its idea of reality through interactions and exchanges that always adhere to the specific Planck constraints required by its environment, both the observer and the observed comply with frame invariance at all times.
Phenomena typically change compared to a relatively stable reference frame, but effects such as length contraction or spaghettification become apparent when internal units undergo drastic changes compared to external phenomena, as the observer changes its motion or environment at an unusual rate.
The lighter regions:
The lower the density in a region, the fewer the interactions that can reach a denser observer, as most interactions won’t fit its required Planck quantization. So the greater the density differences, the larger the mismatch between Planck scales, the fewer the ensuing interactions.
This explains why denser observers must postulate some form of missing energy, substance or property in spacetime to account for the observed dynamics in less time-dilated regions: only a fraction of the interactions that easily unfold within those regions can actually reach denser observers, so they deduce a missing cohesive agent acting there, more prevalent in lighter areas.
These “dark” interactions can even account for gravitational lensing, as they’re real interactions constraining how other interactions should unfold when passing through, while also explaining why the force of gravity seems to change with no apparent cause for the observer, as it can only detect the effects of these interactions indirectly.
The need for Dark Matter or modified gravity stems from an observational bias tied to the observer’s subjective reference frame and Planck units, with interactions that can’t unfold in its environment accounting for the observed galactic rotation anomalies, the Radial Acceleration Relation or the Tully-Fisher relation, all showing higher accelerations than expected in lighter regions.
The distant regions:
Regardless of their location, observers consistently perceive all other regions as redshifted. The farther the region, the lower the frequencies of light, in direct proportion to distance.
This effect would result from how energy density constrains unfolding interactions. The presence of particles in outer space (even when not directly interacting) alters how interactions can unfold, making them trace longer paths around particles rather than the straight paths they’d take in a perfect vacuum. These small detours add up, so the difference between Euclidean distances and real geodesics gets larger the longer the interactions, and the wavelength components corresponding to the non-existent short paths become impossible, causing light to redshift as photons self-interfere upon passing each particle.
So in addition to the gravitational redshifts and blueshifts imparted on light by massive objects “curving” spacetime, there’s a subtle redshift contribution as light navigates around the widespread distribution of particles “crumpling” spacetime. As a result, the observer always perceives spacetime to be warped with distance no matter location, which would solve Olbers’s paradox even for a steady state universe.
Cosmological redshift would be an interference effect induced by the presence of energy density, rather than the negative pressure of the intrinsic density of an expanding spacetime.
Relativity made it easy to explain phenomena through the warping of spacetime, so the idea of expansion came naturally for redshift. However, this concept didn’t really arise from a physical need of the universe, but from a technical limitation of the theory, and it was even modeled by reusing a free parameter introduced to fix previous theoretical considerations.
Upon the first detections of redshift, early explanations involved Tired Light or Doppler effects, but they led to inconsistencies and were later dismissed. General Relativity also faced challenges due to the single-valued nature of curvature in the gravitational potential and the metric tensor, since observers mutually perceive each other as redshifted, yet no place can curve two ways at once. To address this, lambda, a term initially introduced by Einstein to achieve a static universe, was adjusted for spacetime to expand, stretching light and allowing galaxies to separate faster than c.
But to us, the concept of expansion is more mathematical than physical, and lambda would just be the corrective factor relativity needs to invoke because it treats spacetime as flat even in the presence of particles, ignoring lensing effects below the planetary scale (let alone at the quantum level), so it slowly diverges from reality by accumulating tiny projection errors akin to mapping a sphere onto a plane. The lack of this level of detail makes relativity miss the physical cause of redshift, which must be reintroduced elsewhere, using lambda to correct the accounting.
The current ΛCDM model still relies on this mechanism in the form of Dark Energy, which infuses new space into the universe, fueling expansion at the constant rate expressed in Hubble’s law. But to us, the Hubble constant would just be another misleading factor translating the discrepancies between actual paths and Euclidean distances into fictitious recessional velocities between galaxies.
The smooth spacetime depicted in General Relativity is an idealization of a highly fractalized reality. Relativity literally mistook the map for the territory, so it provided us with a wrinkled map in constant need of flattening.
The denser regions:
We saw that regions of different density have more difficulty exchanging interactions due to the mismatch between their Planck scales, and that the presence of different densities between them introduces additional constraints, further limiting their interconnection.
But for some highly dense regions, any conceivable path connecting to the observer involves combinations of densities and distances that exceed the observer’s Planck constraints entirely, preventing any interactions from forming. This defines regions of spacetime completely isolated from the observer, enclosed by the black hole event horizons it perceives at different scales and distances throughout the universe.
We usually calculate where a black hole event horizon will develop by applying the Schwarzschild radius for its mass from its center, but this is just a practical approximation. By definition, nothing within black holes can affect the observer, so all their properties (size, mass, distance, charge, rotation…) derive from relations and phenomena the observer can perceive. The appearance of black holes is defined by the behavior of the rest of the universe up to the horizon.
However, event horizons are not physical. They’re optical illusions, akin to rainbows. When the observer changes location, they shift in size and proximity as densities and distances change, always delineating where interactions can’t span to reach the observer. So event horizons aren’t places where spacetime breaks, but where Planck constraints prevent an inconsistent picture of reality.
Another misconception has to do with their gravitational pull. This is commonly attributed to the black hole’s mass, but this mass can’t directly drag the observer, as it’s beyond its physical reach. As happened with their properties, the behavior of black holes is set by everything outside, so the observer isn’t drawn directly by the black hole’s mass, but by the effects it creates on its environment. The warpings black holes create are again mediated by “dark” interactions the observer can’t detect, yet compel it to move due to the unbalanced conditions they imprint on its surroundings.
The absence of a counterforce to gravity seems to imply that the formation of a central singularity is inevitable. However, the increasing time dilation effects experienced at ever higher energy densities would prevent the singularity from forming by postponing it indefinitely. Just as matter can’t reach the speed of light due to the infinite energy this would require, matter falling into a black hole can’t form a singularity, due to the infinite time this would require.
The falling observer never crosses the event horizon it perceives, as each step the observer takes pushes the horizon farther away, while its ever higher density environment makes it difficult to interact with other regions due to increasing Planck mismatches, which also lead the observer to perceive a universe ever closer to the Planck scale. But as the radial time dilation effects within a black hole result in a distinct time-keeping for each layer, what for the observer appears to be a quick trip to the central singularity never happens before the unthinkably slow evaporation process from outside catches up. Black holes are huge “energy waterfalls” that take 10⁵⁰ to 10¹⁰⁰ years to fall as perceived from the outside, but evaporate in microseconds to days as perceived from the inside.
Nothing emerging to reality can break Planck constraints, so singularities and paradoxes are physically impossible, because, by adhering to the universal constants, any phenomenon unfolds to reality in perfect sync with what sustains its existence. Black holes don’t defy the laws of physics, only the concepts of distance, density and duration we are used to.
The cosmological horizon:
As we’ve seen, the energy density within a given path plays a crucial role in shaping the observer’s reality. When the mass-to-length ratio along a path reaches a certain threshold, the Schwarzschild expression indicates that Planck limits are breached. But due to the universe’s average density and uniformity implied by the cosmological principle, there’s always a distance where this condition is met, and all points satisfying the c²/2 condition would define the cosmological horizon all around the observer, a set of distances no interaction can cover without breaking the Planck constraints set for its reference frame.
This means that every observer, no matter location, exists at the center of its own black hole, larger or smaller based on the density conditions of its surrounding environment. So, as hinted when discussing redshift, our concept of gravitational potential, linked to the metric tensor, is not an objective quality of the universe, but a subjective map each observer creates, based on location. Moreover, it’d be defined backwards, as it’s set to be zero at an infinite distance, yet no observer ever faces infinity, since it’s energy density which effectively dictates where the universe “ends” for each observer. The gravitational potential, like the Schwarzschild radius, should always be computed from the observer outward, which explains why the calculation of a black hole event horizon from its center was wrong, since this doesn’t give us the event horizon for an outside observer, but the cosmological horizon for an inside observer.
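As a rough numerical sketch of this idea, we can ask at what distance R a uniform average density encloses enough mass for R to equal its own Schwarzschild radius 2GM/c², i.e. to satisfy the c²/2 condition. The density value below is an assumption, close to the critical density usually quoted in cosmology:

import math

c, G = 299792458, 6.67430e-11
rho  = 8.6e-27                             # assumed average density (kg/m³), roughly the usual critical density

# Solve R = 2·G·(4/3·π·R³·rho)/c² for R, the distance at which the enclosed mass meets the c²/2 condition
R = c * math.sqrt(3 / (8 * math.pi * G * rho))

print(R, R / 9.461e15)                     # ~1.4e26 m, i.e. roughly 14 billion light-years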
Besides these misconceptions, our model can address two other problems that arise at cosmological distances.
The first is the flatness problem. For standard cosmology, the convergence of the various density components defining the shape of the universe to make it appear nearly flat at this moment in history is a matter of chance requiring very specific initial conditions. But in our model, no fine-tuning is required, since the universe always conforms to the Schwarzschild solution, so it appears of the right age, size and density to match the observer’s Planck constraints, regardless of the observer’s epoch, density, or location.
The other is the horizon problem. Regions on opposite sides of the universe appear too similar, when, according to standard cosmology, they had no time to exchange information or energy. But in fact, all regions about to break the observer’s Planck constraints will already look alike, so there may be no need for past exchanges or inflation, since the presence of density along the paths to the observer gradually smoothens the information for very distant and very dense phenomena, equalizing their features through interference, time dilation and redshift, until reaching the Planck limit. This mechanism could also link contested phenomena, like redshift quantizations and quasar controversies to the enforcement of Planck constraints.
In our model, spacetime has no density, so it can’t expand nor contract, and instead of a universe propelled by inflation or Dark Energy, we propose a subjective black hole cosmology, where each epoch in the ΛCDM model would correspond to a unique density environment that would dictate the properties (age, size, density, temperature…) of each observer’s reality.
In this “lava lamp” model of the universe, the different density regions exchange energy or information as dictated by the universal constants, and each observer exists at the center of its own “flat hole”, merging with those of others. The cosmos would evolve both sequentially in time and simultaneously in space, with all the processes of the different epochs coexisting, albeit in different energy density regions of spacetime.
The true end of the universe:
All observers, no matter their circumstances, have a finite perceptual range that spans from the Planck scale to the cosmological horizon.
However, the universe doesn’t really end at the cosmological horizon, but effectively at about twice that distance, at a point where phenomena can’t exchange energy or information with anything the observer can perceive. This new boundary completely encloses all possible influences and forces that could affect its reality in any physical way, marking the true end of the world for the observer, which, in order to correctly account for the stability and behavior of the universe, must take it to be roughly twice as old or wide as perceived.
We saw that Einstein created lambda to prevent the universe from collapsing, but General Relativity never considered how this unobservable region might counterbalance the gravitational pull of phenomena within the observable universe, preventing its gravitational collapse without the need for lambda. The universe wouldn’t collapse because there’s a kind of tensional integrity at each location, with phenomena within and beyond the cosmological horizon balancing each other gravitationally. This might even account for Dark Flow as the influence this region exerts through exchanges we can’t physically detect, akin to the “dark” interactions we posited for Dark Matter.
So the concept of spacetime density was never required, and lambda’s misuse was twofold: first to compensate for a collapse that never happens, and then to incorrectly explain redshift as expansion. Lambda, indeed, turned out to be Einstein’s “biggest blunder”.
But the most striking consequence of this model is that it doesn’t matter if the universe is infinite or not, because, as long as it has some density, it physically ceases to exist at a certain distance. So we may never know whether the universe had a beginning, since a finite distance always implies a finite time, and the 13.8 billion years we think represent the age of the universe may only be the observational limit its average density allows us to perceive. Even doubling that, the universe may be eternal or infinite without us having the means to prove it.
This could explain the large scale primordial fluctuations present in the CMB without the need for inflation, or the presence of regular galaxies at large cosmological distances, since there’d be no time limit for these patterns to develop.
Complete view of the universe:
Relativity requires a continuous spacetime, but we know our world is quantized. According to our new understanding of reality, the universe has no choice but to be discrete, since every entity or process is just a collection of instant but lightlike interactions. In our model, the underlying “conceptual” spacetime is always flat and continuous, but each unfolding interaction is discrete, while also representing a difference or change to an observer.
Our perspective regards spacetime not as a physical entity, but as a reflection of how physical entities relate, so matter can’t physically bend spacetime to tell other matter how to move; rather, patterns that have developed condition how other patterns can unfold, by the mere fact of having existed.
To the lightest entities in deep space at rest with the CMB, the observable universe appears vast, ancient and slow. Events in other similar low-density regions appear normal, while high-density regions seem to unfold in slow motion because interactions are more constrained there. They can explain most of the dynamics and lensings they observe with normal matter and radiation, but there are regions that, despite their distance, can’t connect due to their density, and regions that, despite their density, can’t connect due to their distance.
On the other hand, for entities moving at relativistic speeds near supermassive black holes, the observable universe appears compact, young, and fast. Events in similar high-density regions look normal, while the low-density regions they can observe seem weirdly energetic because interactions aren’t as constrained there. They can’t explain all dynamics and lensings with normal matter and radiation, often having to rely on exotic influences and forces, but as said before, some regions can’t connect either due to density (despite their distance), or distance (despite their density).
Energy is relative and depends on the observer’s environment, so what appears as heat or useless energy to a low-density observer means high-frequency, usable energy to a high-density observer. Even in the form of Hawking radiation or quantum fluctuations, energy is nothing more than interactions unfolding in spacetime fulfilling some constraints, so what drives reality might never cease to exist. The heat death of the universe or the Big Bang may just be two specific states in a never ending chain of events that doesn’t care about the duration, quantity, complexity, size, compactness or pace of the ongoing interactions, entities and processes configuring the cosmos at a given time.
Reality arises from chance events, yet the range of possibilities is set by past conditions, so once a state materializes, reality can’t act as if it never happened, since those conditions are present in the spacetime intervals other interactions trace from then on. A seemingly complex state can take eons to develop, but once it happens, the universe is forced to take it into account, which addresses the Boltzmann brain argument by asserting that fluctuations, rather than competing for existence, must build upon each other.
Now that we know that the same constraints present at the Planck scale govern cosmological scales, we can make sense of the Dirac large numbers hypothesis. Electromagnetism and gravity share similar expressions, and the atomic and cosmological scales share similar size-to-force ratios because everything physical must conform to the universal constants, yet some physical laws deviate from perfect inverse square laws because the presence of matter and energy influences both the propagation of light and the perception of gravity.
We can now also understand the root of the vacuum catastrophe. Quantum Mechanics and standard cosmology estimate values of vacuum energy that differ by 120 orders of magnitude. The quantum approach depicts the entire universe as a uniform environment at Planck critical density, suggesting that any observer would perceive a Planck-scale universe, which, judging by our own, is not the case, and we already saw that lambda, defined as the inherent density of spacetime, should be zero. So it’s no mystery that comparing the most extreme scenario with an incorrect guess for the opposite case yields “the worst prediction in physics”.
Conclusion
With Special Relativity, Einstein dethroned absolute space and time, revealing them as mathematical abstractions. He also eliminated the need for the luminiferous aether, since his theory of relative relations made an undetectable physical medium redundant. But in developing General Relativity, he departed from a complete Machian view of the universe, and endowed spacetime with new properties and abilities, compromising what he had achieved, since it restored the concept of spacetime as being real.
Despite the remarkable scientific and technological progress made in the last century, our comprehension of spacetime is still reminiscent of the Middle Ages. Black hole singularities are the “here be dragons” of our time, and we still resort to the drain analogy to “explain” what happens to spacetime beyond an event horizon, as if sweeping it under the rug.
The idea of spacetime as an independent backdrop for phenomena is so ingrained in our minds that we think they can diverge in ways nature can’t handle. But one can’t break the other, since they define each other, as General Relativity tries to convey through its convoluted field equations. The universe is not a play, but a performance, so the idea of a perfectly crafted initial state evolving through the script of the laws of nature must yield to a perspective where chance interactions align with the consistency constraints required at each moment in order to emerge. We won’t get rid of the paradoxes and inconsistencies that plague our theories until we understand that this interdependence between phenomena and spacetime leaves no room for error.
Is it coincidence that the various densities in the ΛCDM model almost perfectly agree with the density for a flat universe of that age? That the sum of density components at any epoch almost yields the correct size and age for a flat universe of that density? That the size ratios between galactic voids, galactic rims and galactic cores closely match the required Dark Energy, Dark Matter and baryonic matter components to explain redshift, MOND and Newtonian dynamics? That the speed of light, when divided by the radius of a flat universe with the age of our own, yields a Hubble constant at the midpoint of the Hubble tension? That the CMB is the most perfect black-body spectrum ever measured?
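At least one of these coincidences is easy to check arithmetically. Assuming the radius in question is simply c times 13.8 billion years (an interpretation, not something stated above), dividing c by that radius is the same as inverting the age, which converts to a familiar figure:

c   = 299792458                  # speed of light (m/s)
age = 13.8e9 * 3.156e7           # 13.8 billion years in seconds
R   = c * age                    # assumed radius: light-travel distance over that age (m)
H   = c / R                      # equals 1/age, in 1/s

print(H * 3.0857e19)             # ~70.9 km/s/Mpc, sitting between the two measured Hubble values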
All these clues point to the relevance of the Planck constant at all scales and densities, suggesting that the world is not built from the top down, with a separate quantity for each aspect of reality, but from the bottom up, with all our macroscopic concepts emerging from the same building block that accounts for the minimum uncertainty the measurement process implies.
Whenever nature seems to require perfect coincidences or adjustments to show balance throughout the universe, we can rest assured it has to do with our rest frame and Planck units. But we are so self-centered in our experience of the world that we either presume nature conspires behind our backs or bends to our will. We tend to believe that what escapes our means of detection can’t comprise reality, that what comprises reality can’t escape our means of detection, and that reality even waits in superposition until we decide to make it real through our detections.
Most problems in contemporary physics still derive from this anthropocentric view of the universe, whether it’s by giving general validity to our subjective perspective, by switching between different perspectives without realizing it, by merging different perspectives into an unphysical one, by dealing with issues that only arise from an incomplete or incorrect perspective, or by thinking our perspective is the right one when others should also be included.
The universe we experience must reflect the inner workings of reality, but every time we face a puzzling observation, we take the simplest theoretical solution, often trapping ourselves for what lies ahead, without realizing that many of the abstractions we use (massless quanta, point particles, time instants, rest masses…) are only partial approximations that consider some elements irrelevant, forgetting that what they describe cannot really exist without what they leave out.
We simplify by neglecting some parts while highlighting others, yet we’re shuffling the same pieces in different ways, complicating things with implicit misdirections, when we should carefully consider which mechanisms and concepts are key to describing reality, rather than engage in new theoretical developments at the slightest complication.
The typical approach in physics is to introduce new concepts to explain phenomena, but all enigmatic cosmological conundrums can be explained by removing just one thing: possible paths in spacetime through interference. In regions devoid of energy, reality can unfold with unlimited possibilities, but nothing forces it to do so. Paradoxically, constraining this freedom gives rise to the wide variety of behavior and phenomena we observe within the cosmos.
Space, time, matter and energy are just partial aspects of the same intricate system, where all interactions satisfy c asserting their instant nature, all entities and processes satisfy G to balance concentrations and flows, and all patterns satisfy h as required by all other patterns supporting their existence. When one element appears to be out of bounds, another must yield in order to conform to the universal constants, for they are the embodiment of a consistent reality.
Einstein could only unify space and time once he understood how they relate through c. Now it’s time to unify matter, energy and spacetime by understanding how they relate through h.