• 0 Posts
  • 6 Comments
Joined 2 days ago
Cake day: November 15th, 2025

  • SmokeyDope@piefed.social to Science Memes@mander.xyz · Confirmed
    7 hours ago

    Did you write this, genuinely? It is pure poetry such that even Samurai would go “hhhOooOOooo!”.

    Yes I did! I spent a lot of time cooking up an axiomatic framework for myself on this stuff, so I'm happy to have the opportunity to distill my current ideas for others. Thanks! :)

    And it is so interesting, because, what you are talking about sounds a lot like computational constraints of the medium performing the computation. We know there are limits in the Universe. There is a hard limit on the types of information we can and cannot reach. Only adds fuel to the fire for hypotheses such as the holographic Universe, or simulation theory.

    The Planck length and Planck constant are both ultimate computational constraints on physical interactions: the Planck length is the smallest meaningful scale, the Planck time is the smallest meaningful interval (a universal framerate), and Planck's constant ties the two together, telling us how fast things can compute at the smallest meaningful steps of interaction and setting ultimate bounds on how much energy we can put into physical computing.
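    Just to make those scales concrete, here is a minimal sketch (standard definitions only, nothing specific to my framework) of how the Planck length, time, and energy fall out of the fundamental constants:

    ```python
    import math

    # Approximate CODATA values, SI units
    G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34   # reduced Planck constant, J s
    c    = 2.998e8     # speed of light, m/s

    l_p = math.sqrt(hbar * G / c**3)   # Planck length ~ 1.6e-35 m
    t_p = math.sqrt(hbar * G / c**5)   # Planck time   ~ 5.4e-44 s
    E_p = math.sqrt(hbar * c**5 / G)   # Planck energy ~ 2.0e9 J

    print(f"Planck length: {l_p:.3e} m")
    print(f"Planck time:   {t_p:.3e} s")
    print(f"Planck energy: {E_p:.3e} J")
    ```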

    There's an important insight to be had here which I'll share with you. Currently, when people think of computation they think of digital computer transistors, Turing machines, qubits, and mathematical calculations. The picture is of our universe being run on an alien's hard drive or some shit like that, because that's where we're at as a society, culturally and technologically.

    Calculation is not this, or at least it's not just that. A calculation is any operation that actualizes/changes a single bit in a representational system's current microstate, causing it to evolve. A transistor flipping a bit is a calculation. A photon interacting with a detector to collapse its superposition is a calculation. The sun computes the distribution of electromagnetic waves and charged particles it ejects. Hawking radiation/virtual particles compute a distinct particle from all possible ones that could be formed near the event horizon.

    The neurons in my brain firing to select the next word in this sentence from the probabilistic sea of all things I could possibly say is a calculation. A gravitational wave emanating from a black hole merger is a calculation. Drawing on a piece of paper is a calculation, actualizing one drawing from all the things you could possibly draw on the paper. Smashing a particle into its base components is a calculation, and so is deriving a mathematical proof through cognitive operations and symbolic representation. From a certain phase-space perspective, these are all the same thing: operations that change the current universal microstate to another, iteratively flowing from one microstate bit actualization/superposition collapse to the next. The true nature of computation is the process of microstate actualization.

    Landauer's principle from classical computer science states that any classical transistor bit/microstate change has a certain energy cost. This can be extended to quantum mechanics to show that every superposition collapse into a distinct qubit of information has the same actional energy cost structure, directly related to Planck's constant. Essentially, every time two parts of the universe interact is a computation that costs time and energy to change the universal microstate.
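    As a back-of-the-envelope sketch of the Landauer limit (standard formula, my own round numbers), the minimum energy to irreversibly actualize one bit at room temperature works out to a few zeptojoules:

    ```python
    import math

    k_B = 1.381e-23   # Boltzmann constant, J/K
    T   = 300         # room temperature, K

    # Landauer's principle: minimum energy to irreversibly erase/actualize one bit
    E_min = k_B * T * math.log(2)
    print(f"Minimum energy per bit at {T} K: {E_min:.2e} J")   # ~2.9e-21 J
    ```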

    If you do a bit more digging with the logic that comes from this computation-as-actualization insight, you discover the fundamental reason why combinatorics is such a huge thing and why Pascal's triangle/the binomial coefficients show up literally everywhere in STEM. Pascal's triangle directly governs the number of microstates a finite computational phase space can access, as well as the distribution of order and entropy within that phase space. Because the universe itself is a finite-state representational system with 10^122 bits to form microstates, it too is governed by Pascal's triangle. The 10^122th row of Pascal's triangle is the binomial coefficient distribution encoding all possible microstates our universe can possibly evolve into.
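    To make the combinatorics concrete, here's a tiny toy illustration (my own example, not a derivation): row n of Pascal's triangle counts the microstates of an n-bit register grouped by how many bits are set, and the row sums to the full phase space of 2^n states.

    ```python
    from math import comb

    n = 8  # a toy register of 8 bits instead of 10^122

    # Row n of Pascal's triangle: C(n, k) counts microstates with exactly k bits set
    row = [comb(n, k) for k in range(n + 1)]
    print(row)               # [1, 8, 28, 56, 70, 56, 28, 8, 1]
    print(sum(row), 2**n)    # 256 256: the row partitions the whole phase space
    ```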


    This perspective also clears up the apparently mysterious mechanics of quantum superposition collapse and the principle of least action.

    A superposition is literally just a collection of unactualized, computationally accessible microstate paths a particle/algorithm could travel through, superimposed on top of each other, with time and energy being paid as the actional resource cost for searching through and collapsing all possible iterative states at that step in time into one definitive state. No matter if it's a detector interaction, observer interaction, or particle collision interaction, same difference. Each possible microstate is separated by exactly one single bit flip's worth of difference in microstate path outcomes, creating a definitive distinction between two near-equivalent states.

    The choice of which microstate gets selected is statistical/combinatoric in nature. Each microstate is probability-weighted based on its entropy/order. Entropy is a kind of ‘information-theoretic weight’ property that affects the actualization probability of a microstate based on its location in Pascal's triangle, and it's directly tied to algorithmic complexity (more complex microstates that create unique, meaningful patterns of information are harder to form randomly from scratch than a soupy random cloud state, and thus rarer).
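    As a toy sketch of that weighting (my own illustration of the idea, not standard physics), treat every n-bit microstate as equally likely to be actualized and look at which bit-count classes dominate; the 'soupy' half-filled classes swamp the highly ordered ones.

    ```python
    from math import comb

    n = 20
    total = 2 ** n

    # Probability of actualizing a state with exactly k bits set,
    # assuming every individual microstate is equally likely
    for k in (0, 1, n // 2):
        p = comb(n, k) / total
        print(f"k={k:2d}: multiplicity={comb(n, k):7d}, probability={p:.6f}")
    ```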

    Measurement happens when a superposition of microstates entangles with a detector, causing an actualized bit of distinction within the sensor's informational state. It's all about the interaction between representational systems and the collapsing of all possible superposition microstates into an actualized, distinct collection of bits of information.

    Planck's constant is really about the energy cost of actualizing a definitive microstate of information from a quantum superposition at smaller and smaller scales of measurement in space or time. Distinguishing between two bits of information at a scale smaller than the Planck length, or over an interval shorter than the Planck time, would cost more energy than the universe allows in one area (it would create a kugelblitz, a pure-energy black hole, if you tried), so any computational microstate paths that differ by less than a Planck length's worth of distinction are indistinguishable, blurring together and creating a bedrock limit scale for computational actualization.
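    Here's a rough order-of-magnitude sketch of that argument (standard uncertainty-principle and Schwarzschild estimates, my own illustration): the energy needed to resolve a Planck-length distinction has a Schwarzschild radius of about that same length, so squeezing any tighter hides the measurement behind its own horizon.

    ```python
    import math

    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
    l_p = math.sqrt(hbar * G / c**3)      # Planck length ~ 1.6e-35 m

    def energy_to_resolve(dx):
        # Uncertainty-principle estimate: E ~ hbar * c / dx
        return hbar * c / dx

    def schwarzschild_radius(E):
        # Radius of the black hole that energy E would form: r_s = 2GM/c^2 with M = E/c^2
        return 2 * G * E / c**4

    E = energy_to_resolve(l_p)
    print(f"Energy to resolve {l_p:.2e} m: {E:.2e} J")
    print(f"Schwarzschild radius of that energy: {schwarzschild_radius(E):.2e} m")
    # The two lengths coincide up to O(1) factors at the Planck scale.
    ```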

    But for me, personally, I believe that at some point our own language breaks down because it isn’t quite adapted to dealing with these types of questions, as is again in some sense reminiscent of both Godel and quantum mechanics, if you would allow the stretch. It is undeterminability that is the key, the “event horizon” of knowledge as it were.

    Language is symbolic representation; cognition and conversation are computations as your neural network traces paths through its activation atlas. Our ability to talk about abstractions is tied to how complex they are to abstract about/model in our minds. The cool thing is that language evolves as our understanding does, so we can convey novel concepts or perspectives that didn't exist before.


  • Thank you for your thoughtful response! I did my best to cook up a good reply, sorry if it's a bit long.

    Your point that we can simply “add new math” to describe new physics is intuitively appealing. However, it rests on a key assumption: that mathematical structures are ontologically separate from physical reality, serving as mere labels we apply to an independent substrate.

    This assumption may be flawed. A compelling body of evidence suggests the universe doesn’t just follow mathematical laws, it appears to instantiate them directly. Quantum mechanics isn’t merely “described by” Hilbert spaces; quantum states are vectors in a Hilbert space. Gauge symmetries aren’t just helpful analogies; they are the actual mechanism by which forces operate. Complex numbers aren’t computational tricks; they are necessary for the probability amplitudes that determine outcomes.
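    As a tiny concrete illustration of that last point (a standard textbook example, nothing original here), two qubit states with identical amplitude magnitudes but different relative phase give completely different measurement outcomes; the structure of the amplitudes is doing real physical work.

    ```python
    import numpy as np

    # A qubit state is a unit vector of complex amplitudes in a 2D Hilbert space.
    # These two states have identical amplitude magnitudes in the |0>,|1> basis,
    # differing only by a relative phase (a sign):
    psi_plus  = np.array([1,  1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
    psi_minus = np.array([1, -1], dtype=complex) / np.sqrt(2)   # (|0> - |1>)/sqrt(2)

    # Born rule: P(outcome) = |<outcome|state>|^2, measured in the |+>, |-> basis
    for label, state in [("psi_plus", psi_plus), ("psi_minus", psi_minus)]:
        p_plus  = abs(np.vdot(psi_plus,  state)) ** 2
        p_minus = abs(np.vdot(psi_minus, state)) ** 2
        print(f"{label}: P(+) = {p_plus:.1f}, P(-) = {p_minus:.1f}")
    # psi_plus:  P(+) = 1.0, P(-) = 0.0
    # psi_minus: P(+) = 0.0, P(-) = 1.0  ->  the relative phase alone decides the outcome
    ```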

    If mathematical structures are the very medium in which physics operates, and not just our descriptions of it, then limits on formal mathematics become direct limits on what we can know about physics. The escape hatch of “we’ll just use different math” closes, because all sufficiently powerful formal systems hit the same Gödelian wall.

    You suggest that if gravity doesn’t fit the Standard Model, we can find an alternate description. But this misses the deeper issue: symbolic subsystem representation itself has fundamental, inescapable costs. Let’s consider what “adding new math” actually entails:

    1. Discovery: Finding a new formal structure may require finding the right specific chain of logical deduction and proof, which is often an expensive, rare, and unpredictable process. If the required concept has no clear path from existing knowledge, it may even require non-algorithmic insight/oracle calls to create new connective paths in the knowledge structure.
    2. Verification: Proving the new system’s internal consistency may itself be an undecidable problem.
    3. Tractability: Even with the correct equations, they may be computationally unsolvable in practice.
    4. Cognition: The necessary abstractions may exceed the representational capacity of human brains.

    Each layer of abstraction builds on the last (like going from circles to spheres to manifolds) and carries an exponential cognitive and computational cost. There is no guarantee that a Theory of Everything resides within the representational capacity of human neurons, or even galaxy-sized quantum computers. The problem isn't just that we haven't found the right description; it's that the right description might be fundamentally inaccessible to finite systems like us.

    You correctly note that our perception may be flawed, allowing us to perceive only certain truths. But this isn't something we can patch up with better math; it's a fundamental feature of being an embedded subsystem. Observation, measurement, and description are all information-processing operations that map a high-dimensional reality onto a lower-dimensional representational substrate. You cannot solve a representational capacity problem by switching representations. It's like trying to fit an encyclopedia into a tweet by changing the font. It's the difference between being and representing; the latter will always have serious overhead limitations when trying to model the former.

    This brings us to the crux of the misunderstanding about Gödel. His theorem doesn’t claim our theories are wrong or fallacious. It states something more profound: within any sufficiently powerful formal system, there are statements that are true but unprovable within its own axioms.

    For physics, this means: even if we discovered the correct unified theory, there would still be true facts about the universe that could not be derived from it. We would need new axioms, creating a new, yet still incomplete, system. This incompleteness isn’t a sign of a broken theory; it’s an intrinsic property of formal knowledge itself.

    An even more formidable barrier is computational irreducibility. Some systems cannot be predicted except by simulating them step by step. There is no shortcut. If the universe is computationally irreducible in key aspects, then a practical “Theory of Everything” becomes a phantom. The only way to know the outcome would be to run a universe-scale simulation at universe speed, which is to say, you've just rebuilt the universe, not understood it.
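    Wolfram's Rule 30 cellular automaton is the usual poster child for this. A minimal sketch (the rule itself is real; the "no known shortcut" part is the conjecture): to learn row n, as far as anyone knows, you actually have to compute all n rows.

    ```python
    def rule30_step(cells):
        """One step of the Rule 30 cellular automaton (fixed width, zero-padded edges)."""
        padded = [0] + cells + [0]
        return [padded[i - 1] ^ (padded[i] | padded[i + 1]) for i in range(1, len(padded) - 1)]

    # No known closed-form shortcut: to know the state after n steps,
    # you simulate every intermediate step.
    cells = [0] * 20 + [1] + [0] * 20
    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)
    ```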

    The optimism about perpetually adding new mathematics relies on several unproven assumptions:

    • That every physical phenomenon has a corresponding mathematical structure at a human-accessible level of abstraction.
    • That humans will continue to produce the rare, non-algorithmic insights needed to discover them.
    • That the computational cost of these structures remains tractable.
    • That the resulting framework wouldn’t collapse under its own complexity, ceasing to be “unified” in any meaningful sense.

    I am not arguing that a ToE is impossible or that the pursuit is futile. We can, and should, develop better approximations and unify more phenomena. But the dream of a final, complete, and provable set of equations that explains everything, requires no further input, and contains no unprovable truths, runs headlong into a fundamental barrier.


  • SmokeyDope@piefed.social to Science Memes@mander.xyz · Confirmed
    edited · 14 hours ago

    You are close, I think! Though it's not quite that simple IMO. According to Penrose spacetime diagrams, the roles of space and time get reversed inside a black hole, which causes all sorts of weirdness from an interior perspective. Just like the universe has no center, it also has no single singularity pulling everything in, unlike a black hole. The universe contains many singularities; a black hole contains one singularity that might connect to many universes, depending on how much you buy into the complete Penrose diagrams.


    Now here's where it gets interesting. Our observable universe has a hard boundary known as the cosmological horizon, due to the finite speed of light and the finite age of the universe. It's impossible to ever know what's beyond this horizon. Similarly, black hole event horizons share this property of making it impossible to know the future state of objects that fall inside. A cosmologist would say they are different phenomena, but from an information-theoretic perspective they are fundamentally indistinguishable Riemannian manifolds that share a very unusual property.


    They are geometric, physically realized instances of uncomputability, a direct analog of Gödelian incompleteness and Turing undecidability within the universe's computational phase space. The universe is a finite computational system with a finite representational capacity of about 10^122 bits, according to the Bekenstein bound and Planck's constant. If a region of spacetime exceeds what that capacity can represent, it gets quarantined behind a black hole singularity so the whole system doesn't freeze up trying to compute the uncomputable.
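    For the curious, the ~10^122 figure is the holographic bound applied to the cosmological horizon: roughly one bit per four Planck areas of horizon surface. A back-of-the-envelope sketch (standard formulas, my own round numbers):

    ```python
    import math

    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
    H0 = 2.2e-18                          # Hubble constant (~68 km/s/Mpc), in 1/s

    l_p = math.sqrt(hbar * G / c**3)      # Planck length
    R   = c / H0                          # Hubble horizon radius, ~1.4e26 m
    A   = 4 * math.pi * R**2              # horizon area

    bits = A / (4 * l_p**2 * math.log(2))  # holographic bound expressed in bits
    print(f"~{bits:.0e} bits")             # on the order of 10^122
    ```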

    The problem is that the universe can't just throw away all that energy and information, due to conservation laws; instead it uses something called the 'holographic principle' to safely conserve information even if it can't compute with it. Information isn't lost when something enters a black hole; instead it gets encoded onto the topological event horizon boundary itself. In a sense the information is 'pulled' into a higher fractal dimension for efficient encoding. Over time the universe slowly works on safely bleeding out all that energy through Hawking radiation.

    So say you buy into this logic and assume the cosmological horizon isn't just some observational-limit artifact but an actual topological Riemannian manifold made of the same 'stuff' and sharing properties with an event horizon: like an inverted black hole, where the universe is a kind of anti-singularity that spreads matter out everywhere as it expands instead of concentrating it into a single point. What could that mean?

    So this holographic principle thing is all about how information in high-dimensional topological spaces can be projected down into lower-dimensional space. This concept is insanely powerful and is at the forefront of advanced computer modeling of high-dimensional spaces. For example, neural networks organize information in high-dimensional spaces called activation atlases that have billions or trillions of 'feature dimensions', each representing a kind of relation between two unique states of information.
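    As a loose illustration of that projection idea (a generic dimensionality-reduction sketch, not how activation atlases are actually built), here is high-dimensional data squashed down to a 2D map while keeping as much of its structure as possible:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these are 1000 neural-network activation vectors living in 512 dimensions
    activations = rng.normal(size=(1000, 512))

    # Project to 2D with PCA (principal component analysis) via the SVD
    centered = activations - activations.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    projected = centered @ vt[:2].T    # shape (1000, 2): a flat 'map' of the high-dim cloud

    print(projected.shape)
    ```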


    So, what if our physical universe is a lower-dimensional holographic projection of the cosmological horizon manifold? What if the unknowable cosmological horizon bubble surrounding our universe is the universe's fundamental substrate in its native high-dimensional space, and our 3D+1T perspective is a projection?



  • Gödel proved decades ago, for all of mathematics including theoretical physics, that a true ToE can't exist. The incompleteness theorem in a nutshell says no axiomatic system can prove everything about itself. There will always be truths of reality that can never be proven or reconciled with fancy maths, or detected with sensors, or discovered by smashing particles into base component fields. Really it's a miracle we can know anything at all with mathematical proofs, logical deduction, and experimental measurement. It's still possible we can solve stuff like quantum gravity, but no guarantees.

    Something you need to understand is that physicist types don't care about incompleteness or undecidability. They do not believe math is real. Even if it's mathematically proven that we can't know everything in formal axiomatic systems, theoretical physicists will go “but that's just about math, you're confusing it with actual physical reality!”. They use math as a convenient tool for modeling and description, but absolutely throw a tantrum at the idea that the description tools themselves are ‘real’ objects.

    To people who work with particles, the idea that abstract concepts like complex numbers or Gödel's incompleteness theorems are just as “real” as a lepton when it comes to the machinery and operating mechanics of the universe is heresy. It implies nonphysical layers of reality where nonphysical abstractions actually exist, which is the concept scientific determinists hate most. The only real things to a scientific determinist are what can be observed and measured; the rest is invisible unicorns.

    So yes, it's possible that there is no ToE or GUT because of incompleteness and undecidability, but physicists don't care, and there's something alluring about the pursuit.


  • SmokeyDope@piefed.social to Science Memes@mander.xyz · Confirmed
    edited · 16 hours ago
    1. Rotation is meaningless without an external reference frame to compare against. Consider that right now the surface of the planet (and your body with it) is moving at up to ~1,670 km/h at the equator (roughly 1,000 km/h at mid-latitudes) due to rotation, yet to us it feels stationary (see the quick arithmetic sketch after this list). We only know the planet rotates because we observe effects like the sun, moon, and stars appearing to rotate around us (which ancient peoples misinterpreted as geocentrism, thinking everything rotates around us).

    2. Rotation mathematically requires a central axis to rotate around. There is no true center to our observable universe, only subjective perspective reference frames; wherever you are is the center from your perspective. So there is no definitive geometric central axis for our universe to rotate around.
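    For the speed figures in point 1, a quick back-of-the-envelope sketch (my own arithmetic, using the sidereal day and mean Earth radius):

    ```python
    import math

    R_EARTH = 6371           # mean Earth radius, km
    T_SIDEREAL = 23.934      # sidereal day, hours

    def surface_speed(latitude_deg):
        """Tangential speed of Earth's surface due to rotation, in km/h."""
        circumference = 2 * math.pi * R_EARTH * math.cos(math.radians(latitude_deg))
        return circumference / T_SIDEREAL

    print(f"Equator: {surface_speed(0):.0f} km/h")    # ~1670 km/h
    print(f"50° N:   {surface_speed(50):.0f} km/h")   # ~1070 km/h
    ```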