Rational Metaphysics: The Equation for Space

I’m reminded of the movie “War Games”, in which the computer plays tic-tac-toe against all possible nuclear destruction scenarios, decides that every one is a lose-lose situation, shuts down our nuclear armaments, and quits the game. Even a computer capable of 100% accurate prediction could only make predictions that would hold true for perhaps a nanosecond, until more variables forced it to make new predictions, which would make “good enough” the same modality we use today. There might be an increase in prediction accuracy in the VERY short run, but ultimately its accuracy would be no better in the long term than what we experience now. “Fixing” the universe is nothing new. We’ve been at it since we climbed down out of the trees, but even as we finally grasp that all is a constant flow of noumena, predicting that flow escapes us - and that doesn’t appear likely to ever change until the universe itself changes, which is highly unlikely.

Yes, that was pretty much my point.

But then the same had been said concerning flying.
There is no question that the scenario can be created.
There is no question that Man has on many occasions attempted such a thing.
And fairly recently, more than one such “god” was formed, thus causing each a serious problem in having to predict the other.
You are living in the aftermath of that contest right now.

But those aren’t the only scenarios available.

This is that “Equation of Space” that provides a single field which explains all others in physics.
[image: Equation of Space.png]

That equation is absolutely necessarily true, although I have not explained how to use it.

Would you mind telling me a bit about the equation itself (for example about the term to the right of the “p +”)?

In common English, the equation states that every point in space is defined by the sum of its potential-to-affect, “p”, and all changes to its potential-to-affect - the time derivatives.

In philosophical terms, it is merely stating that every point in existence is defined by the rate of change of its potential to alter the degree of existence (its ability to affect anything = its degree of existence).

The terms following the “p +” are the sum of all changes at all rates in p through time, expressed as the series:

a0·dp/dt + a1·d²p/dt² + a2·d³p/dt³ + …

wherein the “a” values are scalars suited to each point.
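To make that series concrete, here is a small numeric sketch (my own illustration, not from the original posts) that evaluates a truncated form of the equation using forward finite differences on a sampled p(t); the coefficient values are arbitrary examples:

```python
# Hypothetical sketch: evaluate a truncated "equation of space"
#   E = p + a0*dp/dt + a1*d2p/dt2 + a2*d3p/dt3
# using forward finite differences on a sampled p(t).

def derivatives(p, dt, order):
    """Successive forward-difference derivatives of the sampled series p."""
    out, cur = [], list(p)
    for _ in range(order):
        cur = [(cur[i + 1] - cur[i]) / dt for i in range(len(cur) - 1)]
        out.append(cur)
    return out

def equation_of_space(p, dt, coeffs):
    """E at the first sample: p + sum of a_n * d^(n+1)p/dt^(n+1)."""
    ds = derivatives(p, dt, len(coeffs))
    return p[0] + sum(a * d[0] for a, d in zip(coeffs, ds))

# p(t) = t^2 sampled at dt = 1: dp/dt -> 1 at t=0, d2p/dt2 -> 2, d3p -> 0
samples = [t * t for t in [0.0, 1.0, 2.0, 3.0, 4.0]]
print(round(equation_of_space(samples, 1.0, [0.5, 0.01, 0.0001]), 6))  # 0.52
```

With this sampled p, the value is just p(0) + 0.5·1 + 0.01·2 = 0.52; a real use would need many more sample points and terms.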

I thought so. But if all terms following the “p +” (thus the “pta +”) are “the sum of all changes at all rates in p through time”, then they have to include the entire time of the universe, thus also the future of the universe. Okay, this could also be a part of my thread “Universe and Time”.

The first key is to realize that every affect comes with a direction of affect and each and every 3D direction has its own equation of space for every point in space. All of those listed change rates are different for every possible direction. So for example, headed directly to the right, the following value set might apply for point A:
E{right} = [0.5, 0.01, 0.0001, 0.023, …]

But also at the exact same time at point A, headed in the upward direction, the following equation applies:
E{upward} = [0.5, 0.001, 0.02, 0.7, …]
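As a data-structure sketch (the names and layout are my own, not the actual program), the two value sets above could simply be stored as a per-direction table at each point:

```python
# Data-structure sketch: each point keeps a separate coefficient vector
# per 3D direction, as in the E{right} / E{upward} examples above.

point_A = {
    (1, 0, 0): [0.5, 0.01, 0.0001, 0.023],  # "right"
    (0, 1, 0): [0.5, 0.001, 0.02, 0.7],     # "upward"
}

def coeffs_for(point, direction):
    """Look up the change-rate coefficients for one propagation direction."""
    return point[direction]

print(coeffs_for(point_A, (0, 1, 0)))  # [0.5, 0.001, 0.02, 0.7]
```

Of course a finite table like this only samples a few directions; the infinite set of 3D directions is exactly the difficulty described next.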

Of course every angle must be handled and there are an infinite number of such directions in 3D, so the challenge got a bit tough. I not only had an infinite series for a billion points, but an infinite number of infinite series for each of a billion points, all of which had to be calculated for each picosecond tic.

Resolving it took a very long time for my little brain. I basically had to prove that each preferred method (preferred due to simplicity or speed) of emulating space would not work.

The project was to emulate affects propagating through a small bit of space in whatever direction they might take.

If you allow space to be represented by a matrix of points (locations), the question arises as to how these points are to be situated. Aristotle proposed that a tetrahedron could properly fill space. That turned out to be incorrect, although pretty close. What is called “space-filling” became my study for a while. I tried all kinds of combinations of shapes with which to fill space along with which equations would have to be used to emulate the propagation of an affect in any direction. I wanted to simplify for sake of speed and memory usage, but that turned out to be quite a challenge.

Not being able to use any of the standard methods and after getting very, very creative in coming up with new methods (that didn’t quite cut it), I almost gave up on being able to realistically emulate space. Eventually, it dawned on me that I could use the simple cube matrix to fill space, but I couldn’t calculate each point for each tic of time. Merely a 1000x1000x1000 pixel matrix, 1 billion points, would represent perhaps 10 nanometers of space and leave a matrix of 1 billion simultaneous equations to have to resolve for every tic of time (one “frame”), which might represent merely one picosecond or less. That would take an average PC possibly years to calculate each picosecond of time. So I had to find a way to update the state of that tiny metaspace without losing accuracy concerning the propagation of affects through the space and calculate thousands of tic frames within a reasonable completion time.
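The arithmetic behind that bottleneck can be sketched quickly (my own back-of-envelope figures; the per-point cost is an assumed number, not one given in the posts):

```python
# Back-of-envelope cost estimate for the brute-force grid approach:
# update every point of a 1000x1000x1000 matrix for every tic frame.

points = 1000 ** 3             # 1 billion grid points per frame
flops_per_point = 1_000        # assumed work to update one point's series
flops_per_second = 1e9         # a modest single-core PC

seconds_per_frame = points * flops_per_point / flops_per_second
print(f"{seconds_per_frame:.0f} s per tic frame")
print(f"{seconds_per_frame * 1000 / 86400:.1f} days per 1000-tic run")
```

Even with these charitable assumptions, a thousand-frame run takes on the order of weeks, which is why per-point calculation had to be abandoned.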

Eventually I figured out the “Afflate”.

An afflate (an affectance oblate) is a proposed, or selected, small portion of affectance that can be treated as a single propagating affect and a “virtual particle” (even though it is not an actual particle at all). And an afflate might have any small size or shape. An afflate is very similar to a photon, although greatly smaller than a light photon. Each afflate has many characteristics such as density, potential, and propagation direction.

What I finally realized is that I could track millions of these random afflates as they each propagated in their own direction, rather than trying to calculate the changes occurring at each of a billion points. And that allowed me to emulate the propagation of affects within the space. The end result eventually led to being able to watch particles form merely because of the manner in which random affects naturally occur, such as in these renderings:
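A minimal sketch of that particle-tracking idea might look like the following; the field names and step logic are my own invention, not the actual program:

```python
# Particle-tracking sketch of the afflate idea: instead of updating a
# billion grid points, advance each afflate along its own direction
# every tic frame.
import random
from dataclasses import dataclass

@dataclass
class Afflate:
    x: float; y: float; z: float       # position in the metaspace
    dx: float; dy: float; dz: float    # unit propagation direction
    density: float                     # affectance density
    pta: float                         # signed potential-to-affect

    def step(self, dt):
        """Propagate one tic; an afflate always keeps moving, photon-like."""
        self.x += self.dx * dt
        self.y += self.dy * dt
        self.z += self.dz * dt

random.seed(1)
cloud = [Afflate(random.random(), random.random(), random.random(),
                 1.0, 0.0, 0.0,                    # all headed "right"
                 random.random(), random.uniform(-1, 1))
         for _ in range(1000)]

x0 = cloud[0].x
for _ in range(10):                    # ten tic frames
    for a in cloud:
        a.step(0.01)
print(round(cloud[0].x - x0, 6))       # 0.1 -- moved ten tics to the right
```

The real program would also have each afflate's direction and density perturbed by the afflates it passes through; this sketch shows only the free propagation.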

The word “afflate” is probably a compound of the two words “affectance” and “oblate”. Is it a tiny objectified affectance? Is it a tiny thing of affectance?

“There is nothing wrong with your monitor. Do not attempt to adjust the picture. We are now controlling the transmission. We control the horizontal and the vertical. We can deluge you with a thousand channels or expand one single image to crystal clarity and beyond. We can shape your vision to anything our imagination can conceive. For the next hour we will control all that you see and hear. You are about to experience the awe and mystery which reaches from the deepest inner mind of James.”

Sorry, I didn’t see that post earlier.

Yes, “afflate” is merely an ultra tiny amount of affectance of non-specific size. In the program, I have the computer randomly assign afflate sizes from ultra, ultra tiny to merely ultra tiny, along with random densities and PtA levels. It usually takes from 20,000 to 200,000 afflates within the small metaspace to emulate anything with reasonable statistical accuracy.

That anime is probably using 100,000 afflates to fill the 3D space with the intensity turned up so high that you can’t see into the space, merely the surface of the cube. What you see buzzing around are the random afflates. I used yellow for negative PtA levels and blue for positive with varied levels of each. The result is somewhat blue-greenish. Normally, with the intensity adjusted properly, all of that same activity is going on, but you don’t see it until it concentrates such as in the following one while the particle aggregates or accumulates.
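The yellow/blue color scheme described above could be sketched as a simple mapping (my own illustrative version, with intensity scaled by the magnitude of the PtA level):

```python
# Sketch of the rendering colors: negative PtA renders yellow, positive
# renders blue, with brightness proportional to |PtA|.

def pta_color(pta, max_pta=1.0):
    """Return an (r, g, b) tuple in 0..255 for a signed PtA level."""
    level = min(abs(pta) / max_pta, 1.0)
    if pta < 0:
        return (int(255 * level), int(255 * level), 0)   # yellow
    return (0, 0, int(255 * level))                      # blue

print(pta_color(-1.0))  # (255, 255, 0) -- full-intensity yellow
print(pta_color(0.5))   # (0, 0, 127)   -- half-intensity blue
```

Mixing many overlapping yellow and blue afflates additively is what would give the blue-greenish cast mentioned above.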

The software is very precisely calculating each tiny movement based upon the general behavior of affectance upon affectance (afflates passing through other afflates). The calculation gets a bit hairy as it considers the density, cross wind, density slopes, and PtA variations, all in 3D and for each of the 100,000 afflates for each tiny movement. It takes quite a while to render one of those animes.

Actually for UP1001, this thread from 2012 introduces what I first called “Rational Metaphysics” before I realized that RM is merely the method for creating an ontology. The first ontology created from it is “Affectance Ontology”, thus now I refer to “RM:AO”. Affectance Ontology is a new foundation from which all fields of science (real science) become united, meaning that the exact same principles/“laws” apply to every field because they are logically necessary consequences of fundamental, relevant definitions.

An update to this topic:


This is all very interesting, but practical demonstrations are more valid than either computer simulations or elegant theories. Potentially falsifiable hypotheses subjected to the rigour of the scientific method are what is needed here. Without that it is just speculation. Even if what you propose is actually true, there is no way of knowing. Your own self-confidence is simply not good enough, so try to work towards testing these theories of yours if you can. I am especially interested in the theory of faster-than-light travel regarding semi particles. Have you run that past any theoretical physicists, and if so, what was their reaction?

Already been done. You are merely not in a situation to be able to know it. And it wasn’t a “simulation”, but rather an emulation. There is a very serious and relevant distinction. An emulation is a type of proof because the actual principles are being used so as to cause the result. If the theory isn’t right, you don’t get the right result. A simulation is merely for display purposes, often not using any of the principles involved but rather merely moving pictures around as per its program directive.

As with ALL modern day Science, YOU cannot know but only speculate that what you have been told in magazines is true. You will believe what I have been saying only when they tell you to believe it. And yes, I have taught it to practicing physicists. It all takes time and they don’t really care.

RM:AO rationally explains what has already been witnessed without having to twist space or time concepts. It is philosophically clean. But there is very little that it proposes as practical advantage over formulas already being used whether those formulas make any rational sense or not. So it is not like a grand revelation for practicing physicists. It is instead mostly just a unified, rationally consistent explanation with nothing to prove other than the consistency of the logic involved. They already know how to accomplish what they are after. RM:AO merely yields a logically consistent explanation for WHY what they do works. For instance, RM:AO gives the first rational explanation as to why the Young Double-slit experiment works the way that it does. But it doesn’t change the outcome. The RM:AO explanation can be falsified by a process described below.

The subject of faster than light travel is a little complex and probably not as interesting as you think. The idea of people or space ships traveling faster than light is not at all reasonable. There is a lot of explaining to do merely to understand the subject.

One does not have to believe anything scientists say, for two reasons. Firstly, belief is an article of faith and has zero place in science. Secondly, all scientific experiments can be replicated or explained. So it is not necessary to just accept the word of scientists.

Regarding faster-than-light travel: objects of mass cannot travel faster than light, because time would stop and start going backwards, which would violate the law of cause and effect, so it is not physically possible. Even photons cannot travel faster than light, and they do not experience time.

But YOU are not a scientist or ontologist. All you can do is read what they tell you, not really any different than the Church.

It is true that mass particles can only travel faster than light for the very brief time while they are forming. But the reason isn’t that time stops; that is just another effect. Mass cannot be a particle if it is propagating at the speed of light. It would merely become light. And of course, there is no such thing as time running backwards (not counting the act of moving particles forward into a prior situation, which some magazines will call reversing time).

And you are right that light photons do not experience time. Time is the measure of relative changing between two things and a light photon doesn’t have anything changing relative to anything else. The affectance that makes up the photon is all propagating together in the same direction as a single clump. As long as the photon is traveling in a straight line, no part of it is changing any differently than any other part, thus there is no time to be measured. While passing a gradient mass field, one side of the photon will be slowed slightly more than the other, causing the photon to bend its trajectory. During that process, the affectance that wasn’t slowed is lost while more is gained in the new direction.
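The differential slowing described above can be put into a toy geometric sketch (illustrative numbers only, not from the posts): if one edge of a photon of width w moves slightly slower than the other, the wavefront tilts by roughly (Δv·t)/w radians over a transit time t:

```python
# Toy sketch of trajectory bending from one edge being slowed more
# than the other while passing a gradient mass field.
import math

w = 1.0            # photon width (arbitrary units)
v_fast = 1.0       # speed of the unslowed edge
v_slow = 0.999     # edge passing nearer the mass field
t = 50.0           # transit time past the gradient

tilt = math.atan((v_fast - v_slow) * t / w)   # accumulated bend angle
print(f"{math.degrees(tilt):.2f} degrees")    # 2.86 degrees
```

A tiny speed difference sustained over the whole transit is enough to produce a visible deflection, which is the shape of the gravitational-lensing effect being described.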

This clip shows “Afflate” travel. An afflate is basically the same thing as a light photon, merely much smaller and simpler, thus afflates and light photons are “virtual particles” and behave the same:

These are some related anime gifs:


But your text here just indicates that you seem to “believe anything scientists say”. I mean: you are told by scientists (namely by the current mainstream physicists), and obviously believe their saying, that “time would stop and start going backwards” if “objects of mass” travelled “faster than light”.

If the photons are particles and nothing else (thus not waves), what are those indicated waves then? Do you think that they are merely a scientific mistake, or a mistake of the observer?

Is it right that light photons are “virtual particles”, whereas photons are “particles”? I mean: photons (thus: all photons) always have to do with light.

In English, “virtual” means “might as well be, even though technically not”. A photon (of anything) and all other so-called “virtual particles” are not physical particles yet largely behave as if they were particulate in that they maintain a large degree of cohesion, maintaining themselves as an object for at least a short time. Mass particles are extremely stable objects. Virtual particles might not remain cohesive for long at all.

The concept of the virtual particle stems from Quantum Mechanics translated into Quantum Physics. In Quantum Mechanics, averages of millions of trials determine what is or isn’t a particle or a property of a particle. It is a misuse of the language that confounds students into a misrepresentation of reality (a.k.a. a “false ontology”: “Quantum Physics”). The good of it is merely that very accurate predictions can be made, due to the millions of trials being the basis for the prediction. Any one singular incident can never be predicted in Quantum Mechanics, only the average behavior of a great many. If, on average, a certain amount of energy leaves an interaction, that particular clump of energy is given a name and dubbed a “virtual particle”.
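That statistical point can be sketched in a few lines (illustrative numbers only): no single trial is predictable, but the average over many trials converges to a stable figure, and it is that average that gets named:

```python
# Sketch: a "virtual particle" as a name for the AVERAGE energy seen
# over many trials; any one trial alone tells you little.
import random

random.seed(42)
trials = [random.gauss(0.511, 0.2) for _ in range(200_000)]  # noisy trials
mean = sum(trials) / len(trials)
print(round(mean, 2))  # 0.51 -- the mean is stable, single trials are not
```

The 0.511 and 0.2 here are arbitrary demonstration numbers; the point is only that the mean of many noisy trials is predictable while each individual trial is not.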

It is best to translate “virtual particle” as “sort-of-like particle” or “almost particle” or “amount of energy”, and in all cases, “not a stable mass particle”.

And the word “photon” somewhat graduated from referring only to clumps of visible light into referring to clumps of any EMR. A photon of EMR, visible light or otherwise, is merely a “puff of affectance traveling in a single direction”.