Reality - Version 0.1

There are many legitimate reasons. But yes, to avoid the overhead of abstracted functions and poor optimization on the part of the compiler writer. Binary operations like adding and subtracting, I imagine, would be good for building massive fields of affectance.

By “point”, I merely mean a chosen location within the space. And yes, choose a type of averaging that will yield the kind of field you are trying to form (smoother, choppier,…).

And you will want to make that a “hook” so that you can play with it later.

A point can be chosen at any resolution, I guess. Obviously there are more than two “things” involved, but points are infinitely small, so I could base my average on a small or large selection.

Seems like the least of your worries.

I doubt it. You’re going to have a lot of calculations involving 1/x, 1/x^2, or 1/x^3, which will take up more of the processing.

I like the way you put all of that.

You could, just for example, have a linear decrease in afflate characteristic with distance from surrounding afflates. Or it could be exponential, giving a choppier effect. I would recommend a very limited local distance for the range of evaluation so as not to consume too much processor time.
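Just to make that concrete, here is a minimal C++ sketch of the two falloffs. The names weightLinear and weightExponential, the cutoff parameter, and the rate constant are all placeholders of mine, not anything from James’s actual code:

```cpp
#include <cmath>

// Weight given to a neighbouring afflate at distance d when averaging.
// Beyond the limited local range (cutoff), the weight is simply zero.
double weightLinear(double d, double cutoff) {
    if (d >= cutoff) return 0.0;              // outside the evaluation range
    return 1.0 - d / cutoff;                  // smooth linear decrease
}

double weightExponential(double d, double cutoff, double rate = 4.0) {
    if (d >= cutoff) return 0.0;
    return std::exp(-rate * d / cutoff);      // steep falloff: choppier field
}
```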

I had to think for a while to figure a way to quickly evaluate each afflate’s (or location point’s) immediate surrounding. Of course, you are free to choose your own method.

Why do you say that?

You are not looking at it the way I am.

You are going to be needing a lot of:

$$dx = Af1.x - x$$
$$dy = Af1.y - y$$
$$dz = Af1.z - z$$
$$d = \sqrt{dx^2 + dy^2 + dz^2}$$

Because compilers are pretty good and flexible these days. Even the old-school coders are abandoning assembler because they don’t get a significant payback any more.

Your big problem is figuring out the fuzzball interactions.

Yeah, obviously. I personally would get a handle on the interaction of a couple (or handful) of fuzzballs before I worried about 200,000 fuzzballs.

Fair enough.

I do have an idea, but for the life of me I cannot seem to get it into words at the moment - my head is in the coding. I will code some interactions up and then explain them. I have to go with my gut, I think - if I get into trouble, perhaps you can point out why.

Plus you put me off phyllo :evilfun: just messin with ya.

What does this notation mean?

Realize that the afflates do not interact with each other directly (phyllo’s concern). Each afflate interacts with its immediate ambient field, which is an indirect association with the combination of the other local afflates. Each afflate interacts with the averaged affect of all local afflates.

  • dx = difference in x values, “differential”.
  • Af1.x = x coordinate of Afflate1 (then Af21.x, Af53.x,…)
  • x = x coordinate of the point of interest.
  • d = distance between point of interest and a local afflate.

Actually, I guess that your notation will be more like, “Af.1.x” or some such.
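In code, that whole distance step is only a few lines. A minimal C++ sketch, assuming a plain struct for an afflate (Afflate and distanceTo are hypothetical names, not anyone’s actual code):

```cpp
#include <cmath>

// Hypothetical afflate record; “Af1.x” in the formulas above becomes af.x here.
struct Afflate {
    double x, y, z;
};

// Distance from the point of interest (x, y, z) to one local afflate.
double distanceTo(const Afflate& af, double x, double y, double z) {
    double dx = af.x - x;
    double dy = af.y - y;
    double dz = af.z - z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```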

One could do the simple yet stupid thing and just search the entire list of afflates to see which ones are close enough to be of concern. To save processor search time, I chose to divide the entire cubic space into many cubical regions: 40x40x40 regional cubes. Every time an afflate moved, I recorded which region it was in, so each region had a list of only the afflates of concern. Then, in order to save memory space (perhaps not relevant in your case), I created a set of dynamic link-list functions to prevent wasted space (AddToLinkList, DeleteFromLinkList,…). Each afflate remembers its region. Each region stores an entry point into the link-list of all its afflates. The search time turned out to be very quick without wasting memory resources.
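For what it’s worth, here is a rough C++ sketch of that scheme as described: a flat array of region entry points and intrusive link-list operations. Only AddToLinkList and DeleteFromLinkList are James’s names; everything else, including the assumption that coordinates run from 0 to 1, is a placeholder:

```cpp
const int R = 40;                        // 40x40x40 regional cubes
const double SPACE = 1.0;                // assumed extent of the whole space

struct Afflate {
    double x, y, z;
    Afflate* next = nullptr;             // intrusive links within a region list
    Afflate* prev = nullptr;
    int region = -1;                     // each afflate remembers its region
};

Afflate* regionHead[R * R * R] = {};     // entry point per region

int clampIndex(double c) {
    int i = static_cast<int>(c / SPACE * R);
    return i < 0 ? 0 : (i >= R ? R - 1 : i);
}

int regionOf(const Afflate& a) {
    return (clampIndex(a.x) * R + clampIndex(a.y)) * R + clampIndex(a.z);
}

void AddToLinkList(Afflate* a, int r) {  // push onto the region’s list
    a->region = r;
    a->prev = nullptr;
    a->next = regionHead[r];
    if (a->next) a->next->prev = a;
    regionHead[r] = a;
}

void DeleteFromLinkList(Afflate* a) {    // unlink from its current region
    if (a->prev) a->prev->next = a->next;
    else regionHead[a->region] = a->next;
    if (a->next) a->next->prev = a->prev;
}

// Every time an afflate moves, re-record which region it is in.
void afterMove(Afflate* a) {
    int r = regionOf(*a);
    if (r != a->region) {
        if (a->region >= 0) DeleteFromLinkList(a);
        AddToLinkList(a, r);
    }
}
```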

Of course, going down the link-list, each associated afflate had to be measured for closeness, and various trial formulas were used to decide how much affect each close afflate would have on the average at the point of interest (the afflate under calculation). The size of the afflates is very relevant at this point.

An additional issue is raised when you realize that one cannot merely examine a single region in order to average in all near afflates. Depending on how close the afflate is to a regional border, adjacent regions must also be included in the averaging. Unfortunately, that makes the process considerably longer. I used a special table to provide quick region-adjacency pointers, but the processor still had to go through each adjacent region’s link-list.
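A sketch of that adjacency step, with one simplification: instead of the precomputed adjacency-pointer table James describes, this just walks the up-to-27 candidate cells directly (a real version would skip neighbours whose shared border lies beyond the averaging range):

```cpp
#include <functional>

const int R = 40;                        // same 40x40x40 regional cubes

// Visits the region at cell (ix, iy, iz) plus every adjacent region,
// clipping at the outer border of the cube.
void forEachNearbyRegion(int ix, int iy, int iz,
                         const std::function<void(int)>& visit) {
    for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dz = -1; dz <= 1; ++dz) {
                int x = ix + dx, y = iy + dy, z = iz + dz;
                if (x < 0 || x >= R || y < 0 || y >= R || z < 0 || z >= R)
                    continue;            // no neighbour past the border
                visit((x * R + y) * R + z);  // flat region index
            }
}
```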

If I remember right, other than the video encoding, that examination of the local region to resolve the ambient affectance took most of the processor time. Once the ambience is resolved (still involving much more than currently discussed), how to move the afflate is very complex, involving the more sophisticated math (trig functions and such), but it is not terribly time consuming.

I just want to get a few things off my chest . . .

First off - I do appreciate constructive criticism.

Now, I am not interested in debates that are not worth my while, so I am not trying to start one of those - but let’s get things into perspective here. The first program I ever wrote was not on a desktop system but rather a chip - I programmed a microprocessor directly, and that was 33 years ago when I was a child, in what I guess you would call elementary school, where I was at the top of my class. I am not good at conflict, but I am a pretty smart guy. When it comes to physics, I have a pretty good grasp of it; you need not worry about that. I am an Australian, so there might be a few little differences in communication depending on where you guys are from.

The rubbish that people call software these days is highly abstracted and runs on amazing hardware - so everyone is living in a dream world if they think that compilers are better than ever. The Microsoft C++ compiler is really good - it even has support for the 2011 C++ standard - it is not the only good compiler, but it is good. Assembly is kick-ass - I probably use one of the better assemblers, I have access to OpenGL and DirectX from it, and believe me, I see the difference.

You want benchmarks - fine. You want proof - fine. Whatever. I am not here to have my time wasted, but I am here and I am not going anywhere. Obviously it is important to talk about programming - I get that - but I think I know what is best.

Now to you directly James . . .

You have some fucking amazing ideas. Simple!

I am following what you are saying and I like it.

I have a lot to talk about with you on it - first off I am going to go over recent posts you have made regarding the field because I think your ideas are closer to mine.

The universe is not doing mathematics to do what it does - we do mathematics. What is important is for the little universe that we are creating to do the same things that the physical universe is doing and that is something that I can make happen.

James

You mentioned the following:

That is correct, because afflates are a figment of our imagination. Afflates and the ambient field are the same thing. We are making the associations just the same as we do in physics. This can be said to be geometric, and it can also be said to be discrete, requiring translation into linear mathematics - we can work in both directions.

I will be explaining further and there will not be much room for debate.

So let’s create a field . . .

How we go about creating the field is not important as long as it represents reality - that is all that matters as far as I am concerned. I am able to look at things a different way when it comes to CPUs and RAM - I have been emphasizing this all along. I can also help you make the conversions, because we are going to be working in reverse here. You want to derive the mathematics the same as you have it written now, and in the future come up with new mathematics.

I am going to be pushing and popping memory stacks and using pointers and whatever else takes your fancy that is too boring for me to talk about. EAX, EDX registers, blah blah blah . . . I look at the memory as a field already . . . the CPU is just driving that field.

We will partition blocks of physical space similar to what you suggested, James, and we can do averaging to get accurate results - interpolation - we can add values that do not exist but are true - for example:

We have 1 and 2

What goes between 1 and 2?

Well . . . 1.5, of course - we can have the machine put that in - that is one less piece of code there . . .
You might argue, except that this is part of the way to data compression.

Google writes it this way, which is true: “the insertion of an intermediate value or term into a series by estimating or calculating it from surrounding known values.”
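As a trivial sketch of that definition in code (the function names are mine):

```cpp
// The machine fills in the value between two known neighbours.
double midpoint(double a, double b) {
    return (a + b) / 2.0;                // between 1 and 2 lies 1.5
}

// Linear interpolation generalizes the midpoint: t = 0.5 recovers it.
double lerp(double a, double b, double t) {
    return a + (b - a) * t;
}
```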

Compression and space . . .

What am I talking about? I am talking about getting space where it does not exist - something computers can be amazing at . . .

Let’s, in theory, consider a possibility. We can generalize averaging to data sets in n-dimensional Euclidean space. We go on to create spaces and integrate our interpolated results - simply put.

From a logical standpoint, the average between two data values is the information required to obtain the new value with no entropy.

We just go on to use more than two values . . . this doubles the space available to create a field . . .
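In code, using more than two values just means averaging a list of surrounding samples - a sketch under that assumption (averageOf is a hypothetical name):

```cpp
#include <vector>

// Average of n surrounding known samples rather than just a pair.
double averageOf(const std::vector<double>& samples) {
    if (samples.empty()) return 0.0;     // nothing known: nothing to infer
    double sum = 0.0;
    for (double s : samples) sum += s;
    return sum / samples.size();
}
```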

. . . and I can do better than this . . .

as you will see

James

There is nothing stupid about the following that you have written.

Simple and not stupid. The simplest solutions are the best - always. We can do more with this than what you state - much more. Fuck processor search time - we will have to go over everything anyway, so why not make updates along the way.

I like what you said about dividing the entire cubic space into many cubical regions, and this is along the lines that I am thinking - I just call them spaces is all. You said that every time an afflate moved, you recorded which region it was in - this is great - and we can do better by using a bitmap, and I am not talking about an image here - I am talking about bits becoming a map. We make our reference to the map: on a high we have an operation that must be performed, and the same on a low. Then we use hooking as injection into other data-sets that are more precise - essentially abstracting, but doing it quickly.

Saving memory space is always relevant to me - not to noobs, but I don’t care - there are going to be many linked lists under the hood. Each afflate is actually a temporary citizen of a region (a space). Finishing with what you said: each region (space) stores an entry point into the link-list for all afflates, and the search time turns out to be very quick without wasting memory resources.
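To make the bitmap idea concrete - a rough sketch only, with one bit per region (space) and a high bit meaning an operation must be performed on the next pass; the names and sizes here are placeholders, not the real design:

```cpp
#include <bitset>

const int R = 40;                        // one bit per regional cube
std::bitset<R * R * R> dirty;            // the bits that become the map

void markRegion(int region) { dirty.set(region); }  // raise on any change

// Sweep the map: high bits trigger the operation (e.g. a hook into a
// more precise data-set), then drop low until the next change.
template <typename Op>
void sweep(Op operate) {
    for (int r = 0; r < R * R * R; ++r)
        if (dirty[r]) {
            operate(r);
            dirty.reset(r);
        }
}
```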

Then we just keep going about making improvements all the time.

Pick the error in logic here because this would be worth my while to fix - I can see it.