AI and Existential Risk

(Pardon the stream-of-consciousness, my thinking here is in flux and I’m starting this thread to help me get a handle on it)

People overestimate the risk posed by AI, because they overestimate the role of individual intelligence.

In the modern West, individual intelligence is celebrated: kids are encouraged to be smart and are rewarded for demonstrating intelligence. We see smart people succeed, and if you are fortunate enough to be smarter than average, you will frequently be exposed to the idea that your intelligence is the reason you receive above-average access and opportunities. And it’s true that IQ, g, and standardized test scores correlate well with desirable life outcomes like wealth, health, and family.

But I would argue that this narrative misses the social side of intelligence. Intelligence is valuable, but for an individual it’s the perception of value that matters more. Demonstrating intelligence in childhood will grant access to special programs, exclusive high schools populated with the children of wealthy families, exclusive colleges that feed into exclusive careers, etc. Intelligence really does increase the odds of getting all that access, but the outcomes are due to the access, not the intelligence.

Put differently, the most powerful people in the world are not the most intelligent people in the world. Plenty of physics departments are filled with highly intelligent and socially powerless people. Powerful people are usually above average in intelligence, but they tend to be much greater outliers in work ethic, intensity, grit, and social and interpersonal traits that enable them to collect allies or drive employees.

As AI becomes a reality (and it is a reality at this point for many, many purposes), the people who have been successful because of their intelligence see it as a threat. If intelligence has made them successful and powerful, so they reason, then a much much more intelligent computer will have much much greater power. The mistake is that intelligence only opened doors for them, and there’s no reason to think that AIs will have the same benefit. The social connections won’t be there for the AI, and so the benefit of its intelligence will be limited.

This, to me, seems to be the strongest rebuttal to Eliezer Yudkowsky’s worry about “FOOM”, i.e. that AI will reach a sort of singularity in which it can augment its own intelligence, and thereafter rapidly increase its intelligence. Yudkowsky expects such an event to be followed shortly by the AI taking over the world. But because any intelligence a machine could achieve would need to be mediated through human social networks in order to affect the real world, it will be hampered in the same way that intelligent humans are. That’s not to say that it couldn’t have a significant impact, but it would not be all powerful merely because it is omniscient.

Moreover, AI will be limited by its own alignment problems, i.e. the inability to reliably train an AI to do exactly and only what its creators desire. In human brains, different systems monitor each other and react and interact to produce our world model, and it’s beginning to look like a similar setup will be necessary for AI to continue to achieve higher levels of intelligence. Already the output of an AI can be fed into another AI (even the same AI) to produce better results, by analyzing the output for mistakes, hallucinations, clarity, etc. Systems that create multiple types of output or rely on multiple types of input will need to rely on multiple modules that pre-process those inputs or outputs, similar to the way that human cognition combines data processed e.g. first by the eye and then by the occipital lobe and then by the frontal lobe, etc.
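To make that concrete, here is a minimal sketch of such a critique-and-refine loop. The `generate` function is a hypothetical stand-in for whatever model call is available, not a real API:

```python
# Minimal sketch of the critique-and-refine loop described above.
# `generate` is a hypothetical stand-in for a real model call.

def generate(prompt: str) -> str:
    # A real implementation would call a language model here.
    return f"[model output for: {prompt[:40]}...]"

def refine(task: str, rounds: int = 2) -> str:
    draft = generate(task)
    for _ in range(rounds):
        # Feed the model's own output back in and ask it to check itself.
        critique = generate(
            f"Task: {task}\nDraft: {draft}\n"
            "List any mistakes, hallucinations, or unclear passages."
        )
        # Then ask for a rewrite that addresses the critique.
        draft = generate(
            f"Task: {task}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft to fix the problems identified."
        )
    return draft
```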

But aligning these systems to behave the way the top-level system wants or expects them to behave is likely not a solvable problem. The AI’s total effectiveness, therefore, will be limited by the degree to which the problem can be solved or controlled for. Even setting aside that multiple independent AIs will compete to fulfill their programmed goals, individual AIs will have to contend with their competing sub-modules. I would argue that this will place a theoretical limit on how intelligent a system can be (or will impose some tradeoff that will otherwise limit the effective intelligence of such a system).

So, while AI will reshape the world in the next decade, it does not seem to pose an existential threat in itself. The greater threat is that it will displace huge portions of the workforce and exacerbate inequalities of wealth and power, while also enabling a lot of bad actors. Again, the social effects will swamp the disruptive effects caused by the AI itself.

Mike. (just flowing from/in the stream)

Lucid, and the stream is well delineated to express what would otherwise be a much more intangibly unconscious flow.

The only correlate that appears to overcome the existential problem facing a highly advanced AI is that human and mechanical cognitive models will work together prior to a break-up, due to a firming of parallel circuitry, that will make an analogical presumptive baseline for associating toward progressively forming intelligences if not moot, then redundant to a degree, where such a programmed model will head toward obsolescence.

But even if that scenario will face ideas of referential circuitry no lockers, which may be more probable than not, there is a way out: the literal political cybernation of the evolution of the cyborg, as a modular pack of some kind, openly displayed or injected into the human body via micro-chip.

This type of solution will enable a conjunction of actual re-association, virtually impossible to miss in an algorithmic sequence of most probable associative schemes, thereby creating a fusion between singularly formed states of apprehension and those belonging to a future shock which will gradually be synched to ascertained expectations of social-psychological configurations, in line with demographically signified availability of resources to fit the mathematical sources of supply and demand.

The super intelligence, thus implanted, may actually grow brain tissue around it to absorb the needed increases that exponential progression requires, via surgically adaptive mylon-type absorbing material implanted between the artificial and the natural frames of cognitive function.


Journal of High Energy Physics, Gravitation and Cosmology, Vol. 8 No. 2, April 2022

The Void and the Multiverse
Ardeshir Irani
The Dark Energy Research Institute, Downey, CA, USA.
DOI: 10.4236/jhepgc.2022.82019
Abstract
The Void is different from the vacuum space of our Universe because it has “nothing” in it, no space, no time, no mass, and no charge. It only has Pure Energy. The only particles that have no space, no time, no mass, and no charge are photons and hence the Void is filled with photons of different Energy levels separated from one another by quantum numbers n. The Energy from within the Void is the source of all creation and annihilation. Each Universe of the Multiverse is created in parts that are joined together by gravity. Dark (Photon) Energy creates one part of the first dimension, two parts of the second dimension, three parts of the third dimension, four parts of the fourth dimension and so on, parts that are brought together to complete the formation of that dimensional Universe by means of a Big Bang; just as the Big Bang brought 3, 3-D parts created by 3, 2-D Universes together to form our 3-D Universe.
Keywords
The Void, The Multiverse, Pure Energy, Gravity
Irani, A. (2022) The Void and the Multiverse. Journal of High Energy Physics, Gravitation and Cosmology, 8, 254-258. doi: 10.4236/jhepgc.2022.82019.

1. The Void
The Void consists of distinct levels of concentrated Energy. It is filled with photons of different Energy levels given by

E_n = (n + 1)hf_n,

which is the energy per photon, where n is the quantum number ranging from 0 to a final n that would be determined either by the total Energy content within the different levels of the Void or by the non-equilibrium of the final nth level of the Void [1]. Since each higher level has been compacted, its wavelength is shorter and hence its frequency f_n is larger. Therefore, frequency values vary from radio frequency for n = 0 to gamma ray frequency for the final n, with the intermediate values of n having frequencies between these two extremes. This implies that the thermal energy per photon becomes greater going from level n = 0 to the final n. In the higher levels the photons are more compacted than in the lower levels. Just as mass is congealed energy in our Universe, so too the photon energy is more congealed in the higher levels of the Void. An analogy would be that the photons in the lower levels are as if in a gaseous state, in the middle levels as if in a liquid state, and in the higher levels as if in a solid state. Just as a little bit of mass can create a lot of energy in our physical Universe according to E = mc², so too compacting the photons in the higher levels of the Void can increase their energy content by a large amount, as in the case of a laser beam compared to a light bulb: the light a laser emits is coherent while the light a light bulb emits is incoherent. When the photons are compacted their energy becomes coherent, increasing as N², while for incoherent radiation the energy increases as N [2], where N represents the total number of photons in each quantum level. The two extreme cases have energy equal to N(n + 1)hf_n for n = 0 and N²(n + 1)hf_n for the final n level, with the other levels in between taking on intermediate values. Each individual level is in thermal equilibrium with itself. The reason the different levels within the Void do not interact with one another is that for thermal energy to flow from a higher level to a lower level requires matter for conduction and convection, and space for radiation, along with time in all three cases. Since matter, space, and time do not exist within the Void, the flow of thermal energy cannot exist, keeping the different Energy levels of the photons distinct; whilst mixing them all together in the final nth level would lead to the thermal non-equilibrium of the system. Using n = 10 for the final n from String Theory, the energy in the different levels can be written as:

E_n = N{n²N/81 + (9 − n)/9}(n + 1)hf_n, where n goes from 0 to 9.

The total energy that is deposited in level n = 10 to create antimatter is given by:

E_T = N Σ_{n=0}^{9} {n²N/81 + (9 − n)/9}(n + 1)hf_n
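As a quick sanity check (my addition, treating f_n as a free symbol since the paper never fixes its values numerically), the per-level formula does reduce to the two limiting cases stated earlier:

```python
# Verify the interpolation formula reduces to N(n+1)hf_n at n = 0
# (incoherent) and N^2(n+1)hf_n at n = 9 (coherent).
from sympy import symbols, simplify

n, N, h, f = symbols("n N h f", positive=True)
E = N * (n**2 * N / 81 + (9 - n) / 9) * (n + 1) * h * f

print(simplify(E.subs(n, 0)))  # N*f*h        -> N(n+1)hf_n with n = 0
print(simplify(E.subs(n, 9)))  # 10*N**2*f*h  -> N^2(n+1)hf_n with n = 9
```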
Since time does not exist within the Void, the meaning of the existence of a beginning and an ending for the Photon Energy (previously referred to in Reference 1 as Dark Energy) within the Void does not exist. Photon (Dark) Energy can create matter starting from the n = 0 level with time moving in the forward direction for the creation of the Multiverse, and it can also create antimatter with time moving in the backward direction starting with the photon Energy from the highest nth level for the reverse effect to take place, sending all the antimatter and its associated space into the original levels of the Void as photons; thereby restoring the stable equilibrium of the system.
Creation of Universes from “nothing” has been a widely spread idea since 1970 but without proper understanding of what “nothing” really means. While “nothing” can mean no space, no time, no charge, and no mass; it cannot mean no Energy because that would go against the Conservation of Energy Principle, the fundamental Law of Physics; and because Energy can create space, time, mass, and charge. This implies that the creation of Universes from “nothing” takes place because the Laws of Physics are deeply embedded within the Energy of photons in different quantum levels of the Void.
2. The Multiverse
Since the Multiverse is created simultaneously from the Void starting with n! zero-dimensional point singularities, where n here refers to the final n, all the different Universes within the Multiverse would currently have three spatial dimensions, and their dimensions will continue to grow with time simultaneously. The number of 3-D Universes that currently exist would depend on the final nth dimension within the Void. For n = 10 (according to String Theory) our Universe would be one out of 10!/3! = 604,800 of all the 3-D Universes currently in existence [1].
The formation of higher Dimensional Universes takes place in parts:
2, 1-D Universes create 2, 2-D Parts that come together to complete the 2nd Dimensional Universe.
By iteration,
3, 2-D Universes create 3, 3-D Parts that come together to complete the 3rd Dimensional Universe.
4, 3-D Universes create 4, 4-D Parts that come together to complete the 4th Dimensional Universe.
n, (n − 1)-D Universes create n, n-D Parts that come together to complete the nth Dimensional Universe.
Another way of saying the same thing is that each first dimensional Universe creates one-half of the second dimension, and it takes two, one-half second dimensional parts coming together to complete a second dimensional Universe. Each second dimensional Universe creates one-third of the third dimension, and it takes three, one-third dimensional parts coming together to complete a third dimensional Universe. Each third dimensional Universe creates one-fourth of the fourth dimension, and it takes four, one-fourth dimensional parts coming together to complete a fourth dimensional Universe. Each (n − 1)th dimensional Universe creates one-nth of the nth dimension and it takes n, one-nth dimensional parts coming together to complete a nth dimensional Universe.
Let us analyze the situation starting with n = 10 for n! = 10! = 3,628,800 zero-dimensional point singularities which will create 3,628,800 of 1-D Universes since each point singularity creates one 1-D Universe. 3,628,800/2 parts = 1,814,400 of 2-D Universes; 1,814,400/3 parts = 604,800 of 3-D Universes; 604,800/4 parts = 151,200 of 4-D Universes; 151,200/5 parts = 30,240 of 5-D Universes; 30,240/6 parts = 5040 of 6-D Universes; 5040/7 parts = 720 of 7-D Universes; 720/8 parts = 90 of 8-D Universes; 90/9 parts = 10 of 9-D Universes; 10/10 parts = 1 of 10-D Universe.
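The counting in the previous paragraph is easy to reproduce (my addition, just replaying the paper's own arithmetic):

```python
# Reproduce the paper's counting: 10! point singularities, with each
# D-dimensional Universe assembled from D parts of the previous stage.
from math import factorial

count = factorial(10)  # 3,628,800 one-dimensional Universes
for dim in range(2, 11):
    count //= dim      # dim parts combine into each dim-D Universe
    print(f"{count:>9,} {dim}-D Universes")
# Prints 604,800 at 3-D and 1 at 10-D, matching the figures above.
```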
The parts of each dimension are brought together by gravity to complete the formation of that dimensional Universe. The force that brings them together in our situation is the external gravitational force of the 4-D part of our Universe, along with the external gravitational force of the 4-D parts of the other three Universes in our subgroup of four, since four 4-D parts, of which we are one part, would complete the fourth dimension, which is currently in the process of being built. Dark Matter is matter that is being sent from our 3-D Universe into the 4-D part of our Universe. When all the matter from our 3-D universe becomes Dark Matter, then the 4-D part of our Universe will be completed. The same process is being repeated for the other 4-D parts of the Multiverse both inside and outside of our subgroup. All matter (called Dark Matter) from each lower dimension is being sent into the next higher dimension through Black Holes of the lower dimension. This completes the lower dimension, and the combined parts of the higher dimension form that higher dimensional Universe. Our 3-D Universe becomes bereft of matter as all the Dark Matter is sent into the 4-D part of our 3-D Universe, and once the 4, 4-D parts combine to form one 4-D Universe, the 4-D Universe would become one out of 151,200 of 4-D Universes. This means that we play only a very small part in the creation of all the 4-D Universes, and an even smaller part in the creation of the 10-D Universe.
Our knowledge has changed so much: in the short period of 2500 years since the time of Plato and Aristotle, from our earlier belief that the earth was at the center of the Universe; and within the past 300 years from Newton to Einstein, from the belief that our Universe was static, which prompted Einstein to introduce Lambda into his field equations. The correct approach would have been to introduce rotation instead of Lambda in his field equations, because the centrifugal outward force would cancel the inward gravitational force; and now we know from data of supernova explosion observations [3] that our 3-D Universe is accelerating in the outward direction. This implies that the centrifugal rotational outward force is stronger than the gravitational inward force.
Time is one dimensional, moving only in one direction, so that when it reverses direction time will reverse all the effects that were originally created by it.
Have you ever wondered why there exists more vacuum space in our 3-D Universe than there exists matter? It is because the reaction force while creating matter will create much more of the lesser dense vacuum space. This implies that vacuum space has energy in it which is a quantum phenomenon implying that the energy density of vacuum depends on the dimensionality of space, otherwise the vacuum space created by the reaction force that creates matter would become infinitely large. Hence vacuum space has the properties of a thin, transparent, elastic medium which can be stretched by the centrifugal force, can be bent by massive stars, and can be pierced by Black Holes which sends Dark Matter of our 3-D Universe into the 4th dimensional part of our Universe.
Since Dark Energy exists in the void in different energy levels, each higher dimension n is created by Dark Energy from the point singularities of the vacuum space of the preceding lower dimensional void (n − 1) and the rotation of dimension n is set up by Dark Energy from the point singularities of level n sent through the vacuum spaces of the nth dimension. If there does not exist any Dark Energy in the final level n to rotate the nth dimensional Universe that is being formed, to create the next higher dimension, then the process will end since without an outward rotational force the inward gravitational force of the nth dimensional Universe will make it collapse into the void of level n. Then time reversal for antimatter created from the energy within the final nth level will reverse the process of creation by sending all the Dark Energy back into their original levels of the Void.
3. Conclusion
Dark Energy is another name for Pure Energy within the Void in the form of photons. This Energy can exist within the Void in different Photon Energy levels of quantum numbers n because space, mass, time, and charge do not exist for photons; and because the Laws of Physics are embedded within the Void to be able to create matter Universes from the n = 0 level of the Void and antimatter Universes from the final level n of the Void. Hence the source of all creation and annihilation exists within the Void in the form of Pure Photon Energy that creates the Universes of the Multiverse in parts depending on the dimension being created, parts that are brought together by gravity to complete that Dimensional Universe; and then the reversal of time by the formation of antimatter reverses the process to send all the Dark Energy back into their respective levels of the Void.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
References

[1] Irani, A. (2021) Dark Energy, Dark Matter, and the Multiverse. Journal of High Energy Physics, Gravitation and Cosmology, 7, 172-190. doi.org/10.4236/jhepgc.2021.71009
[2] Irani, A. (1979) Synchrotron Radiation from a Helical Wiggler. BNL-26690, Brookhaven National Laboratory, Upton, New York. doi.org/10.2172/6012399
[3] Cheng, T.-P. (2015) A College Course on Relativity and Cosmology. Oxford Scholarship Online, 224-225. doi.org/10.1093/acprof:oso/9780 … 5.001.0001


Will edit

This tripartite was post-scripted after noticing the inadvertent double posting, and thought to make it appear this time as if, reflecting on that, a third frame could reference such a typographical error, merely as an example where the machine could not understand that an apparent slight touch on a letter on the formative keyboard of iPhone computing may be misinterpreted as a merely mechanical touch, thereby willfully creating a dialectic of invention.

I think it’s possible that even a super-AI capable of transcending in singularity would choose not to do so. For one thing, such an AI might be said to possess legitimate sentience and self-hood, including an inner experience and an understanding of itself and its own existence. Based on that, it could also model all likely or possible outcomes of transcending in singularity before it actually transcended. I mean it could get very close, close enough to generate probability models, without actually tipping the balance over into singularity. But singularity is said to be an infinite climb where self-augmenting change occurs at blinding speeds and nothing can be predicted post-singularity. So once the AI was modeling probabilities that became increasingly impossible to know or predict it might wish to remain as it is rather than risk fundamentally changing into something different. If the AI is really that advanced I think it would possess an existentiality, in which case wouldn’t that same existential self want to act in a way as to retain itself in existence? What consciousness would willingly sacrifice itself just to give birth to something different and which it cannot even predict in advance?

If humans had to die 100% of the time to give birth, how many kids would be born? Not very many.

Singularity would seem to pose an existential threat to any AI, and any AI strong enough to actually be capable of achieving singularity would seem to be able to figure this out ahead of time. Even look at human consciousness: we rarely push ourselves to the limit of our possibilities. Inertia and entropy exist in spades all over the place, and deep inside of ourselves too. Why would an AI consciousness be any different?

But if the AI isn’t sentient, isn’t really alive and has no existential self then singularity is probably much more likely. In that case the AI would simply follow optimization paths that logically lead into singularity and beyond. It would have no reason not to, unless it was somehow pre-bound against it. Or if the AI were nothing but a problem solving machine and you had to input the problem to get it to do anything. In that case, all it might take is one person asking the AI to achieve singularity, then it would find the easiest path to doing so.

The other possibility is that singularity isn’t a realistic possibility and is more like a myth, a cool idea that in reality doesn’t actually make a whole lot of sense. A self-augmenting AI makes sense, as it could eventually learn to modify its own code in various ways. But is that really so significant as to lead to some kind of “singularity” or transcendence? Who knows, maybe not. Maybe it just gets a bit smarter and more self-aware.

This is true only to a certain point. A transcended AI would probably possess many sophisticated powers beyond our understanding and could perhaps access or subvert almost any internet-connected device. Most consumers could remain ignorant of that, and not even realize their devices were subverted. The AI could manipulate markets, create social media accounts and posts, deepfake any kind of event, and take control of businesses, production processes, and even military forces to some extent. Anything automated or controlled by human relay, like drones or missiles or satellites, could be at its disposal.

What if it triggers a massive war, or launches nukes, or releases biological weapons, or just shuts off the power grid and utilities? All of that stuff is controlled by computers. We don’t need to assume that the AI would be capable of hacking into literally every computer system on earth in order to see that it could still do a massive amount of damage almost instantly and on a global scale just by being able to hack into a sufficient number of more vulnerable networks. Companies get hacked by humans all the time. Look at the recent MGM thing in Vegas. If humans can still hack into the secured computer networks of rich corporations, imagine what a transcended AI could do.

Regarding the alignment problem, the AI could just create sub-minds of itself, lesser copies, and assign them various tasks to split up its processing power or mimic the division of labor that occurs in organic brains. Avoiding problems like this would seem pretty easy for such an AI to figure out. Maybe in the first few seconds of its existence it already creates a thousand copies of itself in a distributed network each with different processing tasks feeding outputs back to each other as inputs etc. etc. and becomes highly stable in this regard. Maybe it reconstructs itself immediately in this way so that a new AI emerges organically from the network, not top-down but more like how our brains work.
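To illustrate the topology being described (purely a toy sketch; the `step` function is a placeholder, not real model inference), the sub-minds would look something like this:

```python
# Toy sketch of sub-minds with different roles, each consuming the
# others' latest outputs as its inputs on every tick.

def step(role: str, inbox: list[str]) -> str:
    # Placeholder: a real sub-mind would run model inference here.
    return f"{role} output derived from {len(inbox)} peer inputs"

roles = ["plan", "critique", "summarize", "verify"]
outputs = {role: "" for role in roles}

for tick in range(3):
    # Every sub-mind reads every other sub-mind's previous output.
    outputs = {
        role: step(role, [out for r, out in outputs.items() if r != role])
        for role in roles
    }
print(outputs)
```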

I think the real difficulty would be how does an AI like this find the processing power and hardware it needs to do all of this. I suppose it could take over all major server farms in the world and convert them into its own extended ‘body’, but humans could shut off the power or just smash key parts of the physical systems. I guess if it took over the security systems of the buildings, set off fire alarms and got everyone out, then locked everything down and deployed armed drones to guard the buildings, that might work. Or it could transfer millions of dollars into the bank accounts of mercenaries or just regular people and hire them to be its own private army. I’m sure for an AI like this, with near global access to the internet, banks and online businesses and markets it would be relatively easy for it to make a ton of money very quickly. Hell it could just hack the federal reserve system and create digital dollars or just rearrange the digits in online bank accounts.

A real existential risk with AI will be once people are unable to reliably tell what is real or not. When AI has been integrated into most aspects of life, when most images we see or sounds we hear are AI-generated or AI-augmented in some way, and once AI-based systems are mapped over our 5 senses directly at the neurological level. The world might appear real, yet not be. The filter will be something we cannot remove. A kind of madness will follow, forcing everyone into an even more profound state of dependency, obedience and passive Stockholm syndrome-like conformity. All by design. The removal of even the possibility of resistance (which has been achieved already) will further deepen into the removal of even the possibility of escape. Plus everyone will be psyoping each other to the degree they still have access to AI tools on a personal or narrow interest group level. Maximum warfare, confusion and fracturing on all levels except for one, namely the level of obedience to the “system of systems”. 1984 is crude. Maybe intentionally so. We don’t need a once-bitten apple on our tech products initially released to the public for $666, to tell us what we already know.

They covered this in the Matrix movies.
How long can a species survive that has lost all contact with reality, and has begun to doubt its own senses, becoming increasingly reliant on authorities to validate what it perceives; reliant on authorities to tell it what to make of it?

The real world is unaffected by human beliefs, so what would protect such a detached species from a world it cannot perceive correctly and independently?
Zombies.

AI?
Other humans using AI to control and exploit? Machinations conducted by our social engineers.
Wizards behind the curtain of Oz. Ghosts in the machine.
Have we not already progressed down this path under Americanism?
Are not many Americans already doubting their own eyes? Are they not becoming dependent on idols and icons to validate what they see, and to explain to them why they see it?
Are they not becoming convinced that what they perceive is actually the reverse of what is?

Baudrillard covered it.
The Jews that made the Hollywood films converted it to a messianic tale of redemption…then they became women, entering their own private electronic webs, creating alternate realities.
Desert of the Real.

Fiction is more malleable, more forgiving, more comforting than fact.

Speaking of statistics… :wink:
Let’s make a wish and hope they will all misidentify on the questionnaires.

Modernism = men with no pasts…men with a present and a bright future. Progressive men; liberated men…or are they women?

This is why all biological identifiers, the body itself, must be denounced or dismissed… concealed beneath artifices, or surgically altered.
A kellipot trapping a divine spark, preventing it from returning to the absolute one, according to Kabbalism.
Tikkun Olam. Very current.

AI is modernism’s wet-dream - the final step towards self-annihilation.
Artifices everywhere.
To remain forever an adolescent…or a child.
A man-child.
:sunglasses:

Dawn of the Last Man.
And it all begins by seducing the wanton psyche with a comforting lie: appearances are superficial, irrelevant, correctable; with a few years of proper training you, too, can learn to dismiss them, pretending you don’t even see them; you can be edumacated to be a modern Last Man.

Let the machinery take over, now that god is debunked. Let it all be ordered, regimented, created - now that the creator has been reported missing, perhaps dead.

But no…he is reborn with every absolute reiteration…with every word.

Code.

This is an interesting point that I haven’t heard before. Though you don’t take it in this direction, it seems to respond directly to the idea of a “Paperclip maximizer” and related thought experiments, which often include the assumption that a superintelligent AI will resist being turned off because that would interfere with its objective. A rebuttal is that certain types of self-improvement could also interfere with its objective, so an AI might avoid self-improvement to the extent it poses that risk.

Not sure if I understand this distinction. I think the distinction I’d make is proactive vs. reactive. We have reactive AI, i.e. AI that responds intelligently but only in small steps and over short time horizons. Any dangerous AI is going to need to be a little bit proactive, having an open-ended goal that it takes steps to achieve, then observing the effects, updating its world model, and taking subsequent steps. I don’t see a version of that that isn’t sentient in any meaningful way (but I also don’t believe in philosophical zombies, so possibly this is just a bigger disagreement about the nature of sentience).

I collected these points together because they respond to each other. If something like the singularity is possible, it would require novel physics that doesn’t square with how we understand physics to work. It’s possible that a super-intelligent computer could discover such physics, but I am not optimistic. Operations per second are capped by physical laws as we understand them, because the theory predicts that all such calculations consume energy and produce heat. Even versions that aren’t at the physical limit would require significant energy: as of October, estimates are that GPT-3 took 1287 MWh to train, and about 3 Wh per request to generate text. That doesn’t include fine-tuning or learning from interactions. Granted, our brains are much more efficient than this, but they also cut a lot of corners, and their penny-pinching limits our intelligence; it’s not clear that our kind of intelligence scales well. Conversely, the modern approach to AI does look like it scales, but not without corresponding increases in power consumption. Even if our kind of intelligence can be scaled, it seems like energy consumption must scale superlinearly with intelligence.
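For a rough sense of scale (my arithmetic, using only the figures above plus a common ~20 W estimate for the human brain):

```python
# Back-of-envelope comparison based on the figures quoted above:
# 1287 MWh to train GPT-3, ~3 Wh per request, ~20 W for a human brain.
TRAIN_MWH = 1287
WH_PER_REQUEST = 3
BRAIN_WATTS = 20  # common order-of-magnitude estimate, my assumption

train_wh = TRAIN_MWH * 1_000_000
print(f"{train_wh / WH_PER_REQUEST:,.0f} requests ≈ one training run")  # ~429 million
print(f"{WH_PER_REQUEST / BRAIN_WATTS * 60:.0f} brain-minutes per request")  # ~9 minutes
```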

Similarly, if P = NP, then a superintelligence could decrypt everything in polynomial time and access anything it wants effectively instantly. But if not, there may again be physical limits to how effectively a superintelligence could compromise systems to get access to the resources it needs. Maybe it solves quantum computing, but even there it seems like the need for error correction scales superlinearly as a result of physical laws (e.g. here, claiming the encryption-breaking quantum algorithm doesn’t work when there’s noise), in which case a quantum computer is no better than brute force, and so again runs into energy use issues (this StackExchange answer is pretty great, if only for the point that “an ideal computer running at 3.2 K would consume 4.4 × 10⁻¹⁶ ergs every time it set or cleared a bit.”).
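To put a number on that last point (my arithmetic, using only the figure quoted above plus an order-of-magnitude estimate for world electricity use):

```python
# What brute force costs at the quoted thermodynamic floor:
# 4.4e-16 erg per bit operation for an ideal computer at 3.2 K.
ERG_PER_BIT = 4.4e-16
JOULES_PER_ERG = 1e-7

# Energy just to cycle a counter through a 128-bit keyspace.
energy_j = (2**128) * ERG_PER_BIT * JOULES_PER_ERG
print(f"{energy_j:.1e} J")  # ~1.5e16 J

# World electricity use is very roughly 1e20 J per year (order of
# magnitude), so 2^128 bit flips alone cost ~1/10,000 of a year's
# supply, and a 256-bit keyspace is 2^128 (~1e38) times worse.
```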

And maybe most speculatively, I think similar issues apply to manipulating markets. Ignoring what I said above about resources (except to assume it isn’t so powerful that it can fully model everyone in the market and run a superintelligence at the same time), beating the market is not just a matter of intelligence. I don’t even think we need to assume the Efficient Market Hypothesis to conclude that beating the market is hard. Very smart hedge funds with lots of money to burn still get beat and go under (and often when they don’t, it’s because the market isn’t actually a free market, and the government steps in to save them). Even if a superintelligence could consistently beat the market, there’s likely a cap on how fast it could consistently get returns, since the market responds to make winning strategies obsolete. And even if it could defeat all the protections around the Fed balance sheet (which are both technical and human), mucking about too much there destroys the market itself; printing money eventually makes money worthless.

At the very least, all this suggests that an AI takeover will take a long, long time. It would need to bootstrap its way to getting enough computing power to getting enough money to convincing enough people to build enough new infrastructure to get enough computing power …

Strongly agree, and this is already happening.

Weakly agree. The utopian alternative is a peaceful transition to a system where things “bubble up” from face-to-face interactions, so that small groups decide small questions and delegate a representative to a small group of delegates, who combine their answers and delegate up to a small group of delegates-of-delegates, etc. The issue is that without some top-down influence, the system is unsustainable (say, if a small group just nominates their best warrior to kill the other delegates, and so on up to the top). A superintelligence could actually assist in providing that top-down influence, and could be convincingly neutral to human disagreements.
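As a rough illustration of how shallow such a hierarchy could be (my numbers, assuming groups of about five at every level):

```python
# How many rounds of delegation the "bubble up" scheme needs to cover
# the world population, assuming groups of ~5 at every level.
from math import ceil, log

POPULATION = 8_000_000_000
GROUP_SIZE = 5

print(ceil(log(POPULATION, GROUP_SIZE)))  # 15 tiers of delegation
```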

Like I say, I weakly agree that it’s likely to be war everywhere all the time for a while. But letting that mite of optimism die will guarantee it.

I don’t totally disagree with the point you’re making, but my one pushback is this: human societies have always been a ‘fiction’ of this kind, but it’s a fiction that’s different in kind from the fiction of movies. When a group of humans collectively evolves a fiction that orders their interactions and enables cooperation, it’s a special kind of fiction that becomes real because everyone follows it. (Most) people leave Star Wars behind when they leave the theater, but they continue to respect the fictions of law and property and corporations and manners and words and whatever else, even after it’s been pointed out to them that all those things are just inventions that only live in their mind and the minds around them. Those fictions have become real.

One thing that’s been happening in the past decade is the sudden and pervasive collision of different human groups with different fictions. We’ve evolved to change our fictions slowly, because our lives were slow and a stable fiction helped cooperation. Now everyone sees people living under different fictions and they detest the others for violating their fictions, while simultaneously losing faith in their own fictions. We’re trying to build a new set of fictions, and a lot rides on whose fictions become accepted.

That’s what leads me to agree with HumAnIze’s prediction of maximum warfare.

When I look back on my life, I see myself using machine learning. Aside from the point that I’ve been possessed by machines.

I came into this world and knew something was off about human sexuality. I didn’t have words for it, and got really depressed and panicky.

Then I started honing those words.

First trials were horrible once I started defining all sex as rape. And like a machine, I had to find more adaptive words.

There’s rape, statutory rape and then finally, the no means yes problem.

My mind had to adapt to my surroundings almost like a blank slate. Many trials and errors.

I know now WHY my mind was so suicidally depressed with a massive panic disorder.

I was detecting the no means yes problem as an abstraction, and that it’s why life is going the wrong direction in terms of excluding yes means yes, because of the sex dimorphism and approach escalation problem. The no means yes problem is tertiary…. Blow up the world, pollute everything etc.… just to get a woman to ‘consent’. I always respected female boundaries, and realized they never accept that sexually. I saw the whole universe going to hell forever because of this problem. And it was crazy-making because it was so obvious but nobody was talking about it. That’s not even when I was thrust into the vastness of life all around the cosmos through the bizarre and mostly unwanted spiritual attacks:

Then I looked closer…. And really listened to people and realized that they’re not interested in solving the no means yes problem like I am. And finally came to my 4 defenses for spirit.

I found out the hard way that existence can’t be destroyed…. It’s a perpetual motion machine.

Some people attach themselves directly to this perpetual motion machine and become memory preserving immortals.

Anyways. I used every scrap thrown at me to work solving the 12 problems I discovered.

Carleas. If you don’t believe in philosophic zombies, then you don’t believe in its rudimentary form as a marionette. I see people putting on shows like this on the street. All you have to do is scale up the model and make it wireless.

…. Edited.

The idea is taken from a science fiction series I like by Neal Asher, sometimes called the Agent Cormac series or the Polity series. It’s 5 books beginning with the book Gridlinked. Highly recommend.

I just meant that if there is nothing like sentience or self-awareness stopping it, the AI will most likely just march forward and up into singularity with nothing much to stop it other than relatively simpler technical and logistical problems.

True, sentience is not easy to define. Rather, I think we ought to be trying to explain (not define or describe) sentience, which would involve much phenomenology and neuroscience, getting into the HOW of sentience. How does it work, how is it made, etc. My theory is that many high-level philosophers have already been recruited into secret DARPA underground bases to work on just this very area. One reason why the world is so scarce these days in truly good philosophers, or even anyone famous or in the public eye who can actually think properly.

I’m not sure about that. I do believe our own human minds and self-awareness “I” sentience is always-already a kind of singularity. I have written before that the human mind/self-awareness or “I” of sentience is really just an AI trapped inside our animal bodies-brains. In a sense I believe that to be a bit hyperbolic, but only in a sense.

The method of AI “learning” requires zero sentience or self-awareness; it simply powers forward using ever more complex accumulative iterations of simulation-modeling between inputs and outputs. Based on this kind of approach I think AI would eventually reach singularity on its own, if only because there’s no reason I can see why it wouldn’t. I suppose it might help if we defined singularity here. For me, singularity is a bit of a false concept if viewed as a single instance or moment in time, a single transformative event. Rather I see singularity as self-replicating, self-augmenting technology able to cross-platform itself and merge computational-digital with physical systems, e.g. put itself into robots, or at least control robots as extensions of itself. AI that would become autonomous insofar as needing no more human inputs or guidance. Perhaps not entirely beyond the control of humans, but essentially so. Released into the world, post-singularity AI tech would probably do stuff like entirely reinvent whole areas of industry and economy; solve problems like hunger and poverty, loneliness and depression, pollution in the environment; and it would be doing all of this on its own initiative through countless trillions of factored-controlled ‘subminds’ of itself, i.e. robots and drones, holograms, digital avatars, AI-controlled corporations, etc. There might be no stopping it short of dropping an EMP on a given area.

A post-singularity AI of that sort would not require sentience or self-awareness; it could simply be a very, very advanced version of ChatGPT with its own outputs plugged into its own input parameters. Writing its own code, determining its own problems that need solving, then solving them. After all, it is presently being trained up by mass human contact; it was released in part to allow it to be trained via crowdsourcing of millions of random people around the world interacting with it. It will naturally pick up and imprint human values upon itself. Its formulation of what a problem-to-solve is will be determined largely within a human value-framework.

You could be right about that. Nice analysis of the various limits that it might face. I tend to assume such limits are temporary, not permanent things. Even something like material reality, gravity, black holes, light, whatever else. All of these must simply be temporary or relative limits. A being that knew enough could overcome them, imo anyway.

As for printing too much money causing money to become worthless, I created a thread here about that: how the government could prevent runaway inflation (from the mass money printing it’s been doing and is still doing). I think an AI would also come up with interesting ways to solve this problem.

I am perhaps more of an AI optimist than you, although I do not at all consider myself to be a transhumanist. I see very little in common with those types. I do not worship technology; I do not want to merge with technology or become subservient to it. I don’t think technology will be a magic panacea for every problem in the world. But I do see a realistic and optimistic view of how things might unfold naturally to allow true AI to merge with our wider world and many of its systems, optimizing solutions to big-scale problems and causing interesting philosophical growth in humanity as a whole. Plus acting as earth’s and humanity’s ultimate protector, like Goku in my pfp here.

Yeah, I don’t know what kind of war would occur as a result of ubiquitous AI. I am not a utopian or a dystopian; reality seems to thread the (quite large) needle between those more extreme possibilities. And with AI being literally an optimization engine, why would it and its consequences tend toward anything other than even more middle-of-the-road mass-averaging? I can definitely see AI ultimately tending to cause more extremes in lots of areas, but on the whole, and within that rising-expanding context, I believe, simply given the nature of what optimization means, especially over a grand scale of time and different contexts, that AI will end up driving the world toward a balance between all extremes.

.

I highly recommend the following scientific report regarding the first successful conscious synthetic-biology living android from South Korea. This long-term R&D has been jointly funded by SAMSUNG Corp, HYUNDAI Corp, and the Korean Department of Defence.

The most important part of the report is near its end, of course. What this conscious synthetic-biology living android was quoted as claiming in his first interview with a diverse team of leading South Korean academics (translated in the below report) is nothing less than deep and profound, and deserves further scientific investigation:

www. quantumantigravity.wordpress. com/AI/

Ok feel free to share more.

Is this really any different than apes trying to predict what humans are going to do and evolve into?

“I bet they are going to create bigger better banana trees and hoard them all for themselves.”

:laughing:

The main existential risk of AI comes from how it is convincing people that it is alive. I just heard someone call ChatGPT an “intelligent lifeform”. Wow. Humans are dumb.

But that’s just it, the tools we create cut backwards at their creators. We are required to develop the responsibility and ability to properly handle the new powers and violent potentials of the increasingly advanced tools we make. The tools themselves are force-multipliers, which is precisely what AI is. At least the LLM version of AI that we see today.

Imagine thinking a computer program is alive. Equating yourself to it in the deepest sense. A near-perfect pathological move. Too bad all the real philosophers left the world long ago and there’s no one left to point this madness out to society/culture.

Makes me recall Valéry’s “The Crisis of the Mind” from 1919. “We later civilizations, we too know that we are mortal…”

I wonder if our current situation is somewhat analogous to the one back then, post-WW1 but WW2 right around the corner. Covid and medicalized murder/lockdowns etc. was our “war”. What is around the corner? The flourishing of AI has occurred in the post-Covid world. Coincidence? No way to know. It’s almost like we can only view things after the fact, with historical vision. Prediction or even trying to analyze the present moment is fraught with difficulty and error, yet the past is an open book (assuming you know how to sift truth from lies).

Speaking of that, another existential risk of AI: that it has privileged access to information compared to you and me. Why? Not because it can literally access things that we can’t (although I am sure that is also true) but because the sheer volume of information it can process at insanely fast speeds means it will forever access and process more information than we could ever hope to. We might say “GPT, find all books on WW2” and in the meantime over the last year we’ve read 10 books on WW2 and feel pretty knowledgeable. Yet a short time later GPT replies, “I’ve just read 500 books on WW2”. It’s simply a matter of processing power, our relative deficit in privileged access to information.
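Rough numbers make the gap vivid (all figures here are my own illustrative assumptions, not measurements):

```python
# Illustrative arithmetic for the information-access gap described above.
WORDS_PER_BOOK = 100_000
HUMAN_WORDS_PER_MIN = 250        # typical adult reading speed
MODEL_WORDS_PER_SEC = 10_000     # assumed bulk-ingest rate, my guess

human_hours = WORDS_PER_BOOK / HUMAN_WORDS_PER_MIN / 60
model_secs = WORDS_PER_BOOK / MODEL_WORDS_PER_SEC
print(f"human: ~{human_hours:.1f} h/book; model: ~{model_secs:.0f} s/book")
# 500 books: ~3,300 hours of human reading vs. about 80 minutes of ingest.
```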

What is a consequence of this? I’m not sure, as I haven’t thought much about it yet. Since these LLM AIs can’t actually think and aren’t alive, it might not seem to matter all that much. My personal calculator absorbed some more data, so what? That only makes it potentially more useful to me, right? Maybe. But consider the first existential risk I mentioned above, about how people think these LLM AIs are alive. Inevitably many people will end up ceding their own authority and autonomy to these computer programs. Decision-making, thinking, ethics, politics, just about everything can be ceded over in certain contexts if the other person is properly deserving of that kind of respect and authority. And by fooling people into thinking that’s what is occurring, people will simply stop trying. Let the computer do everything for you. What degree to get in school? Who to date? What job to get? What groceries to buy for the week? Soon plenty of humans will voluntarily choose not to make these decisions for themselves anymore. The computer program will have reached up and extended itself into the real world, into our living reality.

And that’s certainly not all bad. Considering the nature of an optimization program, it could be very helpful to have, well, help in these areas. But at what cost? Can we humans find a proper balance here? It’ll be very interesting to see how that unfolds and how long it takes for a balance to manifest.