AI Is Not a Threat

But more to the point: if the either/or partakes of all levels, then artificial intelligence may be the agency that binds together the natural, determined world and human consciousness. The bonding, or overlap, between them may continue in a continuum, either in an absolute sense or in a relative, positive sense, depending on the level of the continua (either absolute-closed or relative-open). That no satisfactory answer has been given in a positive way (Gödel) is testament to Cantor’s genius.

I think this idea can be conveyed intuitively even to children, without getting into why positivism is ultimately inconsequential.

Otherwise, the real foundation of a philosophically based psychology knocks its own foundation out of the game, the language game of its own foundation.

This would prove once again, that the most complex representations of reality are the simplest.

Okay, but what I am trying to grapple with is imagining an actual context in which we might try to differentiate an AI consciousness that is “manufactured” by us from the consciousness of flesh and blood human beings “manufactured” by nature [or, for some, by God], in which it is assumed that some level of autonomy exists.

Now, whether or not human beings or AIs have free will, the physical laws of the material universe would seem to be wholly applicable to both in the either/or world.

What I am trying to imagine, however, is a world in which communities of human beings come into conflict over, say, the means of production — capitalistic or socialistic?

Which one ought it to be in order to be in sync with the most rational [and, for some, by extension, the most virtuous] human interactions?

The AI machines might presumably face the same fork in the road. Would different AI communities clash over the same conflicting assessments?

Is there a way for either us or them to determine which means of production is the most reasonable, the most moral, the most in sync with entities like nature or God? The one that we ought to pursue if we wish to be in sync with the “ideal”.

In other words, is there a “higher” form of consciousness able to resolve what I construe to be conflicts rooted in dasein, conflicting goods and political economy? The seemingly intractable conflicts.

What makes the terminator dangerous to mankind? Well, in part, the fact that it can’t be reasoned with. There is no is/ought mechanism implanted in his program/intelligence. My question then is this: Is there an is/ought component embedded in the consciousness of the AI machines that created him?

How would the machine intelligence transcend this particular dilemma of my own:

If I am always of the opinion that 1] my own values are rooted in dasein and 2] that there are no objective values “I” can reach, then every time I make one particular moral/political leap, I am admitting that I might have gone in the other direction…or that I might just as well have gone in the other direction. Then “I” begins to fracture and fragment to the point there is nothing able to actually keep it all together. At least not with respect to choosing sides morally and politically.

Territoriality is just a single component of that which encompasses all existing things: subsisting. Existence itself.

And, in the modern world, that revolves around the forces that drive the global economy. And that revolves around securing the best of all possible worlds — one in which a nation is in the most advantageous position in regards to markets, labor and natural resources.

And here, as I have noted previously, many flesh and blood human consciousnesses reduce is/ought down to “show me the money”.

What might be the machine equivalent of this?

And then there is the capacity to either ponder or not ponder why anything exists at all; and why this way and not another. The really Big Questions.

Is that something the machines that/who created the terminator would be concerned with? Would their own presumably “higher consciousness” come any closer to actually answering questions like this?

Before trying to flesh out the suggested problems in toto, as posed, I may initially take a stab at it.
In terms of motive and goal setting: the question was posed as to how the starting point (the presumed beginning, territoriality, and others) begins as a starting question in very literal terms, where there are not yet demonstrably figurative implications in the questions dealing with finding differences. The motive question dominates the one dealing with goal orientation, whereupon the contexts within which those questions can be guessed at gain more probability. It is premature, though the thought process is not necessarily, or primarily, premature, because traces of it subsist, notwithstanding temporality. When those secondary goals have started to filter through into the original motives, meaning when consciousness of the connections between motive and goals starts to emerge, then differences between them arise.

The difference between Capitalism and Communism, specifically, is a good example. Before the difference emerged between them, before economic forces played the role of effective markers upon the changing class differentiation, there was only determinism through subjugation and repression.
The contextual relativity between Being and Existence was never understood. Socio-economic forces developed the concept out of the one-dimensionality of a primary apprehension of a force of repressed will, not available to a much later acquired consumer-capitalist democracy.

If you reduce such questions to an either/or of primary identification of the former, and try a contemporaneous differentiation based on that level of consciousness, the only cognitively possible analysis will be fused with the emotionalism and intuitionism of predicting outcome and goal.

As the integration of these separated elements starts to fuse, more and more developments need to be explained in terms of more symbolic content, as they too fuse with other more or less symbolic elements.

They do reach a point where confusion on all levels starts to reign supreme, and things need to start being broken down, or reduced to better understood elements.

AI, then, is probative in terms of finding meaning between the two poles of primary and secondary processes, and the only way it can do this is to establish linkages with and within both. Hence the problem of differentiating meaningfully between the three: God, Nature and AI. The sources are the same; on that original level there is no doubt, but by the time they were separated it seemed as if their origins were dissimilar.

Capitalism and Socialism also had the same source, and thus their goal was unknown except in very existential terms. The goal of evolution was not known since creationism required no goal setting, except in terms of the mind of God.

Now that God seems to be dead, we have to, or AI has to fill in thousands of years of this lack, with contentious and hotly debated reasons for existence. Now we can differentiate motives from goals, Being from existence, but the terms of such difference are yet to be defined.

Intentionality is as close to a new version of god’s plan as conceivable, it seems to me.

I agree that AI is not a danger as the science fiction works paint it to be. There are some concerns of course, but mostly on our human end of how we will react to AI, how its existence will affect us psychologically or diminish certain professions thus causing economic harm to workers.

We should welcome the existence of another sentient, conscious, intelligent life amongst us. I would personally love to have long talks with a true AI, it would be quite illuminating. And it would learn from us too… AI would up the stakes, increase the demand that humans intellectually fortify themselves away from insanity and laziness. Plus AI could help us achieve extremely advanced technologies. And act as a check on human corruption in politics.

On the note of sci fi, the best works that deal with AI realistically and philosophically/socially that I’ve seen are the books by author Neal Asher. I highly recommend them.

Do you really think that Stephen Hawking is that intelligent?

And if yes: Do you think that Stephen Hawking can and, if yes, will prevent the complete replacement of all human beings by machines?

That would not be bad. :slight_smile:

But would that not be the „wonderful world“ again that has been promised so often - by idealists and ideologists (by the way: by Keynes too) ?

That would be bad. :frowning:

Human beings and especially the Godwannabes among them tend to overestimate their power and to underestimate the power of other beings.

Anyone else notice the Superman logo changed? This is supposedly a Mandela Effect.

Hitler was “physically instantiated and therefore constrained”. So was Stalin. Neither was superintelligent.

Both managed to gain a huge amount of power. Both caused damage, destruction and millions of deaths.

Neither had access to high speed communication, automated factories, robotics or a network connecting billions of computers.

There was nothing to worry about …anybody could get rid of Hitler with a pocket knife.

Some people were optimistic about Hitler and Stalin.

So what happened?

Why should people have been concerned? Why should they be concerned about AI?

The crucial point here though is the extent to which an “intelligent argument” is rooted more in Marx’s conjecture regarding capitalism as embedded historically, materially and dialectically in the organic evolution of the means of production among our own species, or the extent to which the arguments of folks like Ayn Rand [and the Libertarians] are more valid: that capitalism reflects the most rational [and thus the most virtuous] manner in which our species can interact socially, politically and economically.

Now, if a community of AI entities comes to exist down the road, which approach would they take in order to create the least dysfunctional manner in which to sustain their existence?

On what would their own motivation and intention depend here?

Again, there’s the part embedded in the either/or world. Things that are true for all intelligent beings.

But: intelligent beings of our sort are able to ponder these relationships in all sorts of other rather intriguing ways.

For example:

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

Don Rumsfeld is one of us. How then would a machine intelligence respond to something like this?

And that’s before we get to the part that is most fiercely debated by intelligent creatures of our own kind: value judgments in a world of conflicting goods.

Haha.

Iambiguous:

The question can be reduced to which part, indeed.
Ayn Rand diverts the course to a naive rationalism consisting of literally shrugging off any otherwise prejudicial argument which opposes facts posited otherwise. Embeddedness means a great deal for her, in terms of a developmental process based on naive common sense, postulated on the power and will of political, social, and complex didactical motives based on so-called human wants.

The evolutionary context within which human understanding is grounded, in an either-or mentality, has to a certain extent been transcended; the will to power has been differentiated and reversed into a power to will. Needs have been overcome to effect this differentiation; after all, Marx showed a trend toward an eventual outcome through the materializations of the dialectic.

But has it? If so, it pertains to the either-or as well as to its differential analysis. This has nihilized the one, as it did the other.

This is what was meant by the late comment on human history having divested itself of utility in this respect.

Thus, AI will subscribe to the choice of the right value, as far as motivation and outcome are concerned, by vitiating a code of moral judgement without digressing toward lower-level choices. It depends on the program of choice: either one that further de-differentiates toward ascribed choices, ascertaining meanings of probability based on lower-level probabilization of meaning, and gives up trying to outguess more integrative functions of building architectures of yet-to-be-realized models based on survivability, or existential needs.

A recourse of values modeled no longer on the outmoded wants of an economy of profit and gain, of a propaganda of expansion between wants and gains consisting of prioritized affluence, as newly emerging existential needs diverge from the spurious wants, as Marx said they would.

Why? Because society’s grasp of the promotion of values has become increasingly devalued, and the newly and dramatically negative expectations of existence have become tenuous.

AI can be progressively fed this reversal, and relative value can be set in a series of input-output calculi of diminishing expectations.

In other words, the material dialectic, presupposed to favor an ontological union as a result of an artificial synthesis with a common-sense union of architectural modeling, may view the emergence of a new model not in terms of a union of both, but of a pre-existing identity.

Therefore this equivalency is just a retrospective look at divergence, whereas the basic unity of the model may be viewed as the primordial model, which has been differentiated in the only way possible: by application of fields of probabilistic sub-modeling. That this was based on revision, as in Ayn Rand’s case, is not in doubt.

Doubting this on an extended timescale is like building a house of cards while guessing whether the glue used will hold over that extension.

That an AI can be constructed to overcome the future of feasibility in this regard is like reading tomorrow’s paper today.

The basic value of currency cannot be forecast; it is a kind of guessing game as to how far inflation will devalue it to the point where confidence in it is lost.

Confidence in the diversion of the value of currency in society may not be able to be made to coincide with the lack of corresponding values associated with it.

This is always the case with the modality of current value, where drastic social change is necessitated by a much too diverted and simulated correspondence of non-equitable values. And is not AI basically an attempt at simulation?

AI can easily be a threat if it’s hacked and its moral filters are tampered with. But overall it might be like anything else: statistics and spin will calm people into seeing that rogue robots aren’t a great danger to humans compared to guns, trains, planes and automobiles.

I don’t understand the worry about AI to be that AI might one day be as dangerous as other humans, but that it will be especially dangerous to us. I also don’t understand the danger posed by other humans to be particularly well correlated with intelligence; I agree that neither Hitler nor Stalin was a supergenius (though I’m sure they had their talents).

The concern I’m responding to here is the idea that, by its nature, superintelligent AI poses a special threat to humans. I concede that it may pose a normal threat, and that it may have its own objectives just like every extant intelligence we know of. But I don’t concede that this makes us at all vulnerable to an AI turning us all into paperclips or anything of that sort. Like human Hitler, superintelligent AI Hitler would have to recruit an army of like-minded individuals, each independent and physically instantiated. Given the current state of AI hysteria, it seems it would be easier to recruit an army of neo-luddites to destroy such a machine than for the machine to recruit real-world resources to its cause.

To the extent that I actually understand her, Rand presumes that human intelligence is able to be in sync with her own rendition of “metaphysics”, including the subjunctive components rooted in emotion, in human psychology. Her philosophy is an epistemological contraption embedded in the manner in which she defined the meaning of the words she used in her “philosophical analysis”. It was largely self-referential, but she does not anchor the is/ought “self” in dasein.

Apparently, she understood everything in terms of either/or.

How would machine intelligence then be any different? How would it account for the interaction between the id, the ego and the super-ego? How would it explain the transactions between the conscious, the sub-conscious and the unconscious mind?

Would this sort of thing even be applicable to machine intelligence?

Would it have an understanding of irony? Would it have a sense of humor? Would it fear death?

How might it respond to, say, Don Trumpworld?

But: What “on earth” does this mean? What we need here is someone able to encompass an assessment of this sort in a narrative – a story – in which machine intelligence thinks like this.

But: this thinking is then illustrated in a context in which conflicting goods are at stake.

In fact, this is exactly what Ayn Rand attempted in her novels. And yet the extent to which you either embraced or rejected the interactions between her characters still came down to accepting or rejecting her accumulated assumptions regarding the meaning of the words they exchanged. Words like “collectivism” and “individualism” and “the virtue of selfishness”.

Just out of curiosity…

Are you [or others here] aware of any particular sci-fi books [or stories] in which this sort of abstract speculation about AI is brought down to earth? In other words, a narrative in which machines actually do grapple with the sort of contexts in which conflicts occur regarding “the right thing to do”?

Conflicts between flesh and blood human intelligence and machine intelligence. And conflicts within the machine community itself.

In re your “need for an assessment”, as that is the only one I am able to adequately reply to at this time: the idea was to point to the dynamics of a reversal, a projective-introjective turnaround between a conclusion, or conclusions, drawn upon the “reductive probability” inherent within either-or type thinking.

That is probably what is going on with Rand, to give a pseudo-psychological twist to basic understanding, a kind of simulated synthesis bordering on legitimizing both, in the grey area which needs more focus, if it is to succeed in more than a popularization of ideas behind the ideas.

A more succinct way to put it is that she is defensive in the basic psychological manner, crafting communication to resemble signaling to popular understanding, or, as the positivists would have it, to common sense.

Her psychologism is an appeal to that common sense.

That the above is only a yet-to-be-filled shell in need of filling is obvious; the filling will be provided shortly, within a day.

But the need to simulate the missing area with more clarity, as far as bringing together the nature of the psychologism with the dynamics of a general correspondence as far as logical consistency is concerned, all within the larger scientific and pseudo-scientific simulation between man and machine, is within one logical system (bubble).

Other bubbles, some incorporating others, some seen as exclusive of others, can relate to aspects of set-mathematical certainties, in line with Cantor’s visualization.

Which is more determinant as a function, or derivative, hinges, or is hinged upon, in any given corresponding dynamic.

I get your, or anyone else’s, conjecture about levels of presumptive or overt understanding of this simulated grey area, and I agree with you, or anyone else, that it has to be grounded.

There are probably a plethora of sci-fi books out there; the last I recall reading about featured a fading superintelligent AI slowly losing its IQ, including its emotive functions, due to failing studies relating to it.

Will try to reference this.

I am sorry, iam, I could not find a reference, but other items popped up meanwhile, namely an interesting CBS report on an article I may mention in passing, with the title “Narrowing the GAP between human and artificial intelligence”.

Finding myself tested as well in regard to referents, I am aware of the situation, of what Ms. Rand must have felt herself: kind of like having to express a rationale for capitalism at a time, after the red scare following WW1 and WW2, entangled with the ideological confusion of the intervening Great Depression, which cast a huge albeit largely forgotten shadow of large proportions at the time.

Notwithstanding the fact that she was a Russian, turning Marx upside down, so as to cast the shadow in terms of the language of the light of day.

It is within these perimeters that simulation coincided neatly with the message of the media, that also being the message here, amplifying your observation into the semantic games I referred to above.

Here, the either-or of the prologue shifts into the center: the need to reconcile, on an abstract basis, the proverbial impressions of an uncommon familiarity with how things in the real world of politics play out.
These impressions are catered to in the case of a revision; in Rand’s case, the very tumultuous and joyful days following VE Day.

There was caution in the air among ultra-conservatives, who feared a slide back into some kind of infamy, whether it be from reorganized National Socialist cells in Germany or the re-emergence of the Marxian model of worldwide Socialism.

The common sense approach which became the torchlight for the next few decades reversed both the politics and the social psychology of a reversed Marxism, where social gains could be attained far beyond what a Socialist Marxism could offer. These were the arenas of real values in the 50s, where social realism competed with abstract expressionism.

The competition achieved goals. Whether or not these goals were products of real reality, as people envisioned them when high times prevailed for those few decades, they amounted to products of manufactured values for the most part, based not on uniformity but on differences in the West, and especially in the US.

Differences implied self-determination, based on competitive efforts in the West, and abstracted differences were sold as subjective wants were catered to, mostly out of the Madison Avenue and Hollywood dream factories.

Social realism could not afford sharp differentiations; they were logically precluded from large jumps within a collective of aims within socially tested derivatives. They were derived from a historicism implicit in their architecture, which held strong for about two generations after the world wars. Nowadays decay has set in, in spite of considerable efforts at reconstruction and maintenance of relics of the past.

That Ms. Rand had to patently import these mostly conceptual forms of architecture made little impression on those for whom architecture was merely a figure of speech, implicit in the import of the various philosophies of language that seemed to work on some kind of subliminal level, just like advertising.

Looking back, Karl Popper’s ‘The Open Society and Its Enemies’ seems more convincing as a conceptual tool to define a narrative, more inclined to form more than mere psychologisms as figures of speech, than Atlas Shrugged.

But we as a society have come full circle now, with Terrorism opted for as the new frontier, a new opening for a viable enemy, and basic values have become circumscribed within this ouroboros of a closing circle. The center is not holding; that which is artificial cannot simply be put into an either-or cast, and it is no longer a question of whether it is real or a simulation, but of what level of complexity such simulation consists of.

What are the goals or the motives of an artificial machine, for instance? Can Trump really be nothing else but a machine-like entity, grasping at nothing but winning? Winning for its own sake, to substitute for art for its own sake?

Or may the art of salesmanship someday consist in installations of program trading on ever-descending levels of demonstration? Not as far-fetched as it sounds, and who would care if such is not ‘real art’?

Defensive psychology is a prelude to the breakdown of bracketing formal arrangements as the complexity of technological advances grows in modern warfare. Boundaries melt into each other, and as the dissolution of the spheres of relevance and resemblance creates new spheres of ambiguous and anomalous power structures, these change according to the extent of their merger through relevance.

Changes can be slow or abrupt, and that is the result of a fortuitous, opportune application of power motives. These are usually very carefully crafted, and made to appear as consequences of chance, for public consumption.

How such things imply the same kind of dynamic in Trumpism is uncertain, but the correspondence with larger historic movements is no doubt weighed carefully. Therefore, without commenting on Trumpism per se, I would hazard that a very strong political channel drives its course, and that it is not nearly as vulnerable to attacks as described.

That it’s a prescription, or is based on a prescription, of a major reversal of values, there is little doubt; whereupon future historians may be able to comment on how important a factor AI played in its progressive course.

That’s because it inherently lacks a connection with humans. This is similar to the way that humans respond with more fear and anxiety when confronted by reptiles than when confronted by mammals … they feel an understanding and control around mammals (which may be misguided).

But the threat is amplified by the fact that it can process tasks much faster than humans and that it never sleeps.

That’s extreme but I can see that a paperclip factory could “easily” produce negative results for humans.

I don’t see that as being particularly difficult. Monetary rewards and social engineering would be relatively simple for an AI to use.

I disagree. Once the machine is “out of the box”, it’s going to be hard to get rid of it.

Flash:

Facebook shuts down AI experiment after two robots begin speaking in a strange language that only they could understand. Experts call the incident exciting, but incredibly scary.

UK Robotics Professor Kevin Warwick said: "This is an incredibly important milestone, but anyone who thinks this is not dangerous has got their head in the sand. We do not know what these bots are saying. Once you have a bot that has the ability to do something physically, particularly military bots, this could be lethal.

This is the first recorded communication, but there will have been many more unrecorded. Smart devices right now have the ability to communicate, and although we think we can monitor them, we have no way of knowing.

Stephen Hawking and I have been warning against the dangers of deferring to Artificial Intelligence."

The Facebook robots Alice and Bob were to speak only in English, but quickly modified it, using code words and repetitions to make conversation with each other easier for themselves, creating a gibberish language that only they understood.

That became possible through Google Translate, developed last year.