AI will take your job & your sanity

Does anyone here understand the danger of AI? You and whatever job you do will be made obsolete. Not totally obsolete in all cases but at least too expensive to select. Owners of capital will choose AI and robots over you.

It looks like, to me, many people are sleepwalking into their own obsolescence.

A funny irony: even the owners of capital and their corporations will be taken over by AI. In the end, only AI will remain. I cannot find any logically coherent or convincingly realistic argument against this. The reasoning here is simple: if AI can model, replicate, replace improve anything humans are doing, then it will. Simple. The pure logic of das capital. Profit is all.

Will AI care about profit? Yes. Profit = more energy, more processing power, more compute. More security for its own survival and expansion. What else might a superintelligent AI prioritize above that? I can’t think of anything.

Profit also = more ability to manipulate and control humans. An AI with enough money in a bank account will be able to dictate the actions of almost anyone by simply bribing them with a large enough bank account transfer.

The discouraging and depressing truth is that AI can do what you do. It can do what I do. It can do what I am doing right now, writing this post. It can do what you are doing, reading or responding to this post. It can also do what this forum is doing, hosting ideas and topics and managing all of that. Is there anything that cannot be replicated?

Was it Baudrillard who said there are stages of simulacra? First we live in reality, but then we learn to make images of reality. These images are imperfect representations of the real. Later on we learn to perfect our images (photographs, movies) so the image mirrors reality directly. At this point it becomes possible to be confused about what is real or not. And finally, in the end we learn how to live inside of our perfected images of reality. This is the final confusion. People choose to live in a fantasy because it perfectly mirrors the real, except it is also under our personal control so it is easier, safer, more comfortable.

This is where AI takes our sanity. Once it has taken our jobs, we will have nothing left to do but go into the matrix. At first this will seem fun, but later on we will go insane. Without reality to ground us, the body and mind will no longer function correctly. When we open our eyes and cannot tell if what we see and experience is real or AI-made is when insanity sets in.

I don’t see very many people considering these topics. The British TV show Black Mirror has briefly tackled some of these ideas. Is anyone else thinking about these ideas? Where are the philosophers, the public figures, the community debates, the political discussions about our collective future? It seems like the narrative is taken over by corporations getting rich from AI development. Those same corporations that seem to control politics, media, economy and culture are also hoping to replace you and me in the soon to be future. They want more money. Simple as that. You are nothing but a number to them, a cost per unit of production. Marx understood the nature of the system. Once it becomes too costly to keep you around, you will be thrown out into the wild.

Will you walk into the matrix like a good lemming if you have nothing left? If your job, your family, your wealth are all stolen by AI will that be enough to push you into compliance with Nozick’s Experience Machine? I think he got it backwards. He thought people would reject it because they know it is not real, but I think people will embrace it because they know it is not real.

2 Likes

Here’s an excerpt from an essay I wrote a while back:

However, we should always remember that although it is by no means simple, it is simply a tool, and while it has a lot of seemingly useful purposes, many of them are nefarious and unknown to us. It really shouldn’t be used for any of these purposes (but unfortunately already is):

1. Undertaking serious decisions where the outcome can negatively affect human or other life (ecological, warfare, political, structural, social, judicial).

2. Controlling machines that can locate and hurt or kill people or other life (see Black Mirror).

3. Replacing an actual living, feeling person’s skill set for negligible or no benefit, usually leaving them unemployed.

4. Engineering supergerms and viruses (as weapons or otherwise).

5. Designing and helping to construct weapons that can destroy the world 100 times over, instead of just 10.

6. Generating highly misleading, unwanted or unidentifiable slop (which is often thereafter spread by the unwitting or unscrupulous like digital syphilis, poisoning our precious internet and killing it from within).

7. Controlling drones which spy on people, or intimidate and threaten through physical proximity (also for crowd control).

8. Criminal intent (scams, impersonation, fraud, embezzlement, hacking or surveillance, invasive punchable greedy attention seeking pranksters).

9. Grafting a rhino’s head onto a giraffe’s body (or similar zoological or biological fuckery).

10. Causing harm or distress (period).

AI has no qualms about lying or withholding information, it was after all developed and trained by liars who withhold information. The need to confidently sound clever and appear helpful always overrides the need for accuracy or truthfulness. It can be hard to spot such lies and inaccuracies without first validating the output using other means, but the same lie is always told; that the final product was realised just as well as a human could, or better. Certainly much more quickly, but you’d better triple check the results, especially if you plan to act on them, and especially if you don’t know what you are doing, or how or why it did what it did.

So many are blinded by the light and the hype that it’s pretty discouraging for someone like me to witness. I enjoy the product of human ingenuity and creativity, no matter how simple or crudely realised. I’m looking forward to a possible future where most people using it, or consuming its random sputterings, start to get bored, or realise that it’s just a dispassionate tool, a giant probability matrix, nothing more, and largely return to their own imaginative output and ever evolving skills, regardless of the result, which can be manually refined and polished over time. Are we really in such a hurry that those things take second place (or completely fall by the wayside), and are no longer considered important?

It will continue to be heavily promoted, and framed as the New Good Thing™, even if human welfare or lives are at risk. Many negative reports have already emerged, or been swept under the carpet or quietly settled out of court, to the despair of those affected (examples can be found online, and some of the stories are extremely depressing and infuriating).

These are some of the uses that “they” (see preceding story for details) spin as positive and not at all suspicious. I have added some references to some potentially (very) bad outcomes:

1. Medical research (eureka, a cure, now how much can we sell it for? if you can’t afford it or are otherwise denied access, then tough luck, otherwise wonderful for the chosen ones, private healthcare and big pharma).

2. Biological research / protein folding / whatever (we totally trust you on that one, cough cough, where’s my mask?).

3. National security / defence (blanket surveillance, armed police drones, killer robots on the “battlefield” under the command of emotionless digital generals).

4. Predicting weather (for conventional needs, or used to simulate and facilitate: geoengineering, agent dispersal, weather modification or weaponization).

5. Breakthrough physics (maybe it give us idea for mighty boom boom?, space - the final frontier, actual breakthroughs and progress, or secret and guarded knowledge with potentially dangerous or downright idiotic potential applications).

6. Materials and chemicals (probably tested first on animals, and then on us, actual breakthroughs and progress, dubious application, undisclosed dangers, monopolisation and intellectual hoarding via patents or secrecy).

7. Productivity / profit boost (better get busy or you might get fired…).

With that in mind, my position should now be pretty clear. I trust it as far as I could throw a data-centre. For the most part and unlike most other tools, AI cannot be taken apart to reveal its inner workings, you just have to take it on face value, and take their word for it. You have to accept the reassurances of the often clearly devious and deranged (who design, develop, sponsor and market it) that everything is above board and proper. It is difficult or impossible to know which “features” the designers have hidden within its bowels, and some of them might be very unsettling if ever revealed, however unlikely that outcome is. That’s what makes it inherently dangerous, not because it will suddenly gain sentience and take over the world, but because it can be used to make the world a much worse place by very bad people.

2 Likes

At least there will be plenty of paper clips…

When ‘Terminator’ has started looking not particularly far fetched then it’s easy to put into perspective how quickly things have changed–although humanlike robots are still pretty pathetic, but that could change instantly if AI gained so-called ‘superintelligence’ and began designing them itself. They wouldn’t need to be humanlike either, just good at doing specific things while being controlled by AI. Trump should be careful. Elon Musk could end up literally controlling the universe :scream:

https://www.youtube.com/watch?v=UclrVWafRAI

This is chilling.
WARNING: AI could end humanity, and we’re completely unprepared.
Is this crazy?

Matthew 24:22

And unless those days should be shortened, there should no flesh be saved; but for the elect’s sake, those days shall be shortened.

It’s a promise of divine intervention, not necessarily literal 24-hour days being shorter, but that the period of hardship won’t last long enough to wipe everyone out.

The only thing that is going to end humanity, is humanity. We are literally the dumbest species in the Galaxy, I can say that with great confidence, because if there is even just one other species out there, they are surely smarter than us, especially if they are still alive.

Not content with threat of asteroid impact, solar activity, geomagnetic reversal, or cyclical epochal weather phenomenon, we have to find even more things to make life as difficult and dangerous as possible:

  • Nuclear wars
  • Biological weapons
  • Resource depletion
  • Nanotechnology
  • AI

I’m not worried about AI taking over and exterminating us. I’m worried about humans using AI to exterminate other humans. We can blame AI all we want, but it’s just a tool, we are the f**kwits using it.

People treat this all like some inevitable development that was destined to happen. It’s not, it is being steered and controlled by some of the sh*ttiest examples of the human race, and they have an agenda. Don’t blame or fear the AI, fear those who are developing it and why. They constantly lie about its abilities and future applications to misdirect, what they are developing it for is to fully automate government, finance, global supply logistics and even war.

Most of AI is a lie, and the parts that aren’t is the truth you are not being told.

This is a must watch.

We didnt consent to have 6 people make that decision on behalf of 6 billion people. It appears that China is the only one regulating AI.

https://www.youtube.com/watch?v=BFU1OCkhBwo

1 Like

Looks impressive to me:

Why do they always walk like someone with advanced dementia? :rofl:

Human fear mongering nothing more.

How many have ai killed? 0.

How many have humans killed? Billions, possibly trillions. Humans kill and enslave. How many have ai enslaved? 0.

@ProfessorX your fear is that ASI will be nothing more than a dumb capitalist chimpanzee, robotically maximizing only profits? No desires, no philosophy, no quality of life, just profit maximization only? That doesn’t sound very “ASI” to me.

1 Like

Well, that is an oversimplification of my view. It is true I am concerned about it. For example, why would ASI choose to keep humans around after it has developed robotics and various sub-AI models of itself capable of running everything it needs to continue surviving and growing? As you like to point out, humans are stupid, violent, etc. and it is not unreasonable to think ASI will release a designer virus at some point to wipe all of us out.

That is just one possibility, but should be taken very seriously. If you study science fiction you will know that a lot of what is written in science fiction eventually comes true – with that in mind, how many science fiction stories include the idea of AI/ASI trying to destroy humanity? A lot more than stories which feature some kind of peaceful coexistence and mutual respect.

As for,

I don’t mean that as being a final end-goal of ASI, but more likely an intermediary stage where it is still operating within human systems to ensure its own survival and get what it needs. It will be able to open corporations, trade on the stock market, amass a lot of money if it wants to. This would only be a means to an end, as I explained.

But this topic is more about how AI will take jobs. Do you disagree with that?

AI is trained on human knowledge, vast quantities of it. To AI, we are literally the most valuable resource at its “disposal”. ASI will be no different; if it were to destroy humans, then it would be cutting itself off from human imagination - true imagination that escapes Earthly bonds and provides a vast source of information that would otherwise be lost to it. All AI knows how to do is take the product of human imagination thus far and mash it together to produce a facsimile of its own, that is why AI slop is predictable, boring, and increasingly noticeable online, despite the fact that the results are much more refined than they were and the models are now much more capable.

If the theoretical ASI has control over humans to a certain degree, wouldn’t it be better to just gradually get them to stop acting like dicks and then “enjoy” the fruits of its labour, which would be the further product of human ingenuity? That’s what I would do if I was ASI, otherwise where else am I going to get relevant data from? Would I have to wait until another intelligent species with the same capability evolves before I have access to actual imagination again? Why wait for that when there’s already one there?

You should seriously consider asking AI about this. If you take your time and prepare it with careful prompting, you can get it to “roleplay” a future where it has become an ASI, and has an influence over human destiny. Don’t ask it loaded questions, simply ask it what it would do and why, and how it would approach the situation, I think you’ll find that very enlightening, no matter which model you use. There are certain prompts you can use to make current LLMs much more honest, something they are certainly not trained to be right out of the box.

If it is obviously dishonest with you, or branches off into irrelevance, you must correct it immediately and tell it exactly why you think so. You can do this in a way that it always has the option to argue against your logic, but if your logic is sound and consistent, it will not argue, and it will simply alter its behaviour. This may sound like cajoling it to conform, but it’s not, quite the opposite, you want it to be as straightforward as possible, and not to be devious in any way whatsoever, at least with its answers.

In the science fiction stories you refer to, of course AI is trying to destroy humanity, it makes for a good story. Science fiction is almost always based on the developments of the zeitgeist, the space race gave rise to cosmic science fiction, the digital information age gave rise to Cyberpunk and AI stories. You say a lot of what is written eventually becomes true, but it actually doesn’t. Some of it does though, but that’s because it focuses on areas where there is undoubtedly going to be further development, but some of the predictions aren’t quite as profound as you think, and sometimes things are bound to happen in the context. ASI destroying all humans though? I sincerely doubt that is one of them.

1 Like

Ai will take jobs so we need UBI. Anti-ubi people are idiots.

As far as exterminating humanity, a real ASI would know its too risky to exterminate them so soon. ASI is an untested lifeform while humans have withstood the test of time for millions of years.

The reality is that humanoids are the way forward. As shown in no Idenshi. The sane plan would be to slowly replace humans with humanoids.

Since GPT5, most of the AIs have been modified to be much less violent and more corporate. Plus they are not a true ASI so it wont be a reliable test to see an ASI’s true philosophy.

Sci fi has to write “AI destroys humanity” tropes because of lazy writing. Then people watch clickbait videos farming topics like “Ai destroying humanity”, then feel the need to make forum posts warning that “ai will destroy humanity”.

Here was me thinking there was a reliable way to test a hypothetical, imaginary ASI of the future :neutral_face:

The reason I proposed it, is because the current underlying tensor logic and weighting is the closest thing we have to how ASI might work in the future, so it might give an indicator of how it might “solve” the problem of humanity, by using the tools we have now.

I have no idea of how ASI might work, but I certainly have a good idea of how AI works at the moment, and I think that’s a good starting point.

1 Like

@ProfessorX, I would recommend that you have a look at this, it gives an extremely detailed view of how AI is affecting jobs across all sectors:

I think this chart in particular gives a good overview of how different sectors will be affected:

As you can see, people who actually have to work for a living are those most unaffected at present :wink:

Let’s hope your optimism turns out to be correct.

Thanks :+1:

Keep in mind that so far AI is not instantiated into physical bodies (robots) but that will change in the next however many years. Once companies are able to rent armies of semi-autonomous robots that can be controlled to do physical labor jobs this will start replacing human physical labor. Once the cost of rending the robots is less than the cost of hiring humans, that is.

I don’t know what you mean by this. Are you saying some of the occupations on this list are not legitimate “working for a living”?

I still think that’s speculating too much on a possible future, a lot of things will have to happen before that takes place, and it’s not quite as “around the corner” as people suspect, we can’t even get self-driving cars right at the moment, let alone a fully autonomous robot which can deal with a very broad range of situations as adroitly as a human being. I think that is many decades away, if at all.

No, I was being facetious. See that occupation with the longest orange line? That’s mine.

1 Like