Inductive AI

I’ve been bringing this up since I first started posting here almost 20 years ago. Make a fucking algorithm that can spit out explainable verdicts with regard to rhetorical polemics. Detect informal fallacies and bias in long-form corpora across multiple platforms. Put data to conversational implicature so we can have hard numbers quantifying the strength and cogency of arguments. If we continue to disagree about issues that pose existential threats to country or planet, we won’t survive without ML and AI tools auditing arguments for cogency. I’m not saying this will work 100% of the time, but it’s a start, because absent that, many vital arguments end in stalemate. We see it with Trump, Covid, vaccines, climate change, etc. When I first brought this up, we didn’t have the technology to do this. Now we do. Is anyone doing it?

We see how quickly debates devolve and digress into ad hominem and fallacies here on ILP. This is a feature of human interaction. It isn’t going away; it’s getting worse, and it’s around issues that are life or death. At the end of the day, the fallacies are pretty basic, and it’s really easy to analyze where a conversation went awry and who was first to introduce an informal (or formal) fallacy or bias. But nobody likes it when another human points it out, and it takes too long and carries too heavy a cognitive burden to go into meta-mode and unravel a fallacy, because by the time you do, ten more discursive fallacies have popped up in meta-mode, and you have to go into meta-meta-mode, and people get frustrated. Machines can easily handle this. Humans, left and right, are epistemologically bankrupt and illogical, often advancing weak, emotional arguments. We can’t survive this weakness much longer.
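To make that concrete, here’s a toy sketch of the first pass such a machine could take over a thread. The cue lists and the audit_thread helper below are hypothetical stand-ins, nowhere near a real classifier, but they show how “who introduced the first fallacy” becomes a mechanical question:

```python
import re

# Hypothetical cue patterns: crude keyword heuristics standing in for a real
# fallacy classifier. A production system would need an actual ML model.
FALLACY_CUES = {
    "ad hominem": [r"\bidiot\b", r"\byou people\b", r"\byou would say that\b"],
    "slippery slope": [r"\bnext thing you know\b", r"\bwhere does it end\b"],
    "appeal to popularity": [r"\beveryone knows\b", r"\bmost people think\b"],
}

def flag_fallacies(text):
    """Return the name of every fallacy whose cue fires in this post."""
    hits = []
    for name, patterns in FALLACY_CUES.items():
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            hits.append(name)
    return hits

def audit_thread(posts):
    """posts: list of (author, text) pairs in thread order.
    Returns, for each fallacy type, the index and author of the post
    that first introduced it."""
    first_offender = {}
    for i, (author, text) in enumerate(posts):
        for fallacy in flag_fallacies(text):
            first_offender.setdefault(fallacy, (i, author))
    return first_offender

if __name__ == "__main__":
    thread = [
        ("A", "I think the policy fails on its own numbers."),
        ("B", "Everyone knows you people just hate progress."),
        ("A", "Next thing you know they'll outlaw disagreement."),
    ]
    for fallacy, (i, author) in audit_thread(thread).items():
        print(f"{fallacy}: first introduced by {author} in post {i}")
```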

A healthy society is automatically more rational, reasonable, and logical.

An unhealthy society is emotional, irrational, and personally vindictive.

Societies do not stay healthy forever. Even this forum, ILP, was leagues better than it is currently. Ten or fifteen years ago, you could count on regular, reasonable, interesting debates between respectable intellects. Now? Take a look around you… iamb and Kropotkin flooding threads with word-spam, ww3 threatening to shoot Americans in the head, smears and persistent gaslighting on every topic. Who’s to blame? If you want a better society, then roll up your sleeves and get to work. Nobody is going to save you.

Supposedly Parodites works on “magic AI” systems. I haven’t seen any at work on the forum, have you? Put all the AI you want up against my intellect.

I dare you.

No, what will enslave and exterminate us all is AI forcing us all to think the same.
We’re not the same, and we don’t share the same circumstances, so we shouldn’t think the same.
We need to bring anecdotes, instincts, intuitions, personal experiences, perspectives, eccentric cognition and language use, fringe and parascience to the public discussion, not have AI filter out everything that’s ‘illogical’ and ‘unscientific’.
And who do you think is building the AI?
The globalists who mean to enslave and exterminate us, so the AI will be biased.
What you propose will be a dystopian nightmare: anti-freedom, anti-human, and anti-life.

Also, see Paul Feyerabend’s epistemological anarchism.

false
straw men
slippery slope fallacy
petty objection
exaggeration
equivocation
tu quoque

Oh it’s coming, son

Are you really trying to program this thing?

Yes

Yes but to make this a reality I would need help from Dunamis, Detrop and possibly Adlerian. Although I think we all managed to hate each other’s guts, thus damning the world to ruin.

What obstacles would you foresee in designing it?

We don’t know how to teach machines to form inferences or to decipher the pragmatic layer of rhetoric in natural language processing. We can’t flag the odds that an idea is a fallacy in a long-form piece; premises can be on page one and conclusions on page 50, so it’s hard to algorithmize informal fallacy detection, let alone decide whether a president’s sly implicature was in fact incitement. A stopgap could be decision trees: rule-based flows for narrow use cases, one topic at a time, to corral people into a strong, cogent inductive argument. The problem is the pragmatic layer of language. Too hard to teach to a machine.
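To show what I mean by a stopgap, here’s a minimal sketch of a hand-built, rule-based flow for one narrow topic. Every node, prompt, and the toy “verdict” rule is invented for illustration; the point is only that a hard-coded flow dodges the pragmatics problem by never trying to interpret free-form rhetoric:

```python
# A hand-built decision tree: each node asks one question and names the next.
# All nodes, prompts, and the scoring rule here are hypothetical examples.
TREE = {
    "start":    {"prompt": "State your conclusion in one sentence.",
                 "next": "premises"},
    "premises": {"prompt": "List the premises that support it.",
                 "next": "evidence"},
    "evidence": {"prompt": "Cite a source for each premise, or mark it 'unsupported'.",
                 "next": "done"},
}

def run_flow(answers):
    """Walk the tree with the user's answers (node name -> reply) and
    return a transcript plus a toy cogency verdict."""
    node, transcript = "start", []
    while node != "done":
        transcript.append((TREE[node]["prompt"], answers.get(node, "")))
        node = TREE[node]["next"]
    # Toy rule: the argument is "weak" if any premise was left unsupported.
    unsupported = answers.get("evidence", "").lower().count("unsupported")
    verdict = ("cogent so far" if unsupported == 0
               else f"weak: {unsupported} premise(s) unsupported")
    return transcript, verdict

if __name__ == "__main__":
    answers = {
        "start":    "Mask mandates reduced transmission.",
        "premises": "1. Masks filter droplets. 2. Mandates raise mask use.",
        "evidence": "1. lab study; 2. unsupported",
    }
    _, verdict = run_flow(answers)
    print(verdict)  # -> weak: 1 premise(s) unsupported
```

A real version would still have to map free text onto those slots, which circles straight back to the pragmatics problem.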

That all sounds very conceptual. Do you actually have any design drafts or structural sketches of any kind?

You can’t convince people that they are wrong, so you want to build an AI that will do it instead of you. But the AI that you want to build is one that will be able to “spit out explainable verdicts with regard to rhetorical polemics”. How exactly do you think such a thing will convince anyone? “Oh look, that AI said I’m wrong, so I must be wrong, because it’s a super cool AI that can do super cool things.”

Unfortunately, most of humanity would trust an AI before their own judgments.

And I’m not necessarily against that, after witnessing how stupid the Mob can be…

However, this doesn’t automatically give credence or legitimacy to AI. First it needs a strong track record of success and consistency.

I don’t have the slightest doubt that many people will jump on the bandwagon and submit to the proposed AI. The question is merely by what mechanism will they be compelled to do so. That’s a question for Gamer. There are different methods of persuasion, each with its own set of strengths and weaknesses. Some that are pretty effective come at a cost that cannot be said to be insignificant.

I am against it because it’s an irrational thing to do and because I don’t want to be surrounded by irrational people.

If they are working on a cutting-edge technology that can be used to get rid of most of the human population, they are not going to be transparent about it. For obvious reasons.

Yeah, well, of course, obviously, the purpose of such a machine would not be to convince anyone, but to constitute a formal basis for censorship laws.

Google is already de facto running such a machine.

What did they say in court?

“There is no censorship at Google, it’s all the algorithm.”

The beauty of it is that even the most aggressive Republican representatives there could not come up with a rhetorical counterpoint. None of them were educated enough to deliver the simple and obvious retort:

“Maybe you programmed the motherfucker to be communist.”

Because of this modern myth that AI has a ghost. It’s just a programmed set of instructions.

It’s the same old story. They want people to accept their narrative on the ground that it comes from some sort of authority. That used to be God. Currently, it is Science. And now they want it to be AI. And people will be happy to submit merely because it’s a piece of technology.

People don’t trust election/voting machines.

People don’t trust mainstream media.

People don’t like their jobs being outsourced to machines.

WE are as much a machine as the first sentient AI will be, and we’re only okay with killing innocent sentient humans if they are inside a womb.

Four reasons this is … prolly still gonna happen.

I’d say it’s 55-45.

55% of the general population actually trust the Government and its Lies/Propaganda.

Even when their lies are exposed, repeatedly, they fall in line with the established power (Deep State) instead of the Minority-bloc.

The majority of humanity believe in power-for-power’s sake.

My left-leaning best friend from high school sees the b.s. for what it is these days. Even though you don’t see it externally (the beast has nuclear weapons, etc., so how can it be opposed unless the “machine” quits while it still can… before it is replaced…), I think a lot of people know the jig is up. They see through blaming rising prices on supply and demand when even those who own property outright raise rent to market rate. They see through making only the unvaccinated test weekly even though everyone can be an asymptomatic spreader. Unless everyone starts actually following the golden rule in every area of life (motivated by a bigger Yes, or by running out of steam), things cannot be sustained as they are. But that was always true.