You can skip to the summary.
This concerns the ultimate forms of democracy and dictatorship. Consider that (a) the masses aren’t really smart enough to function properly, and (b) dictators aren’t reliable.
Suppose that, for individuals, there’s a happy medium to be struck between truth and lies. People need a certain amount of knowledge to function, and they need a certain amount of ignorance to avoid a mental breakdown. I assume this is true in the most literal sense (the brain carries only so much data). But it may also mean that truths must be distorted, and even fabricated, in order for people to move on. (They may lack both the emotional stability and the memory for the “truth.”) But assume that we ultimately want the truth. Any failure of the truth can corrupt our data, ruin our endeavors, and leave us unable to pursue what we want based on our knowledge.
So, to simplify, assume:
(1) Too much knowledge makes us dysfunctional as individuals.
(2) ANY lack of knowledge makes us dysfunctional as a society.
We want a functional society.
“We” and “us” can either mean all human beings strictly, or the set of all known sentients.
Is there a solution to this problem?
I imagine part of the solution goes something like this, regarding all sentients.
Some entity (a computer, collaborative knowledge, etc.) is driven to know everything as completely as possible. By its own knowledge, it holds the mandate that others must be lied to under certain circumstances. This entity cannot make conscious decisions (it would then be sentient, and could suffer cognitive disorders), but it can analyze input about our common genuine goals, calculate/ponder whether those goals are pursuable, and pursue them. It uses enforcement, political strategy, persuasion, etc.
The job of the rest of society is to make certain that this entity is designed for the purpose of pursuing these common genuine goals, and not someone’s private or insane goals. Basically, we would never be able to know exactly what it’s up to, only that it’s for our own good.
Basically, I’m pointing out six roles being played: (1) the machine government (it has no sentience, and is ordered by strict rules); (2) lies (the truth is too dangerous); (3) civil servitude (people fix the machine and make sure it’s running); (4) no one gets in the machine’s way (or at least it has amassed an army so that no one dares); (5) people DO make sure it’s being run properly enough to escape the control of some random idiot; (6) no one really questions the machine, aside from number 5 (it’s too powerful for the questions to carry any weight anyway).
Consider the elite governments of today: their mechanistic rules (which are frequently broken, nonetheless), the “black ops,” and the layman’s limited understanding of it all. Is that not the same predicament we’re headed for?
SUMMARY
Is it possible for us to compel a machine to act on behalf of us all while knowing and controlling very little about what it actually does?