thinkdr
By not maximizing that fun to the heights but rather being aware that balance is important.
We can also minimize human suffering by doing the above. As the saying goes, paraphrasing it, "The more we play, the more we pay."
Also, the more others pay as a result of our need for so much pleasure.
But do you see “human fun” as the end-all? What about the deep satisfaction which comes from achieving something worthwhile? What about the struggle and challenge which come from attempting something? That can be, if not so much fun, still exhilarating and stimulating, and afford one such a deep sense of qualia even while the struggle and challenge are still there.
It is the seeking of so much fun by the hedonist which leads to disaster and chaos in the world.
…and who will be inputting all of the intelligence and wisdom into these machines to teach us?
But of course, someone like the android Data, from Star Trek: The Next Generation, might be helpful.
But are there these machines in existence to teach things like INTELLIGENCE and WISDOM?
I know very little about machines but at first glance, I would say no. Where does the Consciousness come from which this supposedly super-intelligent learning machine would teach or give us? From a human, right - or would I be wrong?
I am not so sure, I do not intuit, that any machine can teach us love, compassion, wisdom, intelligence.
Machines are capable of inputting and outputting(?) facts - how they could think like Sherlock Holmes I do not know, lol - I have built humans, not machines.
I do not believe that machines get their knowledge from the ether or from the gods.
So such important things as wisdom, intelligence, morals and ethics, virtue - how can that come from a machine which only thinks and has no Heart?
That would depend on the individuals and their societies. I may be wrong here but I think that question is a bit too broad but again I may be wrong.
I can honestly say that the more I learn, the more I realize I know absolutely nothing about most things.
I may be wrong here, but wouldn’t/shouldn’t the goals which are already programmed into AI learning machines already be aligned with human goals?
But my statement is a bit too simple.