I’m a little bored at the moment (the main reason I ever visit these forums), so I pose you this question:
Should you always do what you judge is best for you?
Now, say you could build an infinitely powerful computer that could predict the outcomes of every action and, so long as those outcomes were quantifiable, tell you what action would produce the best outcome. Would you use it and always do what it said was best?
Would what you judged best be the same as what the computer judged best? In other words, do the things that cannot be quantified hold so much weight that a computer could never truly tell us what is best?
And assuming (regardless of your answer to the previous question) that the computer always told you what was best, what could be gained by not doing what it said?
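For concreteness, here is a minimal sketch of the decision rule such a computer might apply, assuming outcomes really were quantifiable. The actions, probabilities, and utility numbers are invented for illustration; the real question above is whether any such numbers could capture what “best” means.

# A toy version of the hypothetical computer's rule: pick the action
# whose predicted outcomes have the highest expected value.
# All actions, probabilities, and values here are made up for illustration.

def best_action(predictions):
    """predictions maps each action to a list of (probability, value) pairs."""
    def expected_value(outcomes):
        return sum(p * v for p, v in outcomes)
    return max(predictions, key=lambda action: expected_value(predictions[action]))

predictions = {
    "take the risk": [(0.6, 10.0), (0.4, -5.0)],  # expected value = 4.0
    "play it safe":  [(1.0, 3.0)],                # expected value = 3.0
}

print(best_action(predictions))  # prints "take the risk"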
I’d say you can always gain something from risk, even if you foresee some negative consequences, especially if you’re looking to temporarily cure that boredom of yours.
Your only gain may be telling yourself you’ll never take that risk again, but ultimately that will save you something in the future, whether time or energy. Plus, you’ll learn to make the best of a somewhat bad situation, and you can analyze free will in the moment.
That depends on how you define yourself, or rather, whether you do so accurately.
If you defined yourself correctly, your actions would always correlate with who you are; if your actions don’t correlate with your definition of yourself, then your definition is wrong, isn’t it?
No, because other people also possess meaning and, concordantly, are morally important. Utterly selfless sacrifice is sometimes the right thing.
Two things get me about this:
“Best.” Anyone who’s not a consequentialist would scoff at the idea that a computer could calculate the “best” outcome in every scenario. And even among consequentialists, there is no consensus on the criteria for deciding which course is “best.”
“Quantifiable.” How do you quantify what is truly good, especially if you are a utilitarian hedonist like me, who sees pleasure as something inherently subjective and qualitative?
But, ignoring that and addressing the gist of your question, I’d say yes. I would use the computer to solve moral dilemmas, and act on its solutions, because it would expediently and reliably give me a moral outcome. And that’s good.
Yes, the computer you described could never truly tell us what is best, because, in my mind, what is best is decided by a subjective, emotional preference that inherently can’t be quantified. If, however, the computer were empathic (somehow), it could tell us everything that is good…
If you disobeyed the recommendation of the computer, you would retain your moral agency, which is critical to the concept of “morality” as separate from the concept of “meaning.” In a universe where that computer exists, there is only one moral decision: whether to follow all of its suggestions or to reject them. But you would also, inherently, do the wrong thing if you rejected them, no matter the moral independence retained in doing so.
Yes! You are bored and so am I, hence my reply!
Now, supposing that this all-powerful computer was God and I was Adam in the Garden of Eden and so on and so forth…
Being a shit kicker, I’d need some messiah to redeem me for my attempted de-monopolization of God’s monopoly on knowledge, or at least for my ambition to get there. Perhaps another computer, programmed by the first computer (the ornery one), offering such means of redemption through sacrifice.
The computer is only as good as its Programmer, meaning…I think…that an all-powerful computer would have to be created by an all-powerful being, making either one superfluous. Eradication through equality.
Perhaps God must die for us to become immortal and make all the right decisions!