A moral agent is someone who sufficiently understands, and is sufficiently capable of applying, certain methods. In this case, the moral method applied to interpreted reality. This agent would be aware that behaving
morally was made mandatory by the moral doctrine and by ‘society’. Then he would choose, which is to say,
he would prefer.
I believe we can be moral agents without free will, but most believe moral agents have free will.
That’s one of the newer takes, that preference is enough for moral agenthood. I’m not sure it is, setting aside the determinism question and assuming humans have free will, of course.
You also include something else, which prompts me to ask: is knowledge, or general awareness, of the moral doctrine required for moral agency?
Why aren’t you sure? Preferences can be ignorant, but most of them are based on repeated experiences,
which are forms of knowledge, even if that knowledge is very basic or primordial.
Common law says that we must know the difference between right and wrong before we have moral agency.
This is why we can plead insanity in court, and possibly escape punishment because we were too
crazy to know what was right and wrong.
Well, it kind of contradicts what you’re about to say here…
This is why I’m not sure that preference is enough. Insane people have preferences. Dogs have preferences. But they’re not moral agents. Preferences might be a necessary condition, but they are not sufficient. It may be the case that all examples of moral agents exhibit preferences as an accident of some other feature of moral agenthood.
I consider animals to have their own little form of morals.
Their instincts are their law. What they prefer, and what they remember, they value.
If you have ever had a pet dog, you know that dogs can think a bit.
But yes, a preference isn’t enough. One must understand consequence, and empathy, and law.
he had to defecate outside, or else we held his nose in it and shamed him
so now when he has to go… he holds it, and gets our attention, so he can go outside. if we’re not there, he holds it, because it’s wrong to go in the house
occasionally, when we are all distracted for a very long time… he simply can’t hold it, and goes in the house… we don’t shame him as much for that
he understands that it’s wrong, or at the very least, understands that there are negative consequences to going in the house
while right and wrong is perhaps a simplified version of morals… it’s how we’ll teach a 6 year old, and our dogs… and it seems to work
I’d doubt that very seriously. Let’s see where this goes…
You’re just personifying his behaviors.
Sounds more like your morals…
Which is not morality, else hunger would be an expression of morality.
But 6 year olds aren’t moral agents.
To respond more generally to your points: learning behavior is something done by many species throughout the animal kingdom. If you want to say that learning behavior is practicing morality, you might as well say that all knowledge is a moral issue, when it isn’t.
similarly to how my morals were imposed upon me, my dog’s morals were imposed upon him
i don’t understand
P1. “he understands that it’s wrong, or at the very least, understands that there are negative consequences to going in the house” is an example of morality
(insert logic here)
C1. Hunger is an expression of morality
go
sounds like you already have an opinion of what a moral agent is (or at least is not)…
I’m suggesting a 6 year old is a moral agent, and so is a dog
i’m not claiming that learning is practicing morality
i was suggesting that morals themselves are learned, and that the [contemplating] behavior itself is practicing morality
i’d say “holding it” is practicing morality for the dog
and “not holding it” is practicing morality for the dog
the action taken doesn’t matter, once the moral itself has actually been learned
please defend the claim that all knowledge is not a moral issue (probably easy, i’d just like an example)
in my opinion:
-a moral agent is an agent that has a moral (only one is necessary to satisfy the ‘moral’ attribute)
-awareness of morality or any specific morals is not necessary for an agent to be a moral agent (see dog, 6 year old)
-the frequency with which the ‘good’ intention/outcome is actually practiced does not matter (for instance, if i contemplate not stealing because it’s wrong, but regularly steal anyway, this is still an example of a moral agent)