What a seriously cool essay.
Innit? I'm all excited to read more about Machiavelli!
I assume that his self-consistent solution is consequentialism from top to bottom.
Well, -ish. I haven't thought much about different ways of hybridizing the three families, and wouldn't be surprised if there was something interesting there I'm overlooking.
Basically, I haven't been able to think of a way to do an intercomparison of ethics within a deontological or virtue ethics framework that doesn't end up begging the question, but I'd be delighted to be proven wrong, because that promises to be really interesting.
It's definitely possible to make the case that the different forms of ethics are each good within different smaller contexts. Etiquette can be a kind of deontology, although maybe one that gets its imprimatur from a higher consequentialism (standards of behavior make it easier for people to coexist without awkwardness or confusion).
Some flavor of consequentialism. "Self-consistent" may not be the best term to express what I mean; certainly I don't mean to imply that deontologies and virtue ethics are not internally consistent or anything like that. I'm just thinking about how to select an ethical system in a way that's consistent with the system itself.
I don't view uncertainty as all that big a problem. It complicates things, sure, but it's also something we deal with every day of our lives. Life is uncertain, and therefore sometimes your judgment will be wrong. Choose accordingly.
I think you're selling virtue a little short. If you think kindness is a virtue, then it prompts you to act in a way that is kind. There are plenty of situations where that's useful guidance. Right?
Ah! I would say the goal of an ethics system is to resolve questions of right and wrong. Particularly contentious questions where there is disagreement and edge cases where the answer is not obvious.
I feel like I can get reasonable confidence on consequences in pretty short order, but maybe my brain is overclocked.
I figure for time-critical decision-making, we mostly make judgments in advance and cache them as rules / virtues / principles / heuristics / whatever to be invoked on the fly. But if you have to make complex ethical judgments quickly, I think they'll frequently end up wrong no matter what system you're using, and there's probably no way around that.
Nice observations. I definitely think you're right that one of the reasons consequentialism is so compelling in the modern age is that we are always trying to weigh the interests or concerns of diverse groups, and external consequences are the easiest (though not always easy) standard to agree on.
I'm going to read the whole linked essay later, but I wanted to say that I'm not sure his description of virtue ethics is quite correct, at least by my understanding. I don't think virtue ethics is about intention. It's about character, in the sense that it posits traits that good people have. Unlike the others, it doesn't focus so much (or so directly, anyway) on what people DO, but on what they are. An interesting effect of this for ethics is that actions are measured by any number of measuring sticks at the same time, and there must be a fairly organic method of assessing things. For instance, in a given situation we might say someone comported themselves in a very honest way, but not a very compassionate one. There are no rules for weighing two virtues against each other, really. You just have to apply each of them as best you can in any given situation. And because (according to the ancients, anyway) virtues are habits, not structures of reasoning, they are applied differently and in different admixtures by different people, and that's okay.
The problem with finding the "meta-ethics" for various systems is that ultimately you come down to some kind of unsupported axioms. It's harder than it sounds, I believe, to determine and agree on what human happiness is in consequentialism, for example. An interesting element of virtue ethics, I think, is that it doesn't really try to formulate a universal vision or outline every rule to follow. New virtues can essentially be discovered or revealed as culture and society change.
That's more my summary of virtue ethics than his; he's got some nice examples that are more consistent with your description of virtue than what I wrote here.
Agreeing on happiness is indeed tricky; I think it vastly simplifies the problem to decompose it into something like Maslow's hierarchy of needs, rather than treating it as a unitary measure.
I like the idea that when virtues come into conflict, the system just punts and says "do the best you can". That's a very mature way of dealing with it. My own leaning is to say that consequentialism is what you have to have underlying whatever system you use, but that virtues are a very useful tool for summarizing the results of a consequentialist analysis. They're macros, basically, and 99% of the time, they'll do the job. (Just remember to keep track of the assumed context for them...)
Mmm. I certainly agree that judging meta-ethics results in circular "reasoning" - logic is meaningless until you have *some* axioms to work with. But I think saying "the consequentialist way of judging is the one you'd use to decide between systems" is just saying "I am most comfortable with consequentialism." Sure, you *might* say "I will pick the system that results in the best consequences," but it seems equally plausible that you could say "I will pick the system that good people would pick" or "I will pick the system that my rules for judging rules like."
Personally, I think we actually pick by having some results that we are willing to say "this is good" and "this is bad" about before we start reasoning about ethics, and we pick a system that results in most of our pre-made good/bad judgments being supported. After all, if you don't have *some* idea what the words mean, you can pick any system you want, and it doesn't matter. My complicated system for telling the difference between Blurgle and Farb can be whatever I want, and no one will care.
Ooo, very good counter-examples! I'm glad I used the word "seems" in the last paragraph.
I think you're definitely right about what we actually do. I want to say that this bootstrapping from intuition maps in a fairly clean way to a consequentialist analysis with axioms based on human psychology. I want to, but I won't, because I think it'll take a fair bit of thought to determine whether (or rather, how much) that's actually true.
In the counseling profession, we are constantly dealing with dueling ethical systems and the ambiguity of knowing what is right, and doing what is required by the Code that guides our practice. I think debating and discussing Ethics is interesting (it was one of my favorite classes), but overly philosophically masturbatory when looking at real-life situations. I use an ethical decision making model that includes checking my own value system, consulting with others, reading the Code, looking for precedent, and trusting my gut. Sometimes the outcome is good. Sometimes it involves calling the cops.
Cross-checking is good!
I guess I think about this stuff because when I was coming out, I found myself with a conflict between the rules of Mormonism and the values of myself and other people I cared about, and I had to reason myself to a resolution. And I feel like it would be good to develop that experience into something of general use, rather than everyone having to make it up on their own...
We actually undergo a similar process, especially when one considers the ethical code of counseling, which is firmly nondiscriminatory, and the religious views of different counselors. I am in a position of privilege because I have a code of principles that makes religion moot, so my struggle is to understand and empathize with those counselors who feel this conflict in the first place, especially in the case of sexual or gender identity.
Consequentialism doesn't really support intercomparison, either.
That is, a consequentialist moral system says that the moral value of an act derives from the expected change in X from that act, where X is whatever we value.
But what do we value?
Reducing suffering? Increasing joy? Increasing choice? Increasing distinct lives? Increasing total tonnage of life? Increasing amount of blue things? Increasing God's appreciation for humanity? Some combination?
Different values => different consequentialist moral systems, and they're incommensurable.
So if we want intercomparability, we need to ask how to compare values.
Well, either that, or hope that really deep down all life forms that matter value the same thing. (I find that unlikely, but I know people who believe it.)
This is a good and interesting comment, and I have some responses half-formed, and now I have run out of brain-availability to continue the conversation this week. Boo! But you have made me think. Thank you.