I thought I had a neat idea there with "let's abolish morality"

but it turns out that what I really mean is basically “I like consequentialism, not deontology.”  (I’ve read about one whole paragraph of each of those articles.)

But argh!  Life is so much harder when you look at everything as good or bad!

(skippable elaboration: external morality is just a heuristic, right?  it’s like when a kid says “why shouldn’t you shoplift a candy bar?”, you could say
- “well, then the store owner is out $1, and then it makes it harder for him to keep the store in business, and he might have to fire his workers, then they’re out of jobs so they can’t buy things, so other stores go out of business; it’s got negative economic effects” or
- “if I steal, other people think it might be okay to steal, and store owners will get suspicious, and this creates a world of lies which I don’t want to live in” and then explain the tragedy of the commons or
- “I might get caught and the chance of me getting caught times the badness of getting caught is more than $1”, and explain how expected value works or
- “stealing is wrong.”
In the adult case, it’s even worse, because you don’t even know the effects.  In either case, appeal to external morality is the quickest way to figure out “should I do X or Y?”.  It’s often a necessary hack, but it’s still a hack.  And when you forget that it’s just a hack, you start thinking “he loaded the dishwasher WRONG” or “she said the WRONG thing” and when people wrong you that really hurts!  But you’re not really wronged in any grand true way, and because it’s so minor, it’s better to think about it as if you weren’t even wronged.)
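The expected-value comparison in the third bullet can be sketched with a couple of lines of arithmetic. (The probability and penalty below are made-up illustrative numbers, not claims about actual shoplifting risk.)

```python
# Hypothetical numbers, purely to illustrate the expected-value comparison.
gain = 1.00              # value of the stolen candy bar, in dollars
p_caught = 0.05          # assumed chance of getting caught
cost_if_caught = 100.00  # assumed "badness" of getting caught, in dollars

# Expected cost of stealing = probability of getting caught
# times the cost you'd pay if you were caught.
expected_cost = p_caught * cost_if_caught  # 0.05 * 100 = $5

# Stealing is a bad bet whenever the expected cost exceeds the gain.
print(expected_cost > gain)  # → True
```

With these numbers the expected cost ($5) swamps the $1 gain, so the heuristic and the calculation happen to agree.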

Urgh!  Deontologists stress me out!  Then I am reminded of a bit in “The Size of the World” by Jeff Greenwald in which his friend Sally comes to a great realization that, though she tries to be accepting of everything, she doesn’t accept it when others don’t accept things.  It’s played for a bit of laughs in the book, but it’s a lesson I’d do well to learn.


Unknown -

Yeah, Kant’s “categorical imperative” always sounded like hogwash to me. What do you think about option 3, this “virtue ethics” thing? It looks to align most closely with the heuristic I tend to use: an action is good if it was well-intentioned. Allowances need to be made, of course, for the actor’s cleverness and rationality, but it seems to work in most cases for me.

Oh, also, I do love the “accepting everyone except for those who aren’t accepting” thing. The hard part, I think, is the realization that accepting something doesn’t mean I can’t help “fix” it or whatever. Maybe one day I’ll get it too :)

Dan -

Virtue ethics: sounds good to me too. It might be the sorta “computable” ethical system that makes sense: just be virtuous, and don’t worry about whether each action is good. If you’re virtuous, the things you do will be good, both by definition and by just “common sense”, which is a nice reality check. I mean, consequentialism is nice, but you can never know the real results of something, and most of the time you can’t even know anything close to the real results of it. If I eat this apple, is that good or bad? Consequentialism, then, is like the “super Turing machine” that we use in arguments about computability and the arithmetic hierarchy and stuff, while virtue ethics is an actual Lenovo laptop (or Cr-48!) that you can use in your real life. Maybe?

Accepting: Right, but also, accepting something doesn’t mean you CAN help fix it either. (I tend to err in the direction of “trying to fix things.”)

Unknown -

Sure. The Serenity Prayer is the most awesomest thing I’ve ever heard… but still leaves the tiniest bit to be desired, since it makes it sound like you cannot “accept” the things you can/intend to change. I imagine the real peace comes with having a definition of “acceptance” that is orthogonal to “changeability.”

Thinking again on consequentialism vs. virtue ethics: it occurs to me that my heuristic for “being virtuous” involves trying to compute the end results of my actions. It’s not that I think they’re accurate – but I often have no better metric than “doing the most net good.” In most cases this works out much like my (patzer) chess games – I stop calculating a branch once I’ve, say, taken the opposing queen (and see no recapture on the horizon).

Dan -

I agree. I guess if you were awesome, you’d be able to accept anything. And then you’d return to your basic mental state, and then independently you’d calculate “should I change something?” and deal with that. That’d be nice, eh?
