Friday, December 19, 2008

Zeno's Paradoxes

Parmenides was a philosopher who rejected change and motion as illusory (I've mentioned him before). One of his students was Zeno, who set up a series of paradoxes (some 40) to show that the concept of motion was absurd. Most of them ultimately stem from the problem of the infinite divisibility of finite spans of space and time. Nonetheless, they present perplexing problems, which certainly weren't soluble with Greek mathematics, and which strain even modern mathematical theory (Georg Cantor notwithstanding).

The two that are the most characteristic are the arrow paradox and Achilles and the tortoise. The arrow paradox first defines a thing at rest as something that occupies its own space, then observes that an arrow thrown through the air always occupies its own space; therefore it must always be at rest. Why a thing that occupies its own space must be at rest and can't also be in motion is unclear. It might have something to do with the idea that if the arrow occupies its own space and nothing more, then there is nothing to propel it forward after it leaves the hand. This was a problem that vexed Aristotle, but which we can now take for granted, leaving Zeno's arrow paradox not so vexing to us.

The Achilles and the tortoise paradox is more interesting, though. Achilles was reputed to be a fast runner. If Achilles races a tortoise and gives it a head start, then when Achilles starts running he will first reach the spot where the tortoise began, but since time will have elapsed, the tortoise will no longer be there; so he'll run to where the tortoise is now, but the tortoise won't be there by the time he arrives, and so on. Thus, Achilles would never reach the tortoise. We should remind ourselves of the ultimate purpose of this argument, which is to show that motion is absurd; therefore, nothing is in motion, there is no change.

Aristotle's response to this paradox is that it is irrelevant, since we can clearly see that things move. But Aristotle's refutation wouldn't convince Zeno, since he and Parmenides both realize that we perceive motion; they simply think that the perceived motion is an illusion. Georg Cantor would eventually come along with a resolution, using a new mathematics of infinite sets and actual infinities.
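
To make the mathematical resolution concrete, here is a minimal sketch of the convergence argument. The speeds and head start are my own invented numbers, purely for illustration:

```python
# Each "catch-up" stage covers only the tortoise's previous lead, so the
# stage times shrink geometrically and infinitely many stages sum to a
# finite time. Speeds and head start are invented illustrative numbers.
achilles_speed = 10.0   # meters per second
tortoise_speed = 1.0
lead = 100.0            # the tortoise's head start, in meters

total_time = 0.0
for stage in range(50):                      # 50 stages suffice to see convergence
    stage_time = lead / achilles_speed       # time to reach the tortoise's old spot
    total_time += stage_time
    lead = tortoise_speed * stage_time       # the tortoise's new, smaller lead

print(total_time)                                  # ~11.111 seconds
print(100.0 / (achilles_speed - tortoise_speed))   # closed form gives the same answer
```

Infinitely many stages, but a finite total: Achilles passes the tortoise a little over eleven seconds in.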

I think, though, even without this sophisticated mechanism we can already see that the paradox is problematic. The problem with Achilles and the tortoise is that, if the paradox is valid, then we should actually see Achilles being unable to catch up with the tortoise. The way the paradox is set up, it should affect both real and apparent motion. And yet we have many times seen faster runners catch up to slower ones and pass them. We shouldn't even be able to see this occur. And yet we do see it occur.

In other words, Zeno's paradoxes prove too much. They prove the absurdity of both real and apparent motion, and yet we observe no problem with apparent motion. This shows us that there is a problem with Zeno's paradoxes, though it doesn't show us what that problem is. It is up to later and more sophisticated mathematics to point out those problems.

Saturday, December 13, 2008

Freedom and the Ability to do Otherwise

Theological absolutism can frequently lead to certain contradictions. For example, it is supposed that God is omnipotent and yet that humans have free will. If God is all powerful, then God must control everything, including my actions. But if God controls my actions, then I can't have free will. One way to dodge the problem is to speak of potential omnipotence. In other words, God could control me, but God permits me to have free will.

Yet another solution is to approach the problem by dividing up the concept of freedom. Freedom is proposed to have two parts: autonomy and choice. Autonomy just means that I do my actions myself. I am not a puppet in any sense. For example, the somnambulist Cesare in The Cabinet of Dr. Caligari has no autonomy since he is controlled by Caligari, or in Being John Malkovich, John Malkovich loses his autonomy when his body is taken over by Craig Schwartz the puppeteer.

Though I think none of us would deny that autonomy is necessary for freedom, I think we assume there is something more. We usually also want to include choice, namely having multiple options. To be unambiguous, this is sometimes called the ability to do otherwise. Now, I don't really care to quibble over terms and debate whether "freedom" means just autonomy or means autonomy plus the ability to do otherwise. But when autonomy alone is put in a metaphysical-theological context to try to guarantee free will, it doesn't really work. How can I be blamed for doing any action when I could have done nothing else?

Descartes uses this distinction too in his discussions of freedom. He becomes convinced that reason leads to good action, and he even goes so far as to say that reason compels good action. Once you see the truth of something (with clear and distinct perceptions) you can't help but believe it, and once you see the goodness of something, you can't help but do it. But, being a faithful Christian, he can't deny free will. So, he has to demarcate this separation between choice and autonomy. He then claims that autonomy is sufficient for free will. But how can you possibly preserve responsibility in the full Christian sense and still embrace this idea of the compulsion of reason?

Sunday, December 7, 2008

Market Bubbles and Tinkering

There's been a lot of talk these days about the psychology of speculative bubbles, which is not always entirely useful for helping us understand the situation. Most of these theorists provide little insight, since they don't explain why bubbles happen when they do. If bubbles are simply caused by human psychology, we should expect them all the time, since human psychology doesn't change, and yet we don't see this in reality. Despite these shortcomings, I think there are useful insights to be found.

This leads to an article from this month's The Atlantic about how market bubbles are inevitable. I've seen other articles like it, but this one is particularly useful because it gives us some details about a number of different experiments in this area and their results. It's important to see the direct results of the experiments themselves, since it's often the experimental results, and not the experimenters' conclusions, that provide the most insight.

The experimenters created a very simple, low-stakes financial market where volunteers could trade stocks and receive dividends. These experiments show that in such a brief trading market, volunteers will tend to create financial bubbles, which duly crash, resulting in lower profits for everyone involved. To reduce the problem of poor information, the subjects are given constantly updated information about expected dividends (dividend payments vary randomly, but average 24 cents). Thus, the experiments seem to show that people are irrational, since they would have been better off if they hadn't tried to profit by speculation, thereby creating the bubble.
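
To see what those dividend numbers imply for a rational price, here's a small sketch. The 24-cent average comes from the article; the fifteen-period session length is my assumption, purely for illustration:

```python
# Fair value of a share is just the expected value of its remaining
# dividends, so the rational price declines linearly toward zero.
# The 24-cent figure is from the article; 15 periods is an assumption.
expected_dividend = 0.24
periods = 15

for remaining in range(periods, 0, -1):
    fair_price = expected_dividend * remaining   # value of all dividends left
    print(f"{remaining:2d} periods left: fair price = ${fair_price:.2f}")

# Any sustained trading well above this declining line is, by
# construction, a bubble.
```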

Admittedly, there are many artificial features of this market which don't seem to replicate reality. The amount of money is low (one can only expect to earn a few dollars), so there is little at stake for the subjects. The experiment is short, lasting only an hour. There is an artificial end to the trading, which can cause rash trading just before the close. And subjects aren't allowed to talk to each other, restricting the flow of information more than in a real market.

But as it turns out, one of the biggest artificial features is that the subjects are all equally inexperienced. What happens when the same group repeats the whole experiment once more with the same market conditions? Well, it turns out, on the second try, they anticipate the bubble, try to ride it up some of the way, and then cash out before it bursts. This causes the bubble to happen sooner and be smaller, and to be followed by a long period of stability. When they repeat it again, the same trend continues, with the bubble happening even sooner and being even smaller. Replicate the experiment enough times and you'll only have tiny bubbles and tiny pops, probably happening multiple times in the one-hour session. But no one would call these bubbles anymore. This would just be the normal market fluctuation that a rational model would predict.

Behavioral economists sometimes fail to distinguish bad decisions due to irrationality from bad decisions due to poor information. It's not surprising that people don't maximize self-interest when they rely on limited and sometimes incorrect information to make their decisions. These experimenters had tried to reduce the effects of limited information by giving accurate and constantly updated calculations of expected dividends. But the subjects still had little to no information about how markets behave, and, most importantly, about how this market would behave over the course of the hour. This experimental market has many unusual features, so knowledge of stock markets or futures markets might not translate into an ability to profit from this tiny experimental market. But real investors in the real economy do know how various markets behave, and they tend to stick with markets that they understand. Thus, the behavior of the subjects after a few tries unsurprisingly best reflects the behavior of real markets.

But there's another little detail that the experimenters revealed. When they put subjects experienced in one set of parameters through the experiment one more time, but this time with a different set of parameters, bubbles appeared all over again. The bubbles weren't quite as bad as those of subjects doing it for the first time, but the subjects did overall perform more poorly when they had to readjust to a new market system.

Thus, the conclusion seems simple: investors inexperienced in the behavior of a particular market will tend to create bubbles. So when federal regulators constantly tinker with the market, they force investors to constantly relearn the game. This includes the actions of the Federal Reserve, the Treasury and the lawmakers. They are all constantly creating new parameters with which investors have little or no experience. In addition, these new rules and decisions don't come out of the blue, as they do in the experiment. Investors try to anticipate the next move of the Fed or Congress and respond in advance, which further complicates the markets and makes them more unpredictable for investors, since now they're trying to second-guess the regulators and second-guess each other's second-guessing.

These insights are consonant with the Austrian theory of the business cycle, which better explains why bubbles begin when they do and how they get started. The Austrians explain that credit expansion leads to the appearance of prosperity, which distorts the usual economic signals, triggering excessive, unsustainable investment. With the insights from these experiments, I think we can see another factor contributing to how bubbles get out of control. Cheap credit and the resultant boom create novel conditions to which even experienced investors struggle to adapt, and the cheap credit and apparent abundant prosperity attract inexperienced investors, who are prone to tulipmania and herding. Add to this ideas like Robert Higgs' concept of regime uncertainty and you can see how intervention, whether well-considered or ill-considered, can be counterproductive by its very nature.

Politicians frequently characterize the economic system as a car that needs to be tinkered with, so that it runs more smoothly, or needs to be jump started when it slows down. But the economy is much more like a living breathing thing, filled with sentient, dynamic cogs that adapt to the conditions around them. Politicians are not tinkering, but performing surgery on the economy. Politicians need to learn to set down the scalpel long enough to let the wounds from their last intervention heal before they start to assume that the economy is dying because it is inherently flawed.

Thus, the simple policy recommendations to our lawmakers and regulators would be: do nothing, preferably even try to scale back on what you're already doing. The economy needs to heal.

Monday, November 24, 2008

Does capitalism need cheap labor?

I heard recently from someone that one of the things wrong with capitalism is that it depends upon cheap labor. They were arguing for some form of democratic socialism, with the workers owning the company they work for. I won't talk about that, focusing instead on this idea that capitalism depends on cheap labor.

On the face of it, it is in a sense true. But we need to remember what "cheap" means. It is a relative term, so just as there will always be those who are less wealthy, and thus a group of the relatively poor, so there will always be people with lower skills and qualifications who are to be considered "cheap labor." But the person in question was making more of a claim than this. The idea is not just that the labor capitalism requires is relatively cheap, but that it is at what is indisputably a very low wage, barely enough to live off of. The evidence for this is the way that, in wealthy countries, as the workforce becomes more highly paid, jobs are moved to cheaper locations, and cheap factories are opened that pay "slave wages." Of course, the people aren't slaves: they take the job voluntarily and generally make enough to subsist on, with a consistency of income that makes it considerably more attractive than other options (considering how bad the conditions in a sweatshop are, I leave it to you to imagine these other options). But, to be fair, their wages are far from enviable, and do not give them an attractive livelihood. The question is, does capitalism depend on this cheap labor?

Let's look at a similar case. I have a job and need to keep myself clean and presentable, which involves daily bathing. In my apartment I have a shower and a bath. I use the shower, but do I need the shower? I could bathe solely using the bath, but I prefer the shower because it's quicker, more convenient, conserves more water, etc. I use the shower because it's there, not because I need it. Similarly, the companies that use cheap labor may not need the cheap labor, but rather simply use it because it's there. One fact that suggests that capitalism doesn't need cheap labor is that companies are not drawn to the places with the cheapest labor, since these would probably be in some of the poorest African nations. Those countries are too risky: unstable government, corruption, and weak laws and law enforcement. Companies seek out places that are stable and reliable and that also have cheap labor, but certainly not the cheapest labor they can find.

Even more damaging to this position is a recognition of the harm that cheap labor can do. It can be seen in the Roman empire, which had ample amounts of cheap labor in the form of slaves. But their slaves discouraged innovation. If you want to develop a new form of automation that would require fewer man-hours, this requires time, investment, resources--in short, long-term risk. But if you've got slaves to give you all the man-hours you need, why delve into the risks of innovation? The same goes for cheap labor nowadays. If the labor were expensive, the relative advantage of developing new technology to do the work faster with fewer man-hours would be much higher, so there would be faster increases in productivity and overall wealth. If there are abundant supplies of cheap labor, it's not worth innovating as much. Cheap labor creates a disincentive to innovate. Capitalism admittedly doesn't need innovation any more than it needs cheap labor, but in the long run innovation is a key source of wealth for all, and something capitalism is particularly good at encouraging and taking advantage of.

Of course, in the long run, free trade will tend to increase the wealth of these cheap laborers and essentially force greater innovation and less reliance on manpower. But of course, that's the long run--years, in fact generations, into the future. Most critiques of capitalism along these lines seldom consider the long run and tend to seek out short-sighted, politically exigent solutions. And that is something that capitalism definitely does not need.

Tuesday, November 18, 2008

Do the wicked prosper?

Our moral sensibilities can sometimes be rattled by the thought that wicked people prosper. We may even express a nihilistic execration that things are not right with the world, that there is something fundamentally wrong with the way things are. But do the wicked prosper? I don't mean to ask whether any wicked people ever prosper, since there are numerous examples of wicked persons prospering, but rather whether we see a general pattern of wicked people frequently prospering.

I think the appeal of this idea is clear. I remember reading that there was a turn in pre-Christian Jewish theology, when a philosophy of apocalypticism rose to prominence. The problem faced was, how could a just God permit the world to be so bad? The answer was: we're only in a transition period before the apocalypse. Evil powers now rule, but after the apocalypse, good will reassume power and things will be hunky dory. That's apocalypticism in a nutshell, or at least one version of it. This leads to the assumption that if evil powers are ruling, anyone who is in power is therefore evil. And it can also lead to the inverse assumption, that if you aren't successful, it's because you're a good person--a good consolation prize if your life is constant struggle and you're barely subsisting and enduring trying conditions. Correlatively, we comfort ourselves by saying that we aren't as successful as we'd like to be because we're not willing to compromise our values. Those who are successful have compromised their values and are thus bad people. In other words, it's a good way to deal with envy of the successful, by denigrating them and lifting oneself up higher.

Still, I want to know, do the wicked prosper? Now, when I think wicked, two models come to mind: terrible murderous political or military leaders, and serial killers. If we focus on serial killers first, we notice that they generally are not models of prosperity. Go through the rolls of known serial killers and you will find lots of people with low-pay, low-skill jobs, who are poorly educated and not terribly well off. There are admittedly famous aristocratic serial killers, like Elizabeth Báthory and Gilles de Rais, but these aren't models of success, rather examples of people born into wealth. There are exceptions, of course. H. H. Holmes comes to mind, a man who lied, cheated and stole considerable wealth in his short life, in addition to killing a whole bunch of people. Of course, that he died in his mid-thirties should remind us that this is a risky line of work and not one we should pursue if we want to retire. Admittedly, not all serial killers are low class, but even the examples of doctors who prey on their patients don't really strike us as models of success. I think the reason is simple: their consuming desire to gratify this murderous appetite eclipses the possibility of success. One theory of the identity of Jack the Ripper is that it is Walter Sickert, the British Impressionist painter, but to imagine that one could maintain a career as a skilled painter and still have time for multiple killings without being caught seems like a lot to do (admittedly there are many other more serious problems with the Sickert-Ripper theory, but I leave that to others).

If we now focus on evil leaders, they of course are successful. By definition, as people who have attained a prominent enough position to lead many other people, they are successful. The question is, are they successful because they are wicked, or rather wicked because they are successful? Power is known to corrupt--it provides temptations, and provides the freedom to indulge in desires that may have previously been barred. Thus, many wicked leaders may simply become wicked by their success. The wickedness doesn't help them succeed. And insofar as they indulge their desires, that indulgence must again be a hindrance to further success.

And I think we also tend to overestimate the wickedness of evil leaders. Hitler may have killed millions, but a) he never did it himself and b) he did it out of a desire to actualize certain ideals for Germany, to contribute to its greatness. To accomplish something as terrible as the Holocaust requires a great many wicked people, of whom Hitler is only one. Yet I find H. H. Holmes to be a far more wicked person. He killed people himself, he took pleasure in it, he liked to make people suffer, and he did it for no high ideal, but merely for the pleasure of it.

The qualities that make a person successful are things like determination, commitment, cleverness, resilience, self-control, and interpersonal skills. Insofar as one's wickedness may undermine relations with others, insofar as one's wickedness involves indulging in twisted desires, and insofar as one's wickedness puts one on the wrong side of the law, it certainly must be more of a hindrance than an asset. Admittedly, there are wicked people that do prosper, which may still upset us, but it is more the exception than the rule.

Thursday, November 13, 2008

Curious lessons learned from numbers

To talk about something different, this post will reflect on the meaning of some numbers from my younger days. The first incident is from high school. There were three students in my graduating class who essentially tied for valedictorian. All three had gotten straight As in every class and had taken six AP courses. My school was on a 4.0 system (A=4, B=3, etc) and AP classes were weighted, so that in AP classes A=5, B=4, etc. If you get a 4 for all your other classes (at least 158 credits) and a 5 for your 30 AP credits (each AP class was worth 5 credits), then your GPA will be about 4.16. So they should all three have tied for first, which they almost did, but Mike came out slightly ahead, Matt came in second and Kirby third.

Here's the difference. Mike, the valedictorian, took four AP classes as a senior (that's 20 credits), but all seniors had to take 22-24 credits. So he could have taken a three-credit graded class to give him 23 credits. But a better idea was to take an easy two credits. He did two student-assistant credits, which basically amounted to being the personal assistant of a teacher. The class was Pass/Fail. He would spend two hours a week doing basic office chores, no homework, no studying, and as long as he did what he was supposed to, he would get a Pass. Taking two easy credits turned out to be a shrewd choice. Pass/Fail courses aren't counted towards GPA, and he in fact took a number of Pass/Fail courses like this in order to minimize his workload. Now, let's think about this. If your GPA is about 4.16, then every A in a non-AP class actually lowers your GPA. Thus, if he had taken a regular graded class, it would've hurt his GPA, even if he got the best grade he possibly could. Mike was good with numbers, and he actually sat down and calculated the minimum number of credits necessary to graduate (it turns out that if you took the minimum number of credits each semester--26 for freshmen, 24 for sophomores, 22 for juniors and seniors--you would achieve the minimum number of credits necessary to graduate, which is 188), and he knew the maximum number of Pass/Fail classes a student could take and still graduate. In order to ease his workload (which admittedly was a lot, since he was taking six AP courses) he pushed towards the minimums.

But Kirby, who came in third, was basically the hardest working, took the most graded classes and got the most As, and probably deserved to be valedictorian. But the numbers said she came in third. I guess things like this don't matter in the long run, but it is curious. I remember I was in Calculus with Mike, and he, Brian and I would complain about overachievers. The problem with overachievers is that they raise the standards. They make it harder for the rest of us. The ideal is not to be an overachiever, but to do just the right amount of work, just enough to get an A. "Nothing to excess," as the ancient Greek maxim says. I never thought such a philosophy would pay off.
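
A quick check of the arithmetic, using the credit and grade numbers from the story above:

```python
# Weighted GPA with the numbers described above: 158 regular credits of
# straight As (4 points each) plus 30 AP credits of As (5 points each).
regular_credits, regular_points = 158, 4
ap_credits, ap_points = 30, 5

gpa = (regular_credits * regular_points + ap_credits * ap_points) \
      / (regular_credits + ap_credits)
print(round(gpa, 3))   # 4.16

# Add one more regular graded class (3 credits of A = 12 grade points):
gpa_with_extra_a = (158 * 4 + 30 * 5 + 3 * 4) / (158 + 30 + 3)
print(round(gpa_with_extra_a, 3))   # 4.157 -- a perfect A lowered the GPA
```

Any graded class weighted below your current average drags the average down, which is exactly why the Pass/Fail trick worked.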

The second incident goes even further back, to my middle school days. In Social Studies we learned about the stock market, and, as a bonus, there was some sort of national competition that we could participate in. It's for students, to see how well they can do at picking stocks. The competition basically gives you a set amount of money; of course, they wouldn't literally give us money, but, you know, pretend to give us money. Then we buy shares of stock with that money, and after a set amount of time, whoever has made the biggest profit wins.

Now, there are many good strategies for investing, but generally, over the long run, the best strategy is to invest in large, stable, reliable companies that will produce stable, reliable long-term gains--namely, blue chips. This is the strategy I followed (per my father's advice), and it's sensible investing, but not for this game. There's no real money being made. The prize is only for the winner. It's not a good idea to play the odds and settle for modest profits, since you don't get to take the profits home with you. The best choice is to buy very risky stocks: buy a whole lot of shares of low-priced start-up companies. You'll make a lot of money with only small increases in price. Of course, you're taking high risks and genuinely risk losing large amounts of money, but, as I said, this is only pretend money. There's no penalty for losing money, and the gains only come from winning the most, so you have nothing to lose by taking big risks in this system. If you were to take the same risks in the real world, the losses would be real. I didn't figure this out at the time, of course, only much later. But maybe I'll have a kid someday and I can teach him how to game the system. I guess the lesson this contest teaches, whether it wants to or not, is that when there's nothing to lose, it's sometimes best to take big risks. Or at least that's what the smart ones'll figure out. And what will they become when they grow up?
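
Here's a toy Monte Carlo of that winner-take-all logic. The return distributions are invented for illustration: both portfolios have the same expected return and differ only in volatility:

```python
# In a winner-take-all contest only the maximum matters, so a
# high-variance strategy beats a steady one far more often than its
# expected return alone would suggest. All numbers here are made up.
import random

def blue_chip():     # steady: ~5% gain, small swings
    return random.gauss(1.05, 0.05)

def speculative():   # same 5% expected gain, wild swings
    return random.gauss(1.05, 0.60)

trials = 100_000
wins = 0
for _ in range(trials):
    field = [blue_chip() for _ in range(9)]   # nine sensible players
    if speculative() > max(field):            # one risk-taker
        wins += 1

print(wins / trials)   # well above the 1-in-10 of a fair draw
```

The risk-taker also posts the worst losses far more often, but in pretend money that costs nothing.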

Sunday, November 9, 2008

Is the mind material?

In De Trinitate (On the Trinity), Augustine tries to refute the position, held by some of his contemporaries, that the mind is just another physical body, like air or fire or some other physical thing (Book X, ch 10, paragraph 15-16). That the mind is material is a position that had been held previously by the Greek atomists Democritus, Epicurus and presumably Leucippus. The popularity of Epicureanism in Rome makes it unsurprising that there would be contemporaries of Augustine who would avow such beliefs. But for Augustine, who wanted to argue for a non-material part of the self, in particular to justify a soul that is imperishable and can ascend into heaven after death, this was unacceptable.

Augustine first asserts 1) the mind is known to itself. But then he notes 2) to truly know a thing is to know the substance of a thing. 3) Thus, we can say that the mind knows the substance of itself. In addition, 4) to know about a thing is to be certain about it. 5) Thus, the mind knows its substance with certainty. We notice 6) that the mind is not certain that it is a body like air or fire or any other body, or even a property of a body. Thus, 7) since the mind knows its substance with certainty and is not certain that it is a body or a property of a body, it can't be a body or a property of a body.

The big problem with this argument is the premise that the mind knows itself. In the vague sense of "know" that we usually use, this seems uncontroversial. The mind is that by which we know other things; thus it would seem that the thing which knows would automatically know itself. But Augustine very precisely defines knowledge as knowing the substance and knowing with certainty. Thus, if we plug in this definition, when he makes his initial premise that the mind knows itself he is actually saying, "the mind knows its substance with certainty." Step 5, as I labeled it, is not a conclusion from premises; it's merely a restatement of the initial assumption using the given definitions. To say that the mind knows its own substance with certainty is a dubious claim, which I think most people would want to deny.

"Know" in the opening premise is used as a weasel word, used to present what seems like an uncontroversial premise. But then "know" is redefined to present a more radical premise, which Augustine needs in order to prove his final point.

Wednesday, November 5, 2008

More democratic

I believe in democracy as a way to make decisions even more than the framers of our Constitution did. The process they set up is one of limited democracy, with only direct popular election of congressmen. Election of presidents is by the states (it used to be that many states didn't use direct popular election, but picked electors in their state legislature), and most other positions are appointed without any popular involvement. I'd like to see things more democratic. Perhaps the reason that we have such low turnout at elections (even in yesterday's historically high turnout) is because voters recognize how undemocratic the system is. But making things more democratic is not so simple as increasing the number of things we vote on; it also matters how you vote and what voting does. So how could we make a system that is more democratic?

The first thing to get rid of is indirect election. The most unpopular form of indirect election is the electoral college. Put to a vote, it would probably be promptly eliminated. Don Boudreaux at Cafe Hayek makes an interesting comment about the Electoral College: that it is inconsistent for people to complain about how the electoral college is not democratic enough and yet not complain about other ways in which we are disproportionately or indirectly represented, like the Senate. I, on the other hand, harbor no inconsistency. I'm not too worried that the electoral college distributes votes disproportionately or is indirect. My biggest problem is that, by requiring a candidate to win a majority of electors and not just a plurality, it pushes us towards a two-party system. More choice is definitely more democratic, which is an even better reason to get rid of the electoral college (admittedly, there are bigger barriers than just the electoral college to third parties). Even better would be to get rid of all indirect representation. Better to have the people vote on new laws and not on congressmen. I think many would object that the average voter can't knowledgeably or intelligently vote on bills, being not as well informed as professional lawmakers. But I think people completely overestimate the congressmen's knowledge of the bills they vote into law: they don't read most of them, they don't understand the repercussions of most of them, they sort of know the basic gist of the laws, but they primarily take into account which bills will best cultivate the image they want to present to their voters and thus are most likely to secure reelection (or which will best favor special interests who might benefit their campaign). At least direct voting would only permit laws that the people wanted, and it would certainly slow the buildup of the arteriosclerosis of excessive law. The big losers would be the lawyers.

Of course, the downside of this is that people do make bad decisions, and we're sort of stuck with bad laws once we naively vote them into place. A good way to improve this would be to permit changeable votes. If I voted for a law or an official but changed my mind, I could withdraw my vote, and if they lose enough votes, then they're eliminated. Vice versa, I could also switch my nay to a yea, to bolster a law or official I had opposed. We could even somehow integrate the influx of newly registered voters, so that they could vote for or against laws that have already been passed, and thus add support to a law or increase the likelihood of its rejection. That would make it much more democratic.
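
A minimal sketch of what such a standing, changeable tally might look like (the class and method names here are hypothetical, just to make the mechanics concrete):

```python
# Each voter holds exactly one current position per measure and may change
# or withdraw it at any time; a law stands only while yeas outnumber nays.
class StandingVote:
    def __init__(self):
        self.positions = {}                    # voter_id -> "yea" or "nay"

    def cast(self, voter_id, position):
        self.positions[voter_id] = position    # first vote or changed mind

    def withdraw(self, voter_id):
        self.positions.pop(voter_id, None)     # vote removed from the tally

    def passing(self):
        yeas = sum(1 for p in self.positions.values() if p == "yea")
        return yeas > len(self.positions) - yeas

law = StandingVote()
law.cast("alice", "yea"); law.cast("bob", "yea"); law.cast("carol", "nay")
print(law.passing())     # True: the law stands
law.cast("bob", "nay")   # bob switches his yea to a nay
print(law.passing())     # False: the law falls as support shifts
```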

Robert Lawson at Division of Labor also makes a comment about what he doesn't like about presidential voting: if you're in the minority, you're forced to go along with the majority. He contrasts this to a group of people voting on which restaurant to go to, which is better because you've always got the option of opting out if you really don't like the results. In a presidential election you can't opt out if the outcome isn't to your favor. I think this issue could be addressed. With elected officials, the simplest way would be a form of power-sharing. Let's say Obama gets 51% of the popular vote. He would thereby get 51% of the power, maybe McCain would get 47% of the power, and the other 2% would go to various minor candidates. Dividing power wouldn't be too difficult for many duties. And if, in addition, votes were changeable, and people who didn't vote had the option of adding their vote later, the power of officials could vary over time. How you might do the same thing with laws that are voted on by the people, I don't know. Maybe you'd have to settle for making most of them all-or-nothing, for lack of a better solution.
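
As a toy illustration of the power-sharing idea (the 51/47/2 split is the example above; the simple weighted-majority decision rule is my own simplification):

```python
# Officials carry their vote share as a weight; an executive decision
# passes when the weights in favor exceed one half.
power = {"Obama": 0.51, "McCain": 0.47, "others": 0.02}

def decide(positions):
    """positions: official -> True/False on some executive question."""
    weight_for = sum(power[name] for name, yes in positions.items() if yes)
    return weight_for > 0.5

print(decide({"Obama": True, "McCain": False, "others": False}))   # True
print(decide({"Obama": False, "McCain": True, "others": True}))    # False
```

With changeable votes, the weights themselves would drift over time, so an official's effective power would track their standing support.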

One of my favorite parts of The Moon is a Harsh Mistress by Robert Heinlein is when the characters, at the beginning of a revolt for lunar sovereignty, think about how to form a new government in ways that the American Founding Fathers didn't. It's interesting to think about. I would make it far more democratic. I don't know if I'd go to the absolute extreme of direct democracy on everything, like people voting on Supreme Court cases. In this day and age, with the technology we have for voting, actually putting such measures into place, even in a country as vast and populous as ours, is possible, and we could very well go even further than the direct democracy of the ancient Athenians. Admittedly, it would have many flaws, but the real question is, "is it better in the aggregate?" I also don't take into consideration certain questions, like "is it more democratic to have no government at all?" or "should I participate in such an undemocratic voting procedure as it stands?" or "is the free market more democratic than this elective, legal process?" Maybe another time.

Saturday, November 1, 2008

Duties in Kant continued

I talked about Kant's perfect and imperfect duties, and the determination of duties by evaluating the consequences of making a maxim into a universal law, in my last post. I want to look at the determination of duties again. John Stuart Mill's critique of Kant in "Utilitarianism" was simply that Kant's rules couldn't give us any reliable criterion for deciding whether even the most obvious injustices were immoral or not. We could always justify them on the grounds that universalizing them would lead to no conceptual inconsistency. As I said before, the criterion for determining whether something is a duty is to make it universal and see the consequences--namely, see what would happen if everyone followed it as a universal rule. I already pointed out an obvious problem case: murder. If I want to evaluate the maxim, "I should kill anyone who interferes with my desires," all I have to do is imagine what would happen if everyone followed this maxim all the time. What would happen is a lot of people killing one another, a skyrocketing death rate. People would take greater precautions to avoid death, but nonetheless economic activity would grind to a halt, life would be shorter and far riskier, survival would be tough, and so on. Maybe Kant could say that there would be a conceptual inconsistency between risk and safety, but this doesn't work, since that conceptual inconsistency is inevitable no matter what. I can still die in many accidental ways that would be nobody's fault even if no one ever killed anyone, as well as from all the normal diseases and such that kill most people. In short, there's no conceptual inconsistency. Thus, I can't say that there's anything wrong, according to Kant's method, with the maxim, "I should kill anyone who interferes with my desires."

It appears as if the only case in which his method actually works is the lying promise, but, as I suggested in my last post, a lying promise is already conceptually inconsistent. Lying and promising are opposites and thus already conflict. You don't need to universalize it to bring out the inconsistency. And vice versa, if there is no inconsistency within the singular maxim (for example, "I should kill anyone who interferes with my desires"), then there'll be no inconsistency resulting from the consequences of making it a universal law.

Even looking at the imperfect duties, if I take many obviously immoral acts and try to justify them with Kant's method, I find no problems. Take eugenics, for example. If I decide to test the maxim, "I should sterilize all substandard individuals," and then universalize it, I see no problem. It doesn't seem to lead to any empirical inconsistencies. Maybe the criterion of "substandard" is arbitrary, but I could surely use more precise criteria, like IQ, criminal record, BMI, whatever. Heck, if I took it one step further and made the maxim, "I should eliminate persons of inferior races," then again there is no inconsistency, and I could justify this with Kant's ethics.

Which of the Nazi war criminals was it who said he'd always lived according to Kant's Categorical Imperative? I guess Mill was prescient in recognizing that Kant's ethics could be used to justify even the worst injustices. Admittedly, the categorical imperative of "treat others as an end and never simply as a means to an end," actually is useful, but can his method even justify this maxim (he claims it can), and is there anything else useful in his ethics?

Tuesday, October 28, 2008

Perfect Duties in Kant

In the Groundwork of the Metaphysics of Morals Kant presents a hard and fast standard for how to determine whether something is a duty. There are duties which apply in all cases, "perfect duties," like, for example, never lie. Then there are duties which one should follow unswervingly but can choose when to apply, "imperfect duties," like cultivating one's talents. The measure of a perfect duty is that when you make it into a universal law, it leads to a conceptual inconsistency, leading you to conclude that you should never do it. The measure of an imperfect duty is that if you make it into a universal law it would somehow lead to an empirical inconsistency, in that it violates your natural instincts. I'll explain the empirical inconsistency first, since it's a bit less clear. We'll use his example, cultivating your talents (4:423). Kant basically says that you couldn't possibly spend your whole life not cultivating your talents, since it would violate your natural instincts. It seems like, if you were to try to do this, something deep inside of you would scream that it's wrong or that it's undesirable or substandard for a human or something like that. Though Kant has greater confidence in the rationality of our usual moral instincts than I do, and he and I would probably disagree on the details of which things are imperfect duties, I nonetheless think this seems like a sound idea. There are certain habits that just seem less admirable or worthy of respect, and we should try to do their opposite when we can--like cultivate our talents, help others, share, learn, create, etc.

The one that seems craziest to me is the perfect duty and its conceptual inconsistency. The example he uses is the lying promise (4:422). If I'm tempted to make a promise I know I can't fulfill (like borrowing money I can't pay back), should I make such a promise? Kant says that we have a perfect duty not to make a lying promise, because if we were to make it a universal law that all people should always make lying promises, then no one would be able to promise anything, because people would know it always to be a lie. We would have a conceptual inconsistency between lie and promise, which shows us that one should never make a lying promise.

The first limitation one should notice is that perfect duties can only be "thou shalt nots." The conceptual inconsistency is the measure of a perfect duty, but it only shows when something should never be done. The next thing is that the conceptual inconsistency doesn't work well for many things that we would assume to be perfect duties, like don't kill, for example. If I were to make it a universal maxim that I should kill anyone I don't like, then there would be a lot of people getting killed. But where's the conceptual inconsistency? Also, why do you even need to universalize it to see the inconsistency? Isn't a lying promise already inconsistent? The other thing is that this rubric for determining what is a perfect duty is based on seeing the consequences of universalizing a rule. But consequences are uncertain. This is one of the reasons why Kant strays away from consequentialism, why the morality of an act is based on the act itself, not its consequences. Also, the consequences that Kant envisions as the result of universalizing these maxims seem rather superficial and static. If we were to universalize the rule that people should make lying promises whenever it is convenient, then wouldn't people try to find ways to make contracts without having to rely on people's unreliable promises? Sounds something like the world we live in, doesn't it? Don't people already avoid breaking many promises for selfish reasons? Do people keep promises with their friends out of duty, or because they don't like to take advantage of their friends and don't want to lose their friends? Do people avoid violating business contracts out of duty, or because they want to make future contracts with the same people, or because they're worried about the legal repercussions of violating a contract? We already know that promises can be unreliable, so we invent techniques and create institutions to force people to keep their promises. The problem is far more often reneging on promises than making them falsely to begin with. People are creative and dynamic, and Kant's consequences of universalizing maxims assume people are static and simple.

The ground of the perfect duties seems quite flimsy. Maybe one could accept the imperfect duties, and the almost inevitable subjectivity of them, but to think that reason gives us access to universal rules is foolish.

Friday, October 24, 2008

Love desires the good

In Plato's Symposium, after a series of speeches praising love, Socrates comes along and, in dialogue with Agathon, demonstrates that love is neither beautiful nor good (200a-201c), contrary to what basically all the previous speeches had assumed. How does Socrates go about proving that love is not beautiful? Well, he first sets up the assumption that things desire what they lack. In other words, if I have a hunger for food, it's because I lack food in my stomach. Or if I desired friends, it would be because I don't have any friends (because philosophers are uncool). So, all desire is due to a lack. Well, we might question this and say, "don't people who have stuff desire to keep their stuff? Like a guy who has a car and desires to continue to have his car," or "don't people who have stuff sometimes desire to have more stuff? Like a guy who has money but wants more money." Well, Plato says, these both still express a lack. One is due to a future lack--the guy lacks a car in the future (since you can only have things in the present)--and the other is due to a lack in terms of shortage--the guy doesn't have enough money.

Socrates then defines love as a desire for beauty. If we define "beauty" broadly enough to include physical beauty as well as all of those other non-physical and intangible things we find attractive (like, perhaps, personality, sense of humor, kindness, intelligence, gobs of money, and so on), then I think most of us could accept this. True, we get other things out of love, like companionship or the feeling of being loved, but let's not worry about this. It seems like a good definition: love desires what is beautiful. But if love desires the beautiful, and things only desire what they lack, then love lacks beauty. And since beauty is a form of goodness, love isn't good either. That's a striking conclusion.

Oh, but not so fast, Plato. Something sneaky is going on here. I think the technique Plato used here might appropriately be called a bait and switch. He has told us that desire entails a lack. In demonstrating this, he effectively responds to an obvious counterexample, that people can desire things they have. This shows us that there is more than one type of lack: 1) where you simply don't have the thing, and 2) where you have it but don't have it (either because you don't have enough of it or you lack it in the future). But then, when he turns around and speaks about love desiring beauty, he forgets about the second type of lack. In other words, it's possible that love desires beauty because love is beautiful but lacks future beauty, or it's possible that love is beautiful but lacks enough beauty. In fact, since things can only possess what they have in the present, all things can have a future lack. Thus, anything could potentially desire what it has. I guess Plato could claim that since love is timeless it doesn't make sense for it to lack things in the future, since time doesn't apply to it. Still, we could say that love is beautiful but desires to be more beautiful. Many beautiful people desire to be more beautiful. It's not unheard of. He could probably find a way out of this problem too (something about how, as a timeless thing, love doesn't admit of more or less beauty, or something like that), but now we're just creating ad hoc assumptions to preserve a pre-established conclusion. That's not reasoning. Oh well, I guess love might be beautiful after all.

Sunday, October 19, 2008

Stentor's voice

In the Politics of Aristotle, at the end of Book VII ch 4, Aristotle sets down limits on how big and how small a well-functioning state should be (1326b7-25). He thinks the ideal population is at least enough people to provide for the necessities of life, and the ideal land area is no larger than can be taken in at a single view. Aristotle sees practical limitations to an oversize state: first of all, that foreigners and resident aliens could take advantage of the rights of citizens; second, that there is a limit to how many people can be communicated with at once. In Aristotle's day, communication to a large audience was mostly through public speaking, since print was too expensive for bulk communication. And since there was no means of amplifying speech, one had to rely on the power of one's voice. If there are too many people, then those in the back won't be able to hear you, unless you have Stentor's voice (Stentor was a herald for the Greeks in the Iliad [cf V, 780], famous for his strong voice). The minimum population is simply enough for self-sufficiency, presumably so you don't have to depend on neighbors for basic necessities, like food.

The first part, about foreigners, seems to continue to have relevance today. For example, in the US, foreigners can come here, have children, and then send them to free public schools, and they can show up at the emergency room and get free medical treatment (if they just never bother to pay the bill), as well as being able to take advantage of everyday things like free roads, military protection and fire protection. Even such innovations as ID cards and large databases to keep track of citizens don't settle this problem. The country's vast size makes anonymity possible, a boon for illegal immigrants in particular. Aristotle's second concern seems at first blush outdated, since we have technologies of mass communication. But is it? Can a presidential candidate really address the needs of tens of millions of people? Maybe candidates resort to vagueness and double talk and are less clear about their platforms because they really can't, and they want to leave it up to the voters to fill in the vagueness. Certainly presidents can't represent that many people, nor can federal representatives, maybe not even local representatives. Trying to address so many different needs is impossible, and thus one has to resort to blanket solutions that help some people pretty well, serve many people poorly, and harm many others. Maybe the number of people someone can speak to at once and the amount of land one can take in at one viewing is a good upper limit.

Overlarge states can be unwieldy. Rome had to be broken up into four units under Diocletian to keep from collapsing, and eventually settled into two distinct states. Many a large empire has simply fallen into decline, heading quickly towards collapse (Akkadian, Hittite, Persian, Roman, Byzantine, Holy Roman Empire, Ottoman, USSR). Even today, most of the places with the highest standard of living and the highest level of prosperity per capita are small states (places like Hong Kong, Denmark, Norway, Switzerland, Liechtenstein, Andorra, UAE, Singapore and so on).

The lower limit that Aristotle sets, being able to be self-sufficient, seems unhelpful. An individual person can be self-sufficient, making all their own food and clothes and such. In addition, an individual would be much poorer for being self-sufficient. I can afford my own apartment with lots of food and clothes and books and a computer, etc., because I don't have to make them all, but can trade my services for them. The same applies to the state. A state is more prosperous by trade. Many of the examples I listed of prosperous small states are preeminently un-self-sufficient, like Hong Kong, with no natural resources whatsoever to offer and no agriculture. Trade has the additional benefit of reducing the likelihood of war, since countries mutually dependent on each other through trade are disinclined to start pointing guns at each other (is it a coincidence that the two countries the US is currently at war with are countries it has been openly restricting trade with, imposing economic sanctions on Iraq for years, and trying to prevent the only substantial trade with Afghanistan, recreational drugs, especially heroin, from entering the country?). I guess as soon as we start to ask things like "could there be a state with only one person, or two people, or four people?" it seems plausible that there is a lower limit to the size of a state. If you're too small it seems unnecessary to declare yourself independent. So what would the minimum size of a state be?

Wednesday, October 15, 2008

Mixed Pleasure in Plato

In Book IX of Plato's Republic, Socrates makes a distinction between mixed pleasures and pure pleasures which is somewhat curious (583c-587a). A mixed pleasure is a pleasure that is really only caused by the cessation of pain; for example, if you've been holding it for a long time and then finally get to go to the bathroom, it can be very pleasurable. Plato sets up three levels: pain, a middle state of neither pleasure nor pain, and pleasure. Mixed pleasure is reaching the middle state after previously being in pain. On the other hand, there is pure pleasure, which is pleasurable in itself. Plato thinks the only things that would fall under this category are reason and virtue. A person with unclouded access to true good and true virtue alone can experience pure pleasure. Mixed pleasure really is just the middle state between pleasure and pain, which only seems pleasurable by contrast to pain, while pure pleasure is actually true pleasure and really is only something that the rational part of us will reach. Thus, all bodily pleasures are necessarily relegated to the realm of mixed pleasure, and thus only occupy that middle state between pleasure and pain, seeming pleasurable only because they are preceded by pain. This requires one to assume that everything in the realm of the bodily is pain, except those treasured moments when we move up into the middle state and only think it pleasurable by comparison.

As one might imagine, this is a controversial position. Even just looking at it on the surface, before reflecting on whether it is correct, it seems like Plato's general philosophical values (praising the rational and denigrating the bodily) have clouded his judgment and tainted his whole interpretation of the world. Plato, of course, thinks that paying too much attention to the physical and turning away from the purely rational is the road to ignorance, but he is here showing that paying too much attention to the rational can also lead to ignorance.

Admittedly, there are some pleasures which frequently do follow on pain, like the pleasure of relieving oneself, or the pleasure of eating when hungry or sleeping when tired, but certainly not all are like that. Unless one frequently mixes sex with spanking or whipping or torture or something like that (maybe Plato was really into BDSM), orgasm is usually reached after considerable pleasure. Unless Plato is going to argue that an orgasm is some sort of rational pleasure or some form of true virtue, I think he might have a problem. Even eating is pleasurable when you're not hungry. And going to the bathroom and sleeping have undoubted independent pleasure beyond just the cessation of pain. Pleasure that is merely the cessation of pain is distinct and discernible from true pleasure to anyone who pays attention, and requires no special philosophical aptitude. Plato's motives for denigrating the bodily are clear, but this is a good case of being clouded by your values.

And the pleasure of thinking I'm suspicious of. It seems to me, that the only pleasure in thinking is in figuring things out, the pleasure of ending the pain of unsated curiosity, the pleasure of Eureka! moments, the pleasure of solving difficult problems; in short, the pleasure of overcoming a difficulty. I like it, but it has its limitations.

Saturday, October 11, 2008

What were we given Reason for?

In Kant's Groundwork of the Metaphysics of Morals, he presents an argument that we were given Reason for the cultivation of a good will (4:395-396). Kant had come to the conclusion that the only thing good in and of itself is a good will, since all the other supposed virtues (like courage, moderation, wisdom, etc) can be used for unvirtuous ends. In order to give Reason a part in the construction of his moral philosophy, he needs to prove that Reason is there for the purpose of creating a good will. He starts by arguing that any living thing is provided with no instrument (or tool or faculty) that is not best adapted to its end. In other words, if the end of my eyes is to see, they must be best constructed for the purpose of seeing. He then argues that our Reason, insofar as it has a practical component, can only be for the purpose of either preservation, welfare, happiness or the cultivation of a good will. Since the first three are best served by instinct, Reason therefore (by elimination) best serves the cultivation of a good will. Thus, it must have been given to us by our designer (God, presumably) to cultivate a good will. Thus, this is his plan for us, and we do best to abide by it, to develop our ethics using Reason.

The first big assumption Kant makes is that all parts of us are best constructed for their ends. This was a frequent assumption at the time before Darwinism, but it is simply not well supported by empirical evidence at all. Animals aren't that well constructed, and humans certainly aren't that well constructed. We are amazingly complex and well-designed things, but to go out and argue that we are the best designed for our functions is obviously wrong. We've got imperfections and vestigial features and genetic disorders and so on. This is enough to dismantle Kant's argument, but the next argument is more interesting.

He next argues that instinct serves welfare, preservation and happiness better than Reason. That Reason is not the best faculty for ensuring preservation seems dubious. Instinct has the problem of being inflexible and not adapting well to unexpected circumstances. Reason is dependent on past experience, but it has the possibility of creative and surprising responses. Kant might respond that Reason requires a large brain, which is expensive in terms of time and resources, suggesting it might not be best for survivability. We might respond that the most Reason-heavy species, humans, is thriving through a wide range of environments on this planet, suggesting that the added expense may be more than compensated by the increased adaptability. And we could make a similar argument for material welfare: Reason is better at maximizing material welfare for oneself and others.

But Kant also includes happiness as better served by instinct than Reason, and here he has some arguments. There is the argument that thinking too much about how to attain happiness can get in the way of happiness. Or that looking too closely at one's happiness can bring too much attention to its flaws. Or we might add to Kant's arguments that the ability to better consider alternatives increases the likelihood that you'll conceive of alternatives that are better than your current situation and be dissatisfied. In addition, Kant has to make the other side of the argument: that instinct would serve happiness better. If our instincts were specifically designed to lead us only toward things that would make us happy, then that certainly could be the case. It might not best serve material welfare or preservation, but perhaps instincts would lead us to greater happiness. Certainly, we don't seem to be a species specifically designed for happiness; we could be better designed for that purpose if it were our end, and in such a case Reason would be superfluous.

Ultimately, the full argument falls flat, since living things are definitely not best designed for their ends, and Reason does seem to serve preservation and material welfare better than instinct. But Kant does raise an interesting possibility: that instinct, and not Reason, is the best road to happiness. Could Kant in fact be right?

Tuesday, October 7, 2008

One Single Architect

At the beginning of Part Two of Descartes' "Discourse on the Method of rightly conducting one's reason and seeking the truth in the sciences" ("Discourse on Method" for short) he lays out an argument that a building constructed by one architect, or a city laid out by the singular vision of one designer, is far superior to one assembled from a hodgepodge of different designers or architects (CSM I 116, AT 11-12). What Descartes is after with this argument is to contend that a philosophical system is better when one starts from scratch and builds it all oneself. Descartes is responding to the Scholastic tradition, which approached philosophizing with a build-on-your-predecessors approach, in which argument from authority is an important part of the argumentation style. Descartes thinks his predecessors are unreliable, and so he wants to start over from the beginning, and he uses this argument as a further justification. The overall thrust of this section is that the guiding mind of one architect, whether an architect of buildings or of philosophical systems, is superior.

The problem is that empirical evidence doesn't bear Descartes' argument out. Let's take buildings first, comparing some museums (and we'll try to make the comparisons fair, so we're comparing apples with apples): the Metropolitan Museum of Art in New York, a single building that's been expanded through numerous additions so that it's the work of a hodgepodge of architects, vs. the Guggenheim Museum in New York, designed by Frank Lloyd Wright, all of a piece and definitely representing one vision. You may have your preferences between the two, but neither is clearly superior. Or, if you compare the Louvre with Gehry's Guggenheim in Bilbao, it is again inconclusive.

But let's look at more mundane examples of urban design: urban housing projects in the US vs. squatter communities in Brazil, India and Turkey. We're comparing apples with apples here: two kinds of communities from relatively equal socioeconomic classes. But the squatter communities are far superior: bustling, growing, rich with business and innovation, with a promising future. American urban housing projects, by contrast, are crime-ridden dens of poverty, drugs and despair, despite sitting in a far wealthier economy than those of Brazil, India and Turkey.

Or take literature. Epic poetry: the Homeric epics, compiled by Homer from extant myths in a bardic tradition and later refined through generations of spoken performance, vs. Virgil's Aeneid, written by Virgil, co-opting some Roman legends but adding many stories of his own. To me, there's no comparison; Homer is far better, though maybe some would disagree. Or movies: Casablanca, an extremely collaborative film with many contributors, vs. A Clockwork Orange or Raging Bull, both considerably less collaborative (films are almost inevitably collaborative, considering the complexity of creating them). With films it's easy to come up with tons of exceptional highly collaborative films, but good examples of less collaborative films are fewer.

But the real killer is scientific theories: science thrives by accumulation, and the body of knowledge in any field is the work of many minor researchers and theorists alongside a handful of great ones. Maybe a nice unified set of theories, like Freudian psychology, has aesthetic appeal, but it can hardly compare in therapeutic success, explanatory power and predictive accuracy with the hodgepodge of theories that now defines contemporary psychology.

The reason the hodgepodge constructions are superior is that there is a selection and elimination process going on. The bad is weeded out, and what is retained is retained because it is better. If one lives in a city with lots of old and new architecture, one can't help noticing that the older architecture is better. This is not because people built things better back then; it's because the stuff that's still around is the stuff worth keeping. The shabby wooden hovels that most people used to live in have been weeded out, so that only the nice stone and brick houses that are sturdily constructed with lots of detail work remain. Over time a city, or any sort of work, accumulates the worthwhile stuff and thereby improves. The best parts are retained, and over time these hodgepodge works will supersede even the best designed works. So, despite what Descartes says, we should be wary of the visionary architects of the future bearing grand plans.

Friday, October 3, 2008

Experts on axiology

Continuing on the theme that I began with my last post, where I evaluated Plato's arguments in the Meno that virtue can't be taught, I'm going to look at whether there are experts on virtue, or in fact experts on any values in general. Can one really be an expert on axiology? There are people with expertise in areas of axiology, but what these people have is more historical knowledge. They know about various theories, their strengths and weaknesses, arguments against them, the history of debates, etc. But the question is whether one can be, for example, an expert in ethics, or in beauty. Can someone be an expert on what is the best way to live, or on what is most beautiful?

This issue also comes up in Plato, in particular in the Protagoras, where Protagoras in his great speech basically makes the argument that all people are teachers of virtue and that a young person who needs to learn virtue learns from the community in general, from the great many experts around. It is similar to the way that you learn language: every native speaker is basically an expert, and you learn how to speak your native language from a great many of these experts. Nonetheless, Protagoras wants to justify his position as an instructor in virtue, for which he charges fees and earns his livelihood, by saying there are some people who are somewhat more knowledgeable of virtue and thus qualify as teachers. The parallel with language is useful. Although all native speakers know how to use the language correctly, some know it better: some know the written language and understand punctuation better, some have a wider vocabulary, some use the language more artfully, some can better explain the nuances of grammar that most of us employ without understanding. In short, there's a baseline of proficiency which nearly all people attain, but some outstrip the rest. Someone learning a language can use any of these people to reach this baseline, but if one wants to go further, one needs the assistance of a better expert.

But this parallel can mislead us. There definitely are people who live more virtuous lives, and yet all of us recognize that they live better lives. Would Mother Teresa be an expert on virtue? All of us can clearly see that she is a moral exemplar; all of us understand virtue well enough to know it when we see it. The difference is a matter of self-determination, discipline, will-power, a willingness to make sacrifices, or such. It'd be reasonable to say that Mother Teresa could be a good expert on how to do these things and thus better actualize the virtue we already recognize, but is she an expert on virtue more than the rest of us?

This is relevant because some put themselves forward as experts on virtue, or take pains to espouse their theories of virtue. But when it all comes down to it, despite propping themselves up as the experts, their ideas are juried by the masses, and many of these so-called experts are found wanting. Kant's detailed systematic ethics, carefully integrated into his elaborate system, was an incredible achievement. But no one really lives by his ethics. Consequentialism and virtue ethics are more a part of the moral choices of the average person, and deontology is mostly regarded as lifeless and inflexible. The important thing is that it really is the collective opinion of the masses that is the expert opinion.

Monday, September 29, 2008

Can virtue be taught

I've been reading through the Meno recently, which begins with the question of whether virtue can be taught. Socrates in his conversation with Meno initially comes to the conclusion that virtue is a sort of knowledge and that, as knowledge, it can be taught. But then he rejects the view that virtue can be taught, because there are no teachers of virtue (93a-94e). Presumably everyone can be a teacher of virtue, if we take seriously Protagoras' Great Speech in Plato's Protagoras. But Socrates runs into the great problem that great virtuous exemplars don't seem to invariably (or even frequently) have great virtuous exemplars for children. I don't think we can argue with the empirical facts. I have noticed too that great artists don't tend to produce great artist children. And a logical explanation is that one simply cannot teach someone how to be a great artist. It would seem that one's own children would be the first people one would try to teach if one could, but since it's unteachable, it makes sense that they wouldn't be able to match the standards set by their parents.

But there might be other reasons beyond the simple explanation that virtue isn't teachable. For one, being a great virtuous exemplar or a great artist is a full-time job (in fact, more than a full-time job, an over-full-time obsession even), and teaching someone how to be great in one's image is also a full-time job. Simply put, these great virtuous men cannot teach greatness because they don't have time. Socrates tries to dismiss this argument by saying that these great men would find teachers of virtue for their children in their stead, but this assumes that one could easily find such a teacher. And it entirely ignores the whole time issue: why would there be anyone who's not out doing great things and who has time to idle away teaching other people's kids how to be great? If such teachers existed they would be in hot demand, but if the best potential teachers are those who are doing the most great things, then they certainly have little time for teaching and would certainly prefer to focus their efforts on their own children.

Another factor is that teaching virtue may be a totally different thing from being virtuous. Teaching ain't as easy as it looks. Teaching someone something and having that lesson stick is not as simple as simply saying it. In the case of virtue, punishment is an important part of the teaching process, and it's easy to imagine that a person who's a good example of virtue may not know how to mete out judicious and effective punishment.

In addition, we might simply have a sampling bias of sorts. In other words, saying that great individuals can't quite bring their children up to their own exceptionally high standards doesn't exactly prove that virtue can't be taught, if their children are still better than average, even if short of the same standard of greatness as their parents. Great artists may not produce great artist children, but they frequently do produce talented and intelligent artists. They're just not quite as good as their parents, though still better than most of the rest of us. We might still be able to say that one can't teach one's children how to be a great artist or virtuous exemplar, because what defines that last extra quality that sets one apart as great is precisely an originality that one must discover on one's own. But still, perhaps virtue can be taught.

Thus, if we're to take Socrates' original arguments without this unconvincing problematization, it would seem virtue can be taught. Or can it?

Thursday, September 25, 2008

Healer's Fallacy

I want to look at a particular type of causal fallacy, which we might lump under "magical thinking," and which I have dubbed the "healer's fallacy." I really can't think of a better name, though I'm open to suggestions. Let me explain it first. Since the body tends to recover on its own from most minor illnesses, it becomes easy to confuse things that actually cure the disease with things that do nothing or only harm you slightly. For example, if you get a cold, then most likely you'll recover in a week or so. So, if you decided during that week to consume twice a day a shot of vodka mixed with mustard and Worcestershire sauce, you might be led to believe that it did some good. If you took a daily dose of homeopathic medicine, you might be led to the same conclusion. Maybe it might do some good through the placebo effect, but even if there is no placebo effect, it can still appear as if it cured your ailment. This can lead to confusions about which treatments are effective in the absence of carefully constructed experiments, and it propagates the myth of apparently effective home remedies. Thus, I would define the "healer's fallacy" as "a causal fallacy directly resulting from apparently affecting something that would happen anyway on its own." I use "on its own" very informally and broadly.

Another good example would be an orgonite cloudbuster (there are many videos of these things on YouTube). The cloudbuster is basically a solid block of solidified chemical resin, containing crystals and metal shavings, with some long copper tubes sticking out, that can apparently break up clouds when placed on the ground beneath them. Of course, the thing about clouds is that they are very fleeting and unstable and, given time, will always dissipate "on their own" (informally speaking). Thus, with a little patience, the cloudbuster will always seem to work, apparently confirming its effectiveness.
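To see how strong this illusion can be, here's a minimal simulation sketch (the 90% recovery rate is an assumption for illustration, not medical data): an inert remedy taken during a cold looks effective almost every time, simply because colds resolve on their own.

```python
import random

# Assumption for illustration: ~90% of colds clear up within a week,
# regardless of what the patient takes.
RECOVERY_RATE = 0.9

def recovered_within_week():
    return random.random() < RECOVERY_RATE

trials = 10_000
# The "remedy" does nothing, but with no control group every natural
# recovery gets credited to it.
apparent_cures = sum(recovered_within_week() for _ in range(trials))

print(f"Apparent success rate of an inert remedy: {apparent_cures / trials:.0%}")
```

The remedy's apparent success rate simply equals the natural recovery rate, which is exactly why only a controlled comparison can separate cure from coincidence.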

Economics is also prone to this healer's fallacy, because economies tend to grow and improve on their own as well. There are two reasons for this. 1) Two people will usually engage in voluntary exchange only if they both believe they are benefiting. The cumulative effect of lots of voluntary exchange is greater benefit for everybody. Even if people are sometimes mistaken about what will benefit them, or sometimes willingly make sacrifices for the benefit of others, the economy will still grow because a) the cumulative effect of many exchanges is towards overall greater benefit and b) those who most often make exchanges which most benefit themselves will tend to grow richer and thereby become a larger part of the overall economy. 2) People tend to try to improve their situation. When they do this in the context of voluntary exchange, the general result is people symbiotically figuring out how to produce more with their finite allotment of time. They thereby earn more money with which to purchase more goods and services, and also produce those goods and services in ways that require less money. Even those who don't try to improve their situation benefit as prices go down, since they can afford to buy more with the same amount of money, and thus can increase their real wealth.
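The first reason can be seen in a toy simulation (a sketch under simplifying assumptions I'm adding for illustration: one kind of good, fixed subjective valuations, and prices split down the middle). Trades happen only when both sides expect to gain, so total subjective welfare can rise but never fall:

```python
import random

random.seed(0)

# Each agent starts with money, some goods, and a fixed subjective
# valuation of one unit of the good (an assumption for the sketch).
agents = [{"money": 100.0, "goods": 10,
           "value_per_good": random.uniform(1, 20)}
          for _ in range(100)]

def welfare(agent):
    # Subjective welfare: cash plus goods valued at the agent's own price.
    return agent["money"] + agent["goods"] * agent["value_per_good"]

total_before = sum(welfare(a) for a in agents)

for _ in range(10_000):
    buyer, seller = random.sample(agents, 2)
    # Assumed pricing rule: split the difference between the valuations.
    price = (buyer["value_per_good"] + seller["value_per_good"]) / 2
    # Voluntary exchange: the trade occurs only if both parties gain.
    if (buyer["value_per_good"] > price > seller["value_per_good"]
            and buyer["money"] >= price and seller["goods"] > 0):
        buyer["money"] -= price
        seller["money"] += price
        buyer["goods"] += 1
        seller["goods"] -= 1

total_after = sum(welfare(a) for a in agents)
print(f"Total subjective welfare: {total_before:.0f} -> {total_after:.0f}")
```

Real economies are vastly messier, of course, but the one-way ratchet in the toy model is the point: a system built from mutually beneficial exchanges drifts upward on its own.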

This is important because it can seem like policies are improving the economy when they actually have no effect or even a negative effect. The economy is like a boulder rolling down a hill: attempts to speed it up are difficult and have little effect, whereas most policies end up simply getting in the way and slowing it down. The metaphors usually used suggest the economy is like a car that needs to be fueled, that needs to be "jump-started" when it slows, lest it stop, or that can "run out of gas." But the economy never stops: people will exchange so long as they are able. The great impetus of the Industrial Revolution was not a positive push, but the removal of barriers. It is no coincidence that the Industrial Revolution began in a country with a recent history of many philosophers advocating political philosophies of freedom.

This tendency of economies leads to certain epistemic problems when trying to understand them, since it can be hard to isolate whether something is helping the economy or preventing it from growing as fast as it could. This is why economists usually look at how things change: what conditions were like before a policy went into place compared to what they were afterward. But this still limits the empirical breadth of economics and makes empirical evidence difficult to come by.

Hume observed long ago that we can't really observe causality, only constant conjunction. What causality implies is universal correlation: that a will always lead to b. But we can never observe "always," only "all times in my experience." Thus, causal fallacies are always a threat when we attempt to understand the world, and the type of magical thinking that leads to the healer's fallacy is an understandable one, especially concerning certain phenomena.

Sunday, September 21, 2008

Lewis' Trilemma and the Reductio ad Absurdum

C. S. Lewis made a proof of Christ's divinity using a reductio ad absurdum, which appears in his book Mere Christianity (though he actually first used the basic argument in a BBC radio broadcast in 1943). According to Wikipedia, Lewis was not the first to formulate this proof, but since he's its most famous exponent, we'll call it his.

Lewis sets up what he calls a "trilemma," a decision between three options, to prove that Jesus is divine. Jesus claimed that he was God, which means that either he was speaking truthfully or untruthfully, and if untruthfully, he either realized he was speaking untruthfully or didn't. This gives us three possibilities: Jesus was either a liar, a madman or God. Lewis originally formulated this argument to oppose those who think that Jesus is a good moral teacher but not divine, since his status as a good moral teacher would automatically eliminate liar and madman. Nonetheless, to strengthen the argument, Lewis also made arguments that Jesus couldn't have been a liar or a madman, mostly resting on the assumption that he wouldn't have had so many followers following him around and sacrificing their lives for him if he were a liar or a madman.

The first problem with this argument is the claim that he actually did say that he was God or the son of God. Christian apologists who use this argument go to some length to establish this first point, since 1) Jesus never says directly that he is God, though he does seem to say it indirectly in a few places, and 2) the sources for Jesus' words were written well after the events of Jesus' life, in a language Jesus didn't speak, by people who may not have even met him. If Jesus never did say that he was God or the son of God, then it leaves open the possibility that he was none of the three.

But even if we accept this, the argument still has difficulties. A good way to test an argument is to see if we can use it to prove things we don't want it to, as Gaunilo did with Anselm's ontological proof. Here, we notice that we can use the argument to prove that anyone making fabulous claims who can persuade people to follow them and sacrifice their lives for them is neither a liar nor a madman. Therefore, Mohammed was the last prophet of God, Joseph Smith was told by God to found a new religion and was divinely guided in his translation of the Book of Mormon, and L. Ron Hubbard was able to see into our planet's ancient history, including the story of the great massacre by Xenu. In fact, Jim Jones probably sets the record for persuading people to sacrifice themselves, persuading over 900 followers to give up their lives in one day. Clearly, these various beliefs are irreconcilable, so Lewis' argument proves too much.

The fundamental problem is that the argument gives too much credit to Jesus' followers. These individuals were self-selecting and followed Jesus around because they genuinely believed his message (which seems to have been primarily focused on preparing ourselves for the imminence of the second coming). As an itinerant preacher Jesus had contact with probably thousands of people, and yet we have no indication that he had more than a handful of followers. In fact, he seems to suggest that sometimes whole towns would completely reject his preaching. On top of that, one of his followers betrayed him, perhaps for the very reason that he began to realize Jesus was either a liar or a madman. Jesus' movement wasn't significant enough to be noticed by any historian until Josephus made two brief mentions of Jesus in 94 AD, when the Christian movement had already had multiple generations to grow, and that growth was due to the persuasive ability of the subsequent followers, not of Jesus himself. In short, Jesus probably persuaded few people to follow him, and if we were to put the question to all those who had contact with him, most of them probably thought he was neither God nor even someone with the spiritual authority to be followed around. If we simply say, "those who followed him around, because they were willing to believe he was neither a liar nor a madman, were also willing to sacrifice their lives for him; therefore, they must have believed he was neither a liar nor a madman," then we have only made a circular argument.

We might even attack Lewis' first argument. There's no way we could automatically say that a liar or a madman couldn't be a great moral teacher. If a liar, he may have simply used this one noble lie in order to give greater authority to his teachings. And if his madness was limited merely to the status of his own divinity, he may still have been lucid on other critical moral matters, especially if he was simply repeating moral maxims directly from the Tanakh (Old Testament). In short, Lewis' argument fails in several different ways, and certainly cannot replace faith.

Wednesday, September 17, 2008

Kant's Antinomies

In Kant's 1st Critique, the Critique of Pure Reason, he employs four so-called "antinomies" to show the futility of using reason to answer certain unanswerable metaphysical questions: namely, is the universe limited or limitless, does time have a beginning, is space infinitely divisible, are we free, and is there extrinsic intelligence in the universe (e.g. God)? What Kant does with each of these debates is prove both sides. In other words, he proves both that the universe is limited and that the universe is limitless, showing that both sides can be logically proven. Interestingly, Kant proves both sides of the antinomies with apagogic arguments, namely indirect arguments using reductio ad absurdum. For example, to prove that time has no beginning he says: if time had a beginning, it must have been preceded by an "empty time" (a non-time before time). But how could time have emerged out of this "empty time"? Thus, time must be without beginning. But then he says: if we assume that time has no beginning, then we're led to the assumption that it recedes into the past infinitely, which means it has taken infinite time to get to the present. But how could we possibly pass through all this infinite time to get to the present? (Personally I'm not convinced by this latter argument, but let's overlook it for now.)

But the really odd one of the four antinomies is the third, about freedom. On the one hand, if there is no freedom, then all events are part of a causal chain of cause and effect, whereby each event is caused by a previous one, which is caused by a previous one, ad infinitum. The explicit problem he presents with this is that it prevents us from fully determining the cause of any event, because we can't determine every antecedent cause, and the cause of that cause, and the cause of that cause, etc. On the other hand, for the other side of the antinomy, he refutes the possibility of freedom. He says that if there are free acts, then these acts must occur outside the causal chain, which means that we can't comprehend them with our understanding, which operates by means of uniform laws. Thus, we really can't have experience of free acts.

In both cases, the argument relies on this idea of us being able to have experience. It is somewhat sensible because the larger goal of the antinomies is the refutation of "transcendental realism," which is the idea that the world is as it appears. He wants to say that if the world is as it appears, then we should be able to have complete experience of the world. But of course this doesn't follow, because we don't see the whole world, only small pieces of it, meaning it would take infinite time to experience the whole of it. This means that an infinite regress of causes is compatible with transcendental realism. And the other side of the argument doesn't work that well either. I can, of course, have experience of a free act; I just may not immediately know it is free, since I can assume it's part of a causal chain. The problem with proving freedom through reason is not that both sides of the argument lead to absurdity, but that neither side leads to absurdity, making the issue irresolvable.

Ultimately, Kant will want to say that freedom is provable by means of practical reason. The argument that he presents in his 2nd Critique, the Critique of Practical Reason, is basically that I have awareness of moral choice via practical reason (thereby assuming I have practical reason to begin with). For me to have moral choice requires freedom; therefore I have freedom. Of course, to say that I have accurate awareness of moral choice is a big assumption. One might as well just assume freedom directly and avoid the smoke and mirrors of trying to create the appearance of a legitimate argument.

Saturday, September 13, 2008

What's Aristotle to do without inertia?

Let's start looking at the reductio ad absurdum (which I explained in my last post) with Aristotle, since he uses it liberally as a logical technique, and not always most carefully. Aristotle is a careful thinker, but given the sheer quantity of arguments in his dense body of surviving works, there are bound to be more than a few stinkers. Now that the Large Hadron Collider has recently been started up, in commemoration of the fact that the earth has not been swallowed into nonexistence, let's talk about some of Aristotle's physics. This is a particularly good topic as well, since it is one place where Aristotle reaches some of his notoriously odd conclusions.

One problem that Aristotle deals with in his physics is the conservation of motion. Undoubtedly it happened - all types of projectiles were thrown in warfare, in the Olympics, and just in the everyday fun of throwing stones at your friends to harass them - but how did a projectile retain its motion after it left the hand or bowstring or whatever? Without the concept of inertia this could be a puzzling problem. Aristotle brings up this problem in his discussion of the void, which he rejects for a number of reasons. In this discussion, in Book IV.8 (215a14-19), he thinks that there can be only two possibilities: either 1) the thrown object causes some sort of cyclonic motion, whereby it displaces the air in front of it, which circles behind it and pushes it forward, or 2) the thrown object pushes forward a column of air in front of it at a greater speed, and the column of air drags the object behind it. The second seems completely implausible, since it still demands explanation of how the column of air continues in motion, which is good reason to reject it and accept (1). But ultimately, it's silly to imagine that these are the only two possibilities.

In fact, his explanation of how a thing keeps moving seems to contradict his argument that a void is impossible because objects would move within it at infinite speed (215b1-216a8). He basically says that the speed of an object is equal to the force divided by the resistance of the medium (f/r). This first of all doesn't make sense, since if the cyclonic motion were pushing the object, the increased resistance in front would be more than made up for by the increased push from behind, meaning objects would move equally fast through all mediums (which contradicts experience). But Aristotle argues that a void is impossible because it would offer no resistance, making r=0, producing f/0, which equals infinity. This doesn't follow, because one would only have to make a small adjustment to the formula (e.g. f/(r+c), where c is a positive number) to avoid infinities and still agree with empirical observation (and yes, I know the Greeks didn't use algebra, but it's so much easier for explaining to contemporary audiences; one could easily express it geometrically too).
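To make the contrast explicit in modern notation (my reconstruction for illustration, not anything Aristotle wrote), compare how the two formulas behave as the resistance of the medium vanishes:

$$v = \frac{f}{r} \;\Longrightarrow\; v \to \infty \text{ as } r \to 0, \qquad v = \frac{f}{r+c},\ c > 0 \;\Longrightarrow\; v \to \frac{f}{c} \text{ as } r \to 0.$$

The amended formula still says that thicker media slow things down, in agreement with ordinary observation, but speed through a void now tops out at the finite value f/c, so the absurdity of infinite speed never arises.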

In short, without the concept of inertia, this was the best argument for the nature of motion that Aristotle could make.

Tuesday, September 9, 2008

Reductio ad absurdum

The "reductio ad absurdum" is a type of apagogic (indirect) proof which proves something true by eliminating all other possibilities. The phrase "reductio ad absurdum" means "reduce to absurdity," and refers to the process of reducing to absurdity all other possibilities until there is only one left. The reductio is used quite frequently by Euclid in his Elements and in geometry in general. It makes good sense as a tool in geometric proof, but many a philosopher has tried to co-opt it into philosophical proofs, which is not quite as air tight.

So, to show how a reductio works: if you want to prove that point A falls on the circumference of circle Y, then you can use a reductio and say that point A is either inside circle Y, outside circle Y, or right on circle Y. If you demonstrate that both the possibility that point A is outside circle Y and the possibility that it is inside lead to absurdity - namely, some sort of paradox - then it follows, by process of elimination, that point A must fall on circle Y. Two critical assumptions are always being employed when one uses a reductio: 1) that the correct solution is among the possibilities listed, and 2) that all possibilities are included in the possibilities listed. (We should note that if condition 2 is met, then condition 1 necessarily follows; nonetheless, I think it is important to note them as two separate conditions, since if condition 1 is met then one can come up with a correct answer even if condition 2 is not met.) In the case of the geometrical proof we can be confident that we have all the possibilities: within our simplified space, occupied only by circle Y and point A, there can be only three. Geometry, and in particular Euclidean geometry, involves a very simplified space, built from the ground up from a finite set of definitions and axioms. The geometrical space in which geometrical proofs occur is finite and circumscribed. Thus, because we are perfectly aware of all the limits and rules of this space, we can say what all the possibilities within a reductio proof are.
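To see why the exhaustiveness is guaranteed here, we can formalize the three cases in modern notation (my formalization, not Euclid's): let O be the center of circle Y, s its radius, and d the distance from O to point A. Then, by the trichotomy of the real numbers, exactly one of the following holds:

$$d < s \ \text{(A inside Y)}, \qquad d = s \ \text{(A on Y)}, \qquad d > s \ \text{(A outside Y)}.$$

Trichotomy guarantees that the three cases are exhaustive, so refuting d < s and d > s genuinely proves d = s.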

But what happens when we are philosophizing on topics that concern the wider world? We can create neither a finite set of all definitions nor a finite set of all fundamental axioms for the wider world, and thus any confident assertion that the possibilities listed are all the possibilities is questionable. It's not that reductios aren't at all possible outside the well-circumscribed confines of geometric space, but there they are certainly more uncertain, more difficult, and more open to skepticism.

So, I make this entry as a preface to discussions of some reductio arguments in philosophy.

Next post: Aristotle

Friday, September 5, 2008

Nasty, Brutish & Short ain't so bad

Thomas Hobbes, in his Leviathan, made the case for strong central government on the grounds that it is necessary to evade the dangers of the hypothetical state of nature, in which there is no government. In this state of nature, life is "solitary, poor, nasty, brutish and short" because it's a war of all against all, with no one to prevent people from using force to get their way. The lesson I was taught when young and impressionable was that in a state without government, it would simply be the strong dominating the weak. Sounds reasonable, but unfortunately Hobbes is wrong. Peter T. Leeson wrote an article for cato-unbound.org about how anarchy isn't as bad as you think, with a number of historical examples to back it up. I think the general error that Hobbes makes is basically this: we can see from our perspective that a life that is nasty, brutish and short is undesirable (in fact this is the rhetorical thrust of his argument), and yet we assume that the people living in this situation can't see it, or are incapable of doing anything about it. But the fact is they can both recognize how undesirable it is and figure out ways to address the problem. One error many a political scientist and lawmaker has made is underestimating how surprisingly creative people are - give them a law they don't like and they'll find a way around it; likewise, give them a state of nature and they'll figure out a creative way to make the best of it. The Leeson article is good at showing some of these creative solutions.

The error in Hobbes is not so much that he misunderstands human nature and thinks that people are more vile in nature than they really are. In theory, a few vile people could really undermine attempts to live in peaceful harmony, since they would take advantage of the whole system. So, whether you only had a few vile people or most people were vile, the results would probably be similar. His error is assuming that government is the only way to curb the behavior of these vile people. Government is one solution, but there can be many others.

In addition, we can admit how power can corrupt, and how, if any of us were thrown into a situation without laws, temptation might lead us to do things we otherwise wouldn't. For example, if I were given complete legal immunity, I might very well be tempted to steal, in order to save money and to get stuff that I usually couldn't afford. Plato gives a similar example with Gyges' ring in the Republic: Gyges finds a ring that makes him invisible, and suddenly he uses it to seduce the queen, kill the king and take over the kingdom (359d-360d). We might think of the movie Jumper as a recent fictional example, wherein the ability to teleport anywhere instantly leads the character to live lawlessly. But the confusion here is speaking of individuals getting unique powers, whereas the state of nature applies equally to everyone. If everyone received a ring making them invisible, or everyone could teleport, then it might undermine the fabric of society, but only temporarily, since people would figure out ways to cope. People would come up with surprising ways to protect their person and property and recreate the guarantees that make trade and community possible, and thus allow people to attain prosperity.

I couldn't say exactly how, but that comes back to the surprising creativity that emerges when you ask a whole lot of people to try to solve a problem. They'll try things out, and some will work, and word of the successful ones will get around, and very quickly institutions will emerge that would astound even the most brilliant minds, like Hobbes and Plato.

Monday, September 1, 2008

Parmenides' refutation of change

Parmenides was a pre-Socratic philosopher from Elea. He is notorious for denying that there can be any change. He believed that everything is part of a single unified and unchanging whole, and that all apparent change is merely illusion. His follower Zeno extended this idea by providing further logical paradoxes which attempted to show that motion leads to essential contradictions that are logically irreconcilable. For example, he argued that motion isn't possible because in order to travel from A to B we have to travel half the distance, and then half that distance, and then half that distance, and so on through an infinite number of halves. But if we have to cross an infinite number of halves, how can we even get started? Aristotle simply rejected this argument on the grounds that we can observe things in motion, but this isn't very effective because Parmenides had already argued that motion is an illusion. The difficulty in this paradox is that of the infinite, which Greek mathematics couldn't handle, and it would require a far more sophisticated relationship with infinity in mathematics for this problem ultimately to be solved.
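In modern terms, the missing piece is the fact that infinitely many shrinking distances can sum to a finite whole, something Greek mathematics had no way to express. Taking the distance from A to B as 1, Zeno's halves form a geometric series:

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} \frac{1}{2^n} = 1.$$

The infinite number of halves adds up to exactly the finite distance, and at constant speed the corresponding times sum to a finite duration as well, so traversing all of them involves no contradiction.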

Parmenides' argument for the impossibility of change was twofold. First, he argued that for change to occur, something must progress from non-being to being, since something which was not before now is. For example, if I grow tall, I have to start from not-tall and then change to tall. But how could something possibly come from nothing? How could being come from non-being, since nothing is completely nothing? After Parmenides, thinkers would recognize that this absolute change (something from nothing) is indeed not possible, but that change is possible because things don't need to change completely: there is something that persists through the change. For example, if I grow tall, it is I who persists through the change. Not-tall to tall is not absolute change, because the I is the unchanging ground upon which the ball of change can roll.

Parmenides' other argument concerns the incomprehensibility of non-being. A world in which there is change requires a combination of being and non-being, but we can't possibly comprehend non-being, since it is absolutely nothing. Thus, the comprehensibility of the world would be undermined by change. Parmenides was again mistaken here, since the presence of change only undermines the complete comprehensibility of the world, an unfortunate fact which all of us who would like to know more about the world have to deal with.

I think the essential lesson to learn from Parmenides is the danger of thinking only in absolutes. Parmenides assumed that all change must be absolute change, and so rejected change altogether. He assumed that for the world to be comprehensible it must be completely comprehensible. He will not be the last philosopher to make this error. For example, one might say that since we can't have complete access to truth, we can't know anything at all.