Those eagerly awaiting part two will have to wait until tomorrow.
Thursday, March 19, 2009
Filmosophy: Deconstructing the Wall
When: Friday, March 20, 2009
Where: Hardtner, Room 128
What: Dr. Furman will discuss existential issues in Pink Floyd's The Wall.
Check out the poster here
Sunday, March 15, 2009
Two Dogmas of Empiricism
By Hanno
T. Furman asked me to write up a description of the classic article "Two Dogmas of Empiricism" by Willard V.O. Quine, perhaps the greatest American-born-and-bred philosopher. From Quine's first classic, "Truth by Convention" (1936), to a slew of classic articles in the '50s and '60s, Quine's work in philosophy and logic shaped a generation. Both of those articles contain criticism of one of the most powerful, lively, and influential philosophical movements the Western world has seen: Logical Positivism. Developed by German-speaking thinkers in the 1920s and '30s, Logical Positivism had many roots, but it contained a criticism of philosophy as it had been practiced before and during the '20s. A collection of like-minded intellectuals gathered frequently in Vienna and were called "the Vienna Circle." Many of these thinkers opposed the rise of the Nazis, and had to flee when the Nazis came to power, first in Germany and then in Austria. Some went to England, but many of the most influential went to the USA. These included Gustav Bergman, who landed in Iowa, where he taught two of my professors at the University of Texas, and Rudolf Carnap, perhaps the greatest of the lot as well as a socialist and pacifist, who landed at Harvard, where Quine also taught. Carnap had met Quine earlier, and the two had already formed a close connection. Quine's criticism of Logical Positivism focuses on Carnap's version. While good friends, they disagreed about many things, yet each influenced the other's work, as each responded to the arguments of the other.
Logical Positivism has two primary components, and it could only arise after developments in both science and logic. At its head is a belief in empiricism: that all knowledge is to be derived from experience. Empiricism had long had difficulty explaining our knowledge of mathematics, for knowledge of such necessary and universal truths is clearly not empirical. While Hume did not realize the difficulties empiricism faced, and so waved off math as simply being about relations of ideas, and hence simply part of logic, Kant pointed to some difficulties. Kant argued that sentences fall into one of four categories based on a two-by-two matrix. On one axis, sentences are either analytic or synthetic: either made true in virtue of the meanings of the parts of the sentence, or going beyond the content of the subject. In the sentence 'Tigers are mammals,' the subject 'Tigers' does not contain the predicate 'are mammals,' but in the sentence 'Bachelors are unmarried men,' the subject does seem to contain the predicate. We say, that's just what it means to be a bachelor. The other axis of the matrix is that sentences are known either empirically or a priori. Experience tells us that things are such and such, but not that they must be. Whenever some necessary claim is known, it must be known independently of experience, because experience simply cannot ground necessity.
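For readers who like the matrix drawn out, here is a minimal sketch (my own construction, not part of the original post) that hard-codes the standard Kantian classification of the example sentences above, plus the arithmetical case discussed next:

```python
# A toy rendering of Kant's two-by-two matrix; the classifications are
# stock Kantian examples, hard-coded purely for illustration.
kant_matrix = {
    "Bachelors are unmarried men": ("analytic", "a priori"),   # predicate contained in subject
    "Tigers are mammals":          ("synthetic", "empirical"), # predicate goes beyond subject
    "7 + 5 = 12":                  ("synthetic", "a priori"),  # Kant's controversial cell
    # Kant leaves the fourth cell (analytic, empirical) empty: what is true
    # by meaning alone never needs experience to be known.
}

for sentence, (kind, how_known) in kant_matrix.items():
    print(f"{sentence!r}: {kind}, known {how_known}")
```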
Math, then, is the first exception to empiricism for Kant: mathematical truths are necessary, and hence known a priori, but they are also, he argued, not analytic claims. In particular, denying '2+2=4' does not create a contradiction, certainly not until you have a definition for '2' or '4.' On the face of it, '4' does not contain '2+2.' Denying that the shortest distance between two points is a straight line similarly creates no contradiction, nor does the idea of a line contain 'shortest distance between two points.'
Frege showed, however, that this was the product of not understanding mathematics clearly. In particular, a more powerful logical system, together with naive set theory and clear definitions of what the numbers actually are, yields a system which answered Kant's problems. In doing so, Frege showed that you could conceive of arithmetic as merely part of logic, and that Hume was right in the end. Notice, Hume was not right for his reasons; he was simply asserting dogmatically that arithmetic consists of relations of ideas. In the logic of his day, that simply was not true: there was no way to prove most of what mathematicians were studying using Aristotelian logic. Other thinkers soon followed, showing that geometry could also be treated as a mere part of logic. Logic plus definitions yields all of math. Principal among these thinkers was Bertrand Russell, and the first effort at this was his classic The Principles of Mathematics.
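The logicist point can be made concrete in a modern proof assistant. The following Lean sketch is my illustration (Frege and Russell of course used their own systems, not Lean): it defines numbers and addition from scratch and then obtains '2 + 2 = 4' by sheer unfolding of definitions, with no appeal to experience.

```lean
-- Numbers defined from nothing but a zero and a successor operation.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition defined by recursion on the second argument.
def add : N → N → N
  | n, zero   => n
  | n, succ m => succ (add n m)

def two  : N := succ (succ zero)
def four : N := succ (succ (succ (succ zero)))

-- Once the definitions are in place, the proof is pure logic:
-- both sides unfold to the same term, so reflexivity closes it.
theorem two_add_two : add two two = four := rfl
```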
This then was one leg of Logical Positivism: Mathematics is simply a part of logic, and following Wittgenstein, logic does not give facts about the world, but simply describes our use of certain symbols. In other words, since mathematics does not describe any real truths, it is not a serious objection to empiricism. It is this view that Quine takes to task in "Truth by Convention."
The other side of Logical Positivism is empiricism. Actual questions about how the world is must be tied to experience. Now again, Hume had stated that the meaning of a word is a combination of sense impressions. Though Hume does argue for this claim, the argument is not very good. Indeed, his treatment of the idea of 'cause' is a case in point: Hume argues that all words must be tied to sense impressions to have meaning, and that 'cause' is not tied to an impression, so that the word 'cause' has no meaning. But earlier he tells us that his view that all words are tied to sense impressions rests on a challenge: show me a word that is not tied to an impression, and it is up to me, if my view is right, to show how it actually is tied to an impression. He proceeds to do just that with 'God,' for example. By the time he gets to cause, however, his belief that the meaning of a word is a combination of sense impressions has become dogma. There he declares that the word 'cause' is meaningless because there is no impression from which to derive the idea of cause, and hence the word has no meaning.
But it is the dogma that is doing the real philosophical work. Now Frege, in his work on logic, argued that the meaning of words is a red herring: the real source of meaning is the sentence. Words only have meaning in the context of a sentence, and thinking of words as the primary bearer of meaning creates confusion. You start to think that properties are real things, when in fact properties are incomplete ideas that become complete only in a sentence. Frege counseled us to "never ... ask for the meaning of a word in isolation, but only in the context of a proposition" (Foundations of Arithmetic). Wittgenstein accepted this, and the positivists accepted it as well.
No longer would it matter whether each term in a sentence is tied to a sense impression, but whether the sentence as a whole is tied to experience. But to which experiences? On this the positivists differed, but the most memorable answer was the verificationist principle of meaning. This can be fleshed out in two ways, the first less specific than the second. In general, verificationism holds that a sentence is meaningful if and only if it is either a proposition of logic (a tautology) or there is some sense experience which could lead one to accept it as true. For claims about the world, this is especially important, and they used this principle to banish bullsh*t from philosophy. If a sentence cannot in principle be verified by experience, then the sentence is not really a proposition at all, but a pseudo-proposition. It sounds like it says something, but it does not. So claims about causal connections are legitimate if there is some experience which would lead someone to accept or reject the claim, even if the idea of cause is not a copy of an impression. Other claims, like "The Absolute enters into, but is itself incapable of, evolution and progress," are meaningless. No one has the slightest idea what experience would lead one to accept such a claim.
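As a toy model (my own sketch, not anything the positivists themselves wrote), the principle can be stated almost as a one-liner: a sentence is meaningful just in case it is a tautology or some possible observation bears on it.

```python
# A toy formulation of the verificationist principle of meaning.
# 'verifying_observations' stands in for "the sense experiences that could
# lead one to accept or reject the sentence" -- an idealization, of course.

def is_meaningful(sentence, is_tautology, verifying_observations):
    """Meaningful iff a truth of logic, or empirically testable in principle."""
    return is_tautology(sentence) or len(verifying_observations(sentence)) > 0

# The Absolute example: not a tautology, no conceivable verifying experience,
# so the principle brands it a pseudo-proposition.
absolute = "The Absolute enters into, but is itself incapable of, evolution and progress"
print(is_meaningful(absolute, lambda s: False, lambda s: []))  # False
```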
But why would verificationism be true? The basic idea is that it is irrational to argue about things that in principle no reason or experience can show to be either true or false. That cleavage is a chasm: either reason has something to say (and hence logic will clear the air), or experience has something to say (science), or the claim is meaningless, a pseudo-proposition. In the hands of a master, this doctrine becomes an executioner's blade, slicing heads off.
What shows us that a claim is meaningless, however, if neither reason nor experience can undermine it? Answer: the thesis that the meaning of a sentence just is its method of verification! Then it follows that a sentence that has no method of verification, if not a tautology, is meaningless. And now you can see the work done by Logicism, the view that mathematics simply is a branch of logic: if that were not true, then math, too, would be banished as a pseudo-proposition, a result so wholly absurd that no one would accept it.
These are then the two dogmas of empiricism: that statements can be divided into analytic claims on the one hand and synthetic claims on the other, and that sentences mean their method of verification (or falsification).
Quine will show the second claim to be false. He will use that to undermine the first claim. And that will dull the edge of the executioner's axe.
Thursday, March 12, 2009
Ockham's Razor and Descartes

Over on The Philosophers' Magazine blog, an interesting post appeared concerning Descartes' vegetarianism?! According to Bloodless Revolution, a history of vegetarianism by Tristram Stuart, Descartes was a vegetarian. This is very surprising because Descartes is famous for his view of animals as mere machines.
From the Internet Encyclopedia of Philosophy:
One of the clearest and most forceful denials of animal consciousness is developed by Rene Descartes (1596-1650), who argues that animals are automata that might act as if they are conscious, but really are not so (Regan and Singer, 1989: 13-19). Writing during the time when a mechanistic view of the natural world was replacing the Aristotelian conception, Descartes believed that all of animal behavior could be explained in purely mechanistic terms, and that no reference to conscious episodes was required for such an explanation. Relying on the principle of parsimony in scientific explanation (commonly referred to as Occam's Razor) Descartes preferred to explain animal behavior by relying on the simplest possible explanation of their behavior. Since it is possible to explain animal behavior without reference to inner episodes of awareness, doing so is simpler than relying on the assumption that animals are conscious, and is therefore the preferred explanation.
To be fair, Descartes was a vegetarian for mostly health, not ethical, reasons. However, Stuart suggests that Descartes was disturbed by the problems surrounding sentience. If animals did feel pain, then it would be wrong to kill them for food (see Singer: Animal Liberation 1975).
An easy way around this puzzle is simply to assume that animals are not conscious and feel no pain. Moreover, this is the simplest argument. However, as Todd pointed out in his post, the simplest argument is not always the right argument. Animal consciousness is a hotly debated topic in animal ethics (see the Stanford Encyclopedia of Philosophy entry on Animal Consciousness for a primer). Ockham's razor, much like faith, shouldn't be used as a cognitive default that prevents us from exploring the complexity of the world around us.
~guybrarian
Monday, March 9, 2009
Ockham's Razor and Truth
By T. Furman
Gimbel tells us that Plantinga’s attempt to show that Christianity and evolution are compatible is a red herring. After all, supposing that Christianity is compatible with evolution, one must then decide which theory to endorse. And this is supposed to be a no-brainer given Ockham’s Razor: Evolutionary theory is simpler than the Christian competitor, evolution directed by a divine will.
I agree with this to a point, so long as the evolutionist isn't actually claiming more than she is entitled to. Ockham's razor doesn't tell us which theory is actually true. So, if this is what the evolutionist is actually pushing for when she argues that evolutionary theory is to be preferred, then I protest the overreaching conclusion.
And here is a funny thing. Just suppose that God did create the universe and guided evolutionary selection. The universe would probably look much as it does now. But notice this: given our scientific deference to all things empirical, our approach to understanding our origins would preclude us from hitting on the truth of the matter. And I think that scientists ought to really ponder this fact, as it shows that science is not as objective as it is usually made out to be.
Finally, there might be a reason for preferring the Christian/Evolutionist view of things over the straight Evolutionist one, as the Christian Evolutionist can explain certain phenomena that the plain Evolutionist can't: miracles.
Saturday, March 7, 2009
The Bookmobile: Defining the Information Poor

~guybrarian
This blog post was selected for publication in the Journal of Bloglandia Vol. 2 No. 1
INTRODUCTION
Information is the buzzword of the 21st century. Social scientists have prophesied that we've suddenly become an "information society" with an "information economy." Drawing on this model, Al Gore arguably coined the term "information superhighway," referring to the Clinton/Gore administration's plan to deregulate communication services and widen the scope of the internet. But as a nation, America has always prided itself on its information-delivery channels, from public schools to the postal service. And perhaps the best-recognized repositories of our society's information are its public libraries.
From their earliest inception in the mid-1800s, public libraries were idealistically conceived as places where American democracy would flourish as all citizens enjoyed equal access to the abundance of the world’s collected record of human knowledge. In reality, however, these institutions were often created by and operated for the Anglo-Saxon, educated middle classes. Whether intentionally or not, library holdings, furnishings, programs, and even hours of operation all sent a powerful message about who controlled access to information in our society and provided the basis for defining the information rich and the information poor.
Outreach, as defined by the Dictionary for Library and Information Science, is "a library public service program initiated and designed to meet the information needs of an unserved or inadequately served target group." The bookmobile, from its inception, embodied this service mission. As a corollary, the library materials and driving route for a bookmobile provide fertile ground for analyzing the information poor in a community. A study of issues surrounding bookmobile service should provide a stark depiction of the powers of and limits to public libraries under the most democratic of intentions. And in exploring the information carried by the bookmobile, as well as the patrons served by it, the continuing role of the bookmobile as a pivotal resource for providing information to inadequately served populations may find renewed interest as an agent in closing the digital divide.
This paper is divided into four main sections, each exploring issues surrounding the efforts of bookmobiles in serving the information poor nationally:
• Brief History of the Bookmobile Program
• Bookmobile Patrons
• Bookmobile Holdings
• Future of the Bookmobile Program in America
BRIEF HISTORY OF THE BOOKMOBILE PROGRAM IN AMERICA
The birth of the bookmobile in the United States took place around the turn of the century (circa 1900) at the Washington County Free Library in Hagerstown, Maryland. In order to service its 66 remote "deposit stations" in stores and churches, each holding around 35 volumes, the library hired a horse and wagon to carry books back and forth to these stations three times a week. For its time, this was considered the pinnacle of innovation in terms of library extension services.
Shortly after establishing the program, one rather astute librarian proposed in 1905 that the library purchase its own horse and wagon, adding shelves so that it could serve as a surrogate "branch library" itself, in addition to its routine deposit station deliveries. As an added bonus, this "book wagon" would also serve as a decorative "advertisement" for the main library. A year later, in 1906, Melvil Dewey advocated a theoretical model of what he called "field libraries," where a "traveling librarian would give a day or two each week or month to a locality too small to afford his entire time." In 1912, the Washington County Free Library constructed what would become the first true bookmobile (abandoning deposit station deliveries), a custom-built International Harvester Autowagon. It was reported that "the bookmobile carried 2,500 volumes" — more than all the deposit stations combined — "and covered a 500 square mile territory in a place where there was virtually no high school."
The idea of a "mobile library" spread quickly across the country, as it provided an inexpensive way for libraries to serve the information poor. According to Eleanor Frances Brown, a custom-made bookmobile could be purchased for under $1000 in the early 1900s. In 1915, the town of Hibbing, Minnesota, developed what would become the standard prototype for bookmobile production for the next 75 years. In order to combat bitterly cold winters, the library designed the first "walk-in" bookmobile, complete with a coal stove.
By 1937, bookmobile production in the United States was on the increase, with 60 bookmobiles in existence at the time. The popularity of the service prompted the American Library Association to provide guidance for libraries wishing to acquire a bookmobile, issuing an advisory volume called Book Automobiles in 1937. Unfortunately, this advisory volume was little used, as bookmobile production and service were put on hiatus while the country weathered the Great Depression and World War II during much of the 1930s and 1940s.
After WWII, the American economy improved and the trend was definitely toward growth. New companies emerged to manufacture specifically designed vehicles that could accommodate whole classes inside at one time. From 1956 to the end of the 1960s, bookmobiles experienced their most rapid growth, in both rural and urban areas. This expansion was largely due to the initiation of, and amendments to, the Library Services Act (LSA). In 1956, the LSA provided $40,000 to each state that complied with its provisions - most notably the extension of library services to rural areas with populations of 10,000 or less. As a result, a third of federal and state money was spent on bookmobiles, which were said to have "improved service for over 30 million rural people and provided new service for another 1.5 million." In 1964, the Library Services and Construction Act removed the population limits of the previous Library Services Act, meaning that urban areas were now also eligible for funding. The idealism of the 1960s, matched with federal funding, provided the backdrop for the "golden age" of the bookmobile.
Unfortunately, the 1970s were the beginning of a steady decline in mobile services. According to Catherine Alloway, in her book The Book Stops Here: New Directions in Bookmobile Services, "many idealistic outreach goals and programs of the morally charged sixties came to a screeching halt with the financial woes of the seventies, even though gasoline was always a relatively small portion of any bookmobile's budget." The negative influence would have a lasting impact, wrote VanBrimmer: "The fuel crisis began to ease by 1982, but the cost of fuel remained inflated and the startup costs of bookmobile services became a budget problem to a cost-conscious library community." This trend has continued into the 21st century, as erratic fuel prices, coupled with advances in digital technology, have expedited the bookmobile's demise. According to the ALA, between 1990 and 2003 the number of bookmobiles in the United States decreased from 1,102 to 864.
BOOKMOBILE PATRONS
One of the most basic models of the communication process begins with a simple triangle: sender, message, and receiver. Someone has to talk, something has to be said, and someone has to listen. If we map that model onto an overview of information handling, we could say information has to be generated, it has to be transmitted, and it has to be understood. The information age functions on the implicit assumption that information transmission problems are purely technical. After all, optical fiber and satellite delivery systems distribute information across the globe. However, for those without the economic means to afford technological advancements, barriers to access may be cultural, psychological, or physical. For a variety of reasons, bookmobile patrons are non-users of traditional libraries (or even the internet). Therefore, identifying various obstacles to patron access is critical, because outreach to these non-users is the singular mission of bookmobiles.
Physical
The elderly and handicapped have been a bookmobile target market since the 1960s. It was in 1960 that the practice began of constructing bookmobiles with wheelchair lifts to serve this community as effortlessly as possible. Even today, the mentally or physically challenged comprise a significant segment of the bookmobile user population. According to Jan Meadows’ 2000 survey of bookmobiles in rural areas, “seniors, school children, and teachers are by far the largest segment of the population served. However, 40 percent of the respondents serve the mentally or physically challenged, and 31 percent serve the home bound.”
Psychological
The anxiety from the overwhelming size of a traditional library is often alleviated by the bookmobile. Non-users of traditional libraries enjoy the “personal service” aspect of the bookmobile. Some people find it easier to use a small collection than a large library. As Brown states, “a bookmobile does not overawe or confuse them by sheer numbers of books.” In addition, the limited nature of the bookmobile actually creates an aura of excitement analogous to the ice cream truck. Owing to the maxim that we appreciate more that which we have less often and take for granted that which we have all the time, bookmobiles attract patrons through their innovation and design. Anne Valente, a former reference librarian, echoed this sentiment when she recounted her experience of the bookmobile:
In the summertime, when the Craig Elementary School library [in St. Louis] was closed for the season, we drove to the local county branch where my sister and I would often check out 10 books at a time – the maximum limit our library cards would hold. Though I loved the county branch, with its immense card catalog and its bean bag chairs in the children’s section, I loved the bookmobile even more. The bookmobile regularly stationed itself in the Craig School parking lot, just two blocks from my home, and we often walked there on summer mornings before heading to the pool in the afternoons. The trailer’s musty smell and its endless rows of book spines comforted me, and the satisfying stamp of ink within each book’s back cover meant it was mine for at least two weeks.
Cultural
One of the goals of the bookmobile right from its very inception has been to bridge cultural barriers. When bookmobiles were mainly for rural patrons, there was also a class barrier being breached, since the culture of the city was very different from that of the country. As Brown claims, the small scale of bookmobile collections can entice fearful readers - there is no austerity or speaking in hushed voices. As a corollary, the bookmobile breeds a culture where informality prevails. Rural patrons who might hesitate to go into a large, urban branch and ask for a book frequent the bookmobile with little coaxing.
On the other hand, in urban areas the barrier is not so much distance or isolation as it is time itself. To punctuate this point, Peter Andros constructed a "lunch-hour outreach" to white-collar office workers on Wall Street. While most companies declined to allow their employees to participate for fear of lost productivity, a Dow Jones office of 1,000 employees agreed to the service, with favorable results. Not only does this anecdote illustrate the importance of outreach in the most sophisticated of urban settings, it also serves as a powerful critique of post-industrialist theorists who herald the mobility and freedom of the white-collar worker. Even in the information-processing workplace, there are barriers to be breached.
BOOKMOBILE HOLDINGS
The bookmobile provides the interface between librarian and patron, but it is the content carried by the bookmobile that provides the reason for this meeting in the first place. By examining issues surrounding bookmobile holdings, one can explore the motivations behind both the librarians and the patrons, and perhaps decide whether the stated goals of the bookmobile are truly served by the information that is delivered. As with all collection development, deciding on what types of information to collect is dependent upon assessing the general character and needs of the community. The categories of information to be collected include:
• Historical development of the community
• Geographical and transportation information on growth patterns and population distribution
• Political and legal factions
• Demographic data (e.g., age characteristics, size, race, and transience of the population)
• Economic data
• Social, cultural, educational, and recreational organizations
Bookmobiles are a special case of the public library, though, because of their limited collection capacity and their selective targeting of a small audience. The technical name for bookmobile service is "portable materials distribution" and, according to the most recent study, 50 percent of today's bookmobiles carry fewer than 2,500 items. The limited space of the bookmobile cannot be neglected as a major contributor to collection development. Obviously, different types of books (paperbacks, reference works) tend to take up different amounts of shelf space. For example, while one may be able to efficiently shelve up to 20 juvenile books in a single linear foot of shelf space, only 5 law or medicine books can be shelved in the same space.
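Those two densities make the space constraint easy to quantify. Here is a back-of-the-envelope sketch; the 100-foot shelving figure and the 80/20 split are invented for illustration, and only the per-foot densities come from the figures above:

```python
# Books shelvable per linear foot, from the figures quoted above.
BOOKS_PER_FOOT = {"juvenile": 20, "law_or_medicine": 5}

def shelf_capacity(feet_by_category):
    """Total items for a given allotment of linear shelf feet per category."""
    return sum(BOOKS_PER_FOOT[cat] * feet for cat, feet in feet_by_category.items())

# A hypothetical bookmobile with 100 linear feet of shelving, split 80/20:
print(shelf_capacity({"juvenile": 80, "law_or_medicine": 20}))  # 1700 items
```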
But ideally, content decisions should be made on more thoughtful criteria than shelf space. On the one hand, bookmobile content may often be a reflection of the holdings of the main library. On the other hand, as Brown points out, each bookmobile route may have its own objectives, and each collection should support these objectives. For example, a route focused on providing "temporary" service to both children and adults in anticipation of a future branch site might stock: attractive and popular general books for adults (to entice this group and win support for the future branch), only the best children's books (because space is at a premium and school libraries have other books), and no reference materials (due to space restrictions).
One debate worth noting in more detail is the question of the "reading level" that the bookmobile should serve. Since the bookmobile functions as an outreach service to the information poor, the materials circulated will inevitably point to the types of information that these non-traditional library users seek. When bookmobiles first came along, the fiction question wasn't even an issue. The librarian of the first bookmobile noted that the demand for "best sellers" was virtually nonexistent, because her patrons were so rural that they did not receive news of such mass market movements. But bookmobiles soon gained a reputation for being vehicles full of "light" reading. According to Vavrek's study, 65 percent of bookmobile titles are adult fiction. In addition, when asked what they were checking out, 60 percent of bookmobile patrons were checking out leisure reading, while only 30 percent were checking out a general knowledge book. As a result, a non-traditional library user's reading level may make him/her wary of a bookmobile service that provides mostly popular fiction.
FUTURE OF THE BOOKMOBILE PROGRAM IN AMERICA
As was stated earlier, the long history of the bookmobile began a slow decline in the mid-1970s. The fuel shortage of the early 1970s, combined with spates of government money that allowed the opening of new branch libraries in suburbs and outlying areas, did diminish enthusiasm for the bookmobile, but now, in the 21st century, its use appears to be growing once again. Without question, the "information superhighway" of the 1990s, which promised on-demand access to information for anyone with a phone or cable TV line in their home, aided in the demise of the bookmobile. But rather than seeing this as a threat, some bookmobile programs have attempted to embrace the very technologies that threaten them. After all, integrating cutting-edge technology with the concept of a mobile library was an essential element in launching the bookmobile: replacing the horse and wagon of the earliest bookmobiles with the automobile was itself an example of a cutting-edge technology used in a novel way. Furthermore, in the late 1960s, the debate was over what kinds of automated check-out systems would be feasible in a mobile library. The same issues arose then as now: there were questions about the availability of adequate and stable power, there was hesitance at the initial cost of automation, and there was a fear that the equipment would detract from the personalized service so prized by bookmobile librarians and patrons.
Succinctly stated, improvements in technologies that enable mobile online access have turned bookmobiles into mobile computer labs. The online bookmobile represents a new era of library service, in which computer access is no longer limited by geography. When a 1998 survey in Pennsylvania revealed that many people did not have internet access at home, the author hinted that bookmobiles may be a useful tool in bridging the technology gap. A few years later, the Memphis/Shelby County Public Library developed a completely adaptable, 40-foot-long, computerized "InfoBUS" to bring library services to non-English speakers in Memphis and Shelby County.
As a fully operational mobile unit focused on computer services, this adapted bookmobile provides training on Windows, Internet use and safety, word processing and other programs, and access to valuable online databases. Moreover, the staff can make specific programs, like computer training, the focus of a particular day’s schedule if needs demand it. InfoBUS meets its goal of serving families who do not have access to a computer or the internet in a number of ways. At any given time, the mobile unit’s collection and programming can include information on becoming an American citizen, ESL materials, foreign-language materials, life skills information, and homework help.
The Digital Divide
Analogous to the information divide between rural and urban populations at the turn of the 20th century, the digital divide is a growing gap in the 21st century. According to the World Economic Forum's Annual Report of the Global Digital Divide Initiative, "there remains the stark disparity between two types of world citizens: one empowered by access to information and communication technologies (ICT) to improve their own livelihood; the other stunted and disenfranchised by the lack of access to ICT that provide critical development opportunities." As a global tool, the Digital Opportunity Index (DOI) is a composite index that measures "digital opportunity," or the possibility for citizens of a particular country to benefit from access to information that is "universal, ubiquitous, equitable and affordable." The index analyzes each country within the context of three distinct categories: utilization, infrastructure, and opportunity. In an effort to quantify and address the growing digital divide globally, the index generates and updates a map showing where the most disparity exists, such as in the continent of Africa and in India. Although the index is intended for global monitoring of the digital divide, the framework could be utilized in mapping out digital disparity nationally; the United States is identified as a country with a great amount of digital opportunity.

As was stated earlier, outreach is a library public service program initiated and designed to meet the information needs of an unserved or inadequately served target group. Increasingly, the information needs of unserved populations manifest themselves in access to information and communication technologies. Thus, bookmobiles wishing to fulfill the outreach goals of public libraries must begin to adapt to, rather than fold under, emerging technological advancement. Unfortunately, according to Meadows' 2000 survey, many bookmobiles are still working without the benefit of being online: only 17 of the 121 services are online, with four more in the process. Nineteen services have laptops that are loaded with current borrower information each morning, and the information is uploaded into the main system again each evening.
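To make the index's structure concrete, here is a minimal sketch. The equal weighting of the three categories and the sample scores are my assumptions for illustration; the post does not specify the ITU's actual aggregation formula:

```python
# A toy composite in the spirit of the Digital Opportunity Index:
# each category is a subindex scaled to [0, 1], averaged into one score.

def digital_opportunity(utilization, infrastructure, opportunity):
    """Composite score from the three DOI categories named above."""
    for v in (utilization, infrastructure, opportunity):
        if not 0.0 <= v <= 1.0:
            raise ValueError("subindices must lie in [0, 1]")
    return (utilization + infrastructure + opportunity) / 3

# Hypothetical countries: scores are invented, not ITU data.
print(digital_opportunity(0.70, 0.85, 0.90))  # high digital opportunity
print(digital_opportunity(0.10, 0.15, 0.30))  # stark digital divide
```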
Libraries face a world of new and changing demographics and patron needs. It is imperative that they recognize the necessity of embracing emerging technologies and incorporate innovative methods to address the diverse needs of library patrons. Implementation of new best practices and creative strategies is needed to address the ever-changing needs of the library's patrons. Bookmobiles are an often overlooked but nevertheless critical aspect of outreach service in the 21st century. They exist in both urban and rural areas, but it is in digitally disadvantaged communities where bookmobiles can make the most difference in terms of addressing access and equity of IT service in the future.
CONCLUSION
From its inception, the bookmobile has targeted the "information poor" by overcoming cultural, physical, and psychological barriers to access and by developing collections around the needs and reading desires of its patrons. Some of the challenges that face bookmobiles in the 21st century have been around since the early 20th century. Many were identified by Brown in her classic work on bookmobiles in the 1960s: materials are limited because of space constraints, time for people to use the bookmobile is limited at each stop, fluctuating fuel costs must be accounted for in budgets, and the quantity of juvenile materials often discourages adults. However, in 2009, the bookmobile continues in its outreach role as a pivotal resource for providing information to inadequately served populations.
Transitioning into this new century, described as the information age, we largely function on incorrect assumptions. It is assumed that masses of information are being generated. Certainly, one cannot deny that IT has allowed the generation of knowledge to expand at an increasing rate. However, implicit in that assumption is the idea that such information is being distributed equally at an accelerated rate. Unfortunately, too much information remains unavailable, even to the information rich, let alone the information poor. Conversations within the IT community center on increasing bandwidth as a solution to information flow without considering whether segments of the population even own a computer, how information channels open and close across cultural and physical differences, or how economically information moves from one place to another and why it often cannot move at all. If we are going to take advantage of developments in information access, it is imperative that research continue in measuring the growing digital divide and the ability and resources available to close the gap. One resource that should not be overlooked is the bookmobile.
Whether we are truly in the information age or not, technological developments have lessened the isolation of certain populations (information rich) while increasing the isolation of other populations (information poor). Bookmobiles continue to play a role in bridging these communities by discovering new audiences for library services, providing technological opportunities to these populations, and retaining the person-to-person relationship with the patron. As we move into an age that is more and more virtual, with more and more information to sort our way through, I can only believe that the kinds of services offered by bookmobiles will become more and more important themselves, no matter what form they take.
Bibliography
Kumar, K. Prophecy and Progress: The Sociology of Industrial and Post-Industrial Society. New York: Penguin, 1978: 185-240.
Fuller, Wayne E. The American Mail: Enlarger of the Common Life. Chicago: University of Chicago Press, 1972: 109-47.
McMullen, Haynes. American Libraries before 1876. Beta Phi Mu Monograph Series, No. 6. Westport, CT: Greenwood Press, 2000: 125.
Reitz, Joan M. (ed.) Dictionary for Library and Information Science. Westport, CT: Libraries Unlimited, 2004.
Levinson, Nancy Smiler. "Takin' It To The Streets: The History of The Book Wagon." Library Journal 116.8 (May 1991): 43.
Dickson, Paul. The Library in America: A Celebration in Words and Pictures. New York: Facts on File Publications, 1986: 98.
VanBrimmer, Barb. "History of Mobile Services." In Catherine Suyak Alloway (ed.), The Book Stops Here: New Directions in Bookmobile Service. New York: Scarecrow Press, 1990: 35-52.
Brown, Eleanor Frances. Bookmobiles and Bookmobile Service. New York: Scarecrow Press, 1967: 68.
Williams, Patrick. The American Public Library and the Problem of Purpose. Westport, CT: Greenwood Press, 1988: 24.
Alloway, Catherine Suyak (ed.) The Book Stops Here: New Directions in Bookmobile Service. New York: Scarecrow Press, 1990: 16-18.
"Bookmobiles in the United States." American Library Association. 2 April 2007. http://www.ala.org/ala/ors/statsaboutlib/bookmobiles/bookmobiles.htm
Meadows, J. "United States Rural Bookmobile Service in the Year 2000." Bookmobile and Outreach Services 4.1 (2001): 48.
Valente, Anne. Personal interview. 22 Mar. 2007.
Andros, Peter J. "Bullish on the Bookmobile: The Story of Public Service to Dow Jones & Company, Inc." Wilson Library Bulletin 67.9 (May 1993): 50.
"Community Needs Assessment." Collection Development Training for Arizona Public Libraries. 4 April 2007. http://www.lib.az.us/cdt/commneeds.htm
Vavrek, Bernard. "Rural Road Warriors." Library Journal 115.5 (March 1990): 56-57.
Vavrek, Bernard. "Asking the Clients: Results of a National Bookmobile Survey." Wilson Library Bulletin 66.9 (May 1992): 35.
Logsdon, Lori. "Bookmobile Online Circulation via Cellular Telephone." Computers in Libraries 10.4 (April 1990): 17-18.
Kennedy, L. "Pennsylvania Bookmobile Survey." Rural Libraries 18.1 (1998): 23-30.
King, B. & Shanks, T. "This is Not Your Father's Bookmobile." Library Journal 125.10 (2003): 14-17.
World Economic Forum. "Annual Report of the Global Digital Divide Initiative." Geneva: World Economic Forum, 2002: 3.
"Digital Opportunity Index." International Telecommunications Union. 15 April 2007. http://www.itu.int/osg/spu/statistics/DOI/index.phtml
"FY2007 Library Services and Technology Act Grant Offerings." Illinois State Library. 14 April 2007. http://www.cyberdriveillinois.com/departments/library/what_we_do/lsta2007.html
Roberts, Alasdair. Blacked Out: Government Secrets in the Information Age. New York: Cambridge University Press, 2006.
Wednesday, March 4, 2009
Trolley-ology
In philosophy club on Monday we discussed the trolley problem and what course of action you would pursue and why. For those of you unfamiliar with this famous problem of applied ethics, there are two ways to frame the question:
Trolley A
You are standing by a railway line when you see a train hurtling towards you, out of control; the brakes have failed. In its path are five people tied to the tracks. Fortunately, the runaway train is approaching a junction with a side spur. If you flip a switch you can redirect the train onto this spur, saving five lives. That’s the good news. The not-quite-so-good news is that another person is tied down on the side spur of the track. Still, the decision’s easy, right? By altering the train’s direction only one life will be lost rather than five.
Trolley B
This time you’re on a footbridge overlooking the railway track. You see the train hurtling towards you and five people tied to the rails. Can they be saved? Again, the moral philosopher has arranged it so they can. There’s an obese man leaning over the footbridge. If you were to push him he would tumble over and squelch onto the track. He’s so fat that his bulk would bring the train—Trolley B—to a juddering halt. Sadly, the process would kill the fat man. But it would save the other five people. Should you shove him over?
In philosophy club, we discussed Trolley B. The majority of you said that you would push the fat man, without question. Some of you, however, had more difficulty coming up with an answer. One reason why it is hard to find a way out of this ethical dilemma could be the framing of the question itself, as Jerome pointed out. Would you make the same decision in both scenarios? Rationally, both scenarios involve killing either 1 or 5 people. Yet the idea of pushing someone to their death and the idea of pulling a lever to cause a death seem intuitively different.
In an interesting article in this month's issue of Prospect Magazine, David Edmonds and Nigel Warburton discuss the new trend of "x-phi," or experimental philosophy, and its approach to the Trolley Problem. When I was an undergraduate, this program was developing at Washington University under the moniker of the Philosophy-Neuroscience-Psychology Program.
One assumption x-philosophers are challenging is the idea that intuitions are consistent across the board:
The BBC conducted an online poll in which 65,000 people took part. Nearly four out of five agreed that Trolley A should be diverted. Only one in four thought that the fat man should be shoved over the footbridge. (Nobody has yet looked for a link with the fact that nearly one in four Britons are obese.)
Why?

Brain scans allegedly indicate that when people are confronted with Trolley A, the part of the brain linked to cognition and reasoning lights up; whereas with Trolley B, people seem to use a section linked to emotion. The few people who are prepared to use the fat man as a buffer take longer to respond than those who aren't, perhaps because they experience the emotional impulse and then reason their way out of it. Other experiments suggest people who have sustained damage to the prefrontal cortex, which is thought to generate various emotions, are far more likely than the rest of us to favour sacrificing the fat man.
The critics of experimental philosophy are many. Critics question the localizing of thought through MRI, the crudeness of the technology, and even the entire idea of experimental philosophy. Peter Singer, a strong critic of x-phi, thinks reason should supersede our uneasiness at pushing the fat man onto the tracks.
In our discussion Rob touched upon a critique that hits both sides of the x-phi argument concerning the use of hypotheticals: they are so far-fetched that they don't replicate the true experience of making the decision.
Real world trolley experiences are different from those experienced while sitting in an MRI machine being asked whether you would push the fat man or pull a lever. The experimental philosophers fall prey to skewed data.
By isolating ethical decisions from context, armchair philosophers (like Singer) ignore the emotional context of ethical dilemmas and assume that reason should supersede.
Both camps want a black and white answer when the question is gray. Can we derive an "ought" out of this dilemma?
~guybrarian
Monday, March 2, 2009
A New Defense of the Problem of Evil
by Hanno
There are two issues in the problem of evil. First, why does evil exist? Second, why does God not do anything about evil? I have no answer to the first, but I can answer the second in a way I am sure has never been used. A solution lies in David Lewis's conception of possible worlds (none of the ersatz versions of Lewis's views will work, offering perhaps another reason for some to accept Lewis's views).
Lewis holds that claims like "There are apples," "There are numbers," and "There are ways the world might have been" all make existential claims: these things exist. He calls the ways the world might have been "possible worlds" and asserts that they exist exactly like this one. These are not collections of propositions, or ideas in the mind of God, etc. Each possible world is a universe unto itself, some with people much like you and me, some without. Whenever there is a true sentence like "There might have been no people," it is true because there is a universe (space-time continuum) like this one at which there are no people (it does get more technical; see Lewis's Counterfactuals and On the Plurality of Worlds). So, too, for any true claim about what might have been.
Now the idea behind wondering why God does not do anything about evil or suffering is plain enough. If He is all good, He has a moral obligation to end evil and suffering. If He is all powerful, He has the power to end all evil and suffering, and if He is omniscient, He will know how to end all suffering and evil. If He did intervene, He would make this world a better place and reduce suffering and evil.
If Lewis is right, this world is one among an infinite number of worlds. Let us call some evil E, and our world e. The sentence "God could end E" is true at e. That means at some other world, p, God does end E. But here is the deal: for the totality of evil and suffering across possible worlds, there is no change at all. And if he acts here, but might not have, then there is someplace else where he does not act, and hence the evil we avoid here exists on another possible world. Whether God acts or does not act, the total amout of evil and suffering remains the same. Hence, it is not wrong for God to prevent E.
Now if there is no possible world in which E does not exist, then it is impossible for there not to be E. We cannot hold God responsible for not doing the impossible.
Given that He makes a world, if He makes only one, then everything at that possible world is necessary. It is the only possible world. It is then impossible for God to have done otherwise, too. God could not have made another possible world, because the existence of possible worlds is what makes anything possible.
The world exists. It contains evil. Possibilities exist. They can be more or less evil. But it is impossible for the totality of evil to be greater than or less than it is, across possible worlds.
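The fixed-total claim can be made vivid with a toy model. What follows is my own illustrative sketch, not anything in Lewis: pretend there are just three independent candidate evils and that, per Lewis's plurality, every way of distributing them is concretely realized at some world.

```python
from itertools import product

# Toy plurality: 3 independent candidate evils, and every possible
# distribution of them is realized at exactly one world (8 worlds total).
worlds = list(product([0, 1], repeat=3))

def total_evil(ws):
    """Sum of evils across the whole plurality of worlds."""
    return sum(sum(w) for w in ws)

print(total_evil(worlds))  # 12 -- fixed by the space of possibilities itself

# "God prevents evil 0 here" only changes WHICH world is actual,
# say from (1, 1, 0) to (0, 1, 0); both worlds were in the plurality
# all along, so the cross-world total is untouched.
assert (1, 1, 0) in worlds and (0, 1, 0) in worlds
print(total_evil(worlds))  # still 12
```

The point of the sketch: divine intervention relocates the actual world within the plurality; it cannot shrink the plurality, so the totality of evil across worlds stays constant.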
And don't come back with "But that is unbelievable." We are already assuming God's existence. Is that so much easier to believe than Lewis's possible worlds?
Thursday, February 26, 2009
Plantinga vs. Dennett

So, you are sitting in philosophy club on a random Monday afternoon listening to the philosophy faculty argue back and forth at an unrelenting speed and think to yourself, "Is this really what philosophers do?"
Yes.
Last week, February 21st to be exact, at the Central Division Meeting of the American Philosophical Association in Chicago, Alvin Plantinga gave a paper and found himself in an intellectual joust with Daniel Dennett.
Succinctly, Alvin Plantinga gave a paper arguing that Christianity is compatible with evolution, and Daniel Dennett responded...with gusto.
A rather biased (but humorous) account of the entire debate can be found here. The author is clearly a fan of Plantinga, concluding,
In my estimation, Plantinga won hands down because Dennett savagely mocked Plantinga rather than taking him seriously. Plantinga focused on the argument, and Dennett engaged in ridicule. It is safe to say that Dennett only made himself look bad along with those few nasty naturalists that were snickering at Plantinga.
If I am not mistaken, Plantinga once gave a talk at McNeese State University with the sponsorship of the philosophy club.
Monday, February 16, 2009
Blameworthiness
An Examination of Richard Parker’s Principle of Blameworthiness
By Todd Furman
Introduction:
In “Blame, Punishment and the Role of Result,” Richard Parker argues that the actual results of an actor’s conduct should not factor into the assessment of the blameworthiness of the actor –call this idea Parker’s Principle of Blameworthiness (PPB). In this essay, I shall explain PPB and identify the moral intuition that justifies PPB. I will then explore the consequences of accepting the moral intuition behind PPB. These consequences just might prove to be a reductio of PPB.
The Case for PPB:
Parker begins his case for PPB by considering the following sort of scenario –call it the case of the drunken marksmen. Suppose two drunk partygoers, A and B, decide to determine who the better marksman is by firing rifles out of a window at a lamppost across the street. And suppose that the contest ends in a draw inasmuch as both A and B are unable to hit the lamppost. But also suppose, unbeknownst to A and B, a bullet fired by A has ricocheted and killed an innocent bystander.
Parker then asks whether B should feel any less blameworthy than A. The answer, according to Parker, is no. And this seems right. Moreover, this judgment can be reinforced by considering a variation of the above case.
Suppose that a considerable amount of time passes before ballistic tests are able to determine whose gun fired the fatal bullet. In the meantime, should A or B believe that she is possibly less blameworthy than the other pending the results? Again, Parker’s answer –which seems intuitively correct– is no. And finally, Parker’s intuition can be driven all of the way home by supposing that the ballistics test is inconclusive such that the actual killer’s identity remains forever unknown. “If we never discover whose bullet did the fatal damage, is there some hesitancy, some cloudiness of our intuition with respect to how much blame the two … deserve? I think not.”
Parker summarizes the moral of his story as follows.
The view I am urging here is that, properly speaking, only an actor’s conduct can be blameworthy. I do not believe that it makes good sense to blame a person for the consequences that in fact flow from his conduct even if they are within the risk of that conduct.
This is not to say, however, that Parker believes intentions and the agent’s knowledge of the situation are irrelevant to determining blameworthiness. Parker writes:
The individual is blameworthy and punishable, on my view, only for the conduct itself, where conduct is construed as a combination of overt action, state of mind (including intention, knowledge, etc.) and circumstances.
And Parker emphasizes his point that intentions and knowledge are relevant to assessing blameworthiness with the following thought experiment –call it the case of the stadium shooters.
Imagine that A takes a rifle to a place overlooking a stadium where he knows an event is underway and recklessly fires the weapon in the direction of the grandstands. Let us suppose the fortunate consequence of the bullet’s striking the bleachers harmlessly after narrowly missing members of the crowd. Compare this with the situation of B, who takes his rifle to the same spot on a day when he knows there is no event scheduled and in fact believes the grandstands to be entirely empty. He too fires toward the seats but with unfortunate results: a lone custodian is present policing the stands and he is struck and killed by B’s bullet. It takes either a considerable stretch of the imagination or adherence to a bad theory, or both, to want to hold B more blameworthy than A. Truly, the harm caused by B’s conduct outweighs that caused by A’s, the latter being negligible. But it is A, on the view I am defending, who is more blameworthy and whose desert is the greater punishment.
Hence, it becomes clear that Parker wants to substitute the risk of harm for actual harm as the device for determining an actor’s level of blameworthiness and punishment. Moreover, all of Parker’s subsequent judgments seem to be intuitively right. And the import of PPB would be massive if it were incorporated into our current system of jurisprudence. To name just one of many obvious examples, if Parker is right, there should be no difference in the way in which society handles (e.g. punishes) murderers and those who have attempted murder but failed.
The Moral Intuition Behind PPB:
But what, exactly, is the moral intuition behind PPB? Rhetorically, Parker asks “on what rational grounds can we proportion punishment to the results of an actor’s conduct when those results are largely or entirely beyond the actor’s control?” And the implicit answer is that there are no rational grounds to do so given the role that luck plays in determining the actual results. That is, luck –be it good luck or bad luck– should play no role in the determination of an actor’s level of blameworthiness –call this moral intuition Parker’s moral intuition (PMI). And since luck plays a large role in determining the exact consequences of any actor’s conduct, those consequences must be excluded from the calculations of an actor’s level of blame.
The Implications of PMI:
As I indicated above, adopting PPB, or more precisely, the moral principle upon which it stands, PMI, would radically make over our system of jurisprudence. Presently, I would like to explore some further consequences of accepting PMI. These will be consequences far more radical than those already identified. In fact, these consequences may serve as good reasons for rejecting PMI.
Parker believes that blameworthiness is a function of an actor’s overt action, state of mind (including intentions and knowledge, etc.) and circumstances, but not the actual consequences of her overt act, since the actual consequences are a function of luck. But what Parker fails to realize is that the actual execution or non-execution of an overt action is also a function of luck. Hence, according to PMI, the actual overt action is not relevant to calculating an actor’s blameworthiness.
Consider the following case –call it the case of the campus shooters. Suppose that identical twins A and B have a grudge against the university that they attend. As such, they plan vengeance by climbing twin towers and simultaneously opening fire on all of the faculty, staff, and students they possibly can. Suppose that at the appointed time A and B open fire. However, on pulling the trigger for the first time A’s gun jams so that she is unable to fire even a single shot. B, on the other hand, is able to fire hundreds of rounds, wounding and killing several before authorities capture her and her sister.
On the face of it, A and B have committed different overt acts –B actually fired a gun at innocent people killing and wounding several people, while A merely tried to fire a gun at innocent people. Hence, according to PMI, an actor’s overt act should not play a role in determining blameworthiness, since whether or not an actor is able to execute an intended act is a function of luck, as the case clearly illustrates. Hence, following Parker, A and B should be ascribed the same level of blameworthiness. And this judgment seems right on target here –no pun intended.
But let me push harder on PMI and see what happens. Consider the case of the campus shooters again but with the following change. Suppose that A is never able to attempt to fire a shot since A is unable to gain access to her tower’s rooftop; her bolt cutters –identical to those of B– break on the lock securing the roof access –a lock identical to the one on B’s tower. In this case, whether the agent is able to commit a given overt act is clearly a function of luck. Hence, according to PMI, the actual overt acts should not play a role in determining A and B’s blameworthiness. Inasmuch as this is the case, there should be no difference in the way in which society handles (e.g. punishes) A and B in the above.
And even this conclusion –although it runs counter to actual practice—might seem agreeable, since the supposition is that in all of the close possible worlds in which A’s bolt cutters did not fail her, she proceeded as planned and rained down carnage similar to B’s. But the route to a reductio should be coming into focus now.
Suppose the case of the campus shooters again but with the following changes. Suppose that A doesn’t even make it to campus to begin the rampage. She is pulled over for a faulty tail-light and taken into custody based on an outstanding warrant that was issued solely as a result of a clerical error.
In this case, whether the agent is able to commit a given overt act is clearly a function of luck. Hence, according to PMI, the actual overt acts should not play a role in determining A and B’s blameworthiness. Inasmuch as this is the case, there should be no difference in the way in which society handles (e.g. punishes) A and B in the above.
But think about what is being said now. A, being as blameworthy as B, is to be punished as severely as B even though A caused no harm; even though A wasn’t even able to attempt to cause harm. It seems then that A and B’s blameworthiness is reduced to a function of their intent. And this result begins to strain believability.
It would seem that PMI taken to its extreme would justify some sort of thought police. If an actor intends some immoral/criminal offense, then there should be no difference in the way in which society handles (e.g. punishes) said actor from the way in which society handles an actor that actually attempts and/or executes the offense.
I am inclined to think that one’s moral character –one’s overall blameworthiness—is reducible to her desires, whether they are ever acted on or not. Hence, I believe that PMI is more or less right. However, I can see no practical means by which this insight –one’s moral character is reducible to her desires—could be put into practice (by mere mortals). Moreover, any attempt to institute any public policies based on this insight is bound to be soundly rejected by the body politic.
The All-Pervasiveness of Luck:
To be completely fair, Parker claimed that blameworthiness was a function of the combination of overt action, state of mind (including intention and knowledge) and circumstances. And I have only shown that overt actions should be excluded from the matrix of calculating blameworthiness given PMI. And from this I reduced blameworthiness to a function of intention or desires. That is, I have neglected the roles that knowledge and circumstances play in calculating blameworthiness.
I believe, however, it does not take much imagination to concoct cases in which an actor’s knowledge and circumstances are clearly a function of luck. In this case, given PMI, they should be excluded from the matrix of calculating an actor’s blameworthiness. In the end, then, blameworthiness would reduce to a function of intent or desires.
The problem, however, is that with a bit more imagination I believe one can construct a case in which an actor’s desires are clearly a function of luck. Given this fact, and PMI, an actor’s desires must be excluded from the matrix of calculating her blameworthiness. In itself, this result is no big deal. But the overall situation arrived at is a big deal. Namely, there is nothing left by which an actor’s blameworthiness may be calculated, and this cannot be right.
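The structure of this successive-exclusion argument can be put schematically. The following is my own toy rendering of the essay’s reasoning; the factor names and case labels are just illustrative tags, not Parker’s or Furman’s formal apparatus.

```python
# Parker's ingredients for blameworthiness, plus the desires discussed above.
factors = ["overt action", "knowledge", "circumstances", "desires"]

# Each case in the essay exhibits a factor as a function of luck.
shown_luck_dependent = {
    "overt action": "campus shooters: jammed gun, broken bolt cutters",
    "knowledge": "cases not taking much imagination to concoct",
    "circumstances": "cases not taking much imagination to concoct",
    "desires": "a case constructible with a bit more imagination",
}

# PMI: exclude every luck-dependent factor from the calculation.
remaining = [f for f in factors if f not in shown_luck_dependent]
print(remaining)  # [] -- nothing left to ground blameworthiness: the reductio
```

Once every entry in the matrix is luck-dependent, PMI leaves the calculation with an empty basis, which is exactly the unacceptable conclusion drawn below.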
Conclusion:
Given the unacceptable conclusion that I have reached using PMI –that there are no grounds by which an actor may be judged blameworthy– I believe PMI must be re-evaluated to determine whether it remains accurate as is. Until then, Parker’s thesis remains dubious. My hunch is that this re-evaluation might profit from an analysis of luck.
Sources:
Feinberg, Joel and Gross, Hyman. (1991) Philosophy of Law (Fourth Edition). Wadsworth Publishing Company, Belmont, California.
Parker, Richard. (1984a) “Blame, Punishment, and the Role of Result,” in Philosophy of Law (Fourth Edition) edited by Joel Feinberg and Hyman Gross. © 1991 by Wadsworth Publishing Company, Belmont, California.
Parker, Richard. (1984b) “Blame, Punishment, and the Role of Result,” in American Philosophical Quarterly, Vol. 21, no. 3 (1984), pp. 1-11.
Wednesday, February 11, 2009
Filmosophy Series: The Big Lebowski
Dude-ist Philosophy: Laziness as a Personal Ethos [poster]
When: Friday, February 20th, 1:00 - 4:00 p.m.
Where: Hardtner Hall, room 128
Who: Joshua Finnell (guybrarian)
Why: "Because sometimes you eat the bar and sometimes the bar eats you."
Monday, February 9, 2009
The One
by Hanno
The uniqueness of human beings is an issue at least as old as Aristotle and has at least two components: First, Aristotle makes our unique status the primary way of understanding both our purpose and our goodness. The Greeks thought that everything has a purpose, and that the purpose of anything had to be unique. Following that, if we think that we are not unique, we would start to think there is no purpose in our life, no function we are supposed to fulfill. And since the good knife is one that fulfills its function well, the good person is one who fulfills our purpose well. If we have no purpose, what happens to our notion of living well?
What, then, is unique to humans? What is our purpose? Many people have thought of different answers, and for a long time, it always struck me as odd to even ask. Some people point to our thumbs as being unique, some to the creation of culture (non-biologically driven patterns of learned behavior), some to language, some to thought, some to reason. Each of these, 'cept the thumb, has been shown not to be unique, and a thumb is not much to hang your hat on. OK, you could hang an actual hat on a thumb, but not a metaphorical one.
The reason this always struck me as odd is simple: it's rather obvious that we are unique, that even if a chimp can learn language and reason, we are not the same as a chimp. In other words, our definition in terms of these features is so inadequate that it seems silly to ask: what makes us human? And without the Aristotelian background, the importance of the question escaped me. So what if monkeys can speak, or reason, or have a culture, or if we discover some other species with a thumb? Why would that affect our conception of ourselves? Why would that threaten our conception of ourselves as unique? What rides on determining unique features of the human being?
Now when we discover that some feature we thought was unique turns out to be shared (the bonobo is able to grasp language at a high level, chimps are able to reason, some chimps have a culture, some chimps use tools, etc.), the obvious response is mere passing interest. "I thought that feature was unique... oh well, I guess it isn't." I have had that reaction myself and I see it in others. So where would existential angst come from?
Second (there has to be a second, there was a first... forgot? first paragraph), moral notions are limited (historically, if not philosophically) to people. Nor is this just a Western idea. So, for example, the Comanche called themselves "The people" (Nermernuh). Everyone else is not. If you are of the people, you are protected by the people. There was, apparently, almost no violence within the tribe, or against anyone who culturally acted like a tribe member. However, anyone outside the tribe was not similarly protected. They may trade with you, or they may kill you; that choice is up to an individual Comanche, and simply not part of their ethical framework, not subject to judgment. Other people's moral status was like any other piece of nature, sometimes to be preserved, sometimes to be used and sometimes to be abused. It has been argued (I think correctly) that the whole Ten Commandments were originally understood in the same way: "Thou shalt not kill" really meant "Thou shalt not kill a fellow Jew." It, too, was a tribal notion.
The question then of what a human being is connects to our conception of morality: humans are beings to which we have a moral duty, while non-humans are not protected by moral codes. It is also easy to see how correctly defining the human in terms that shape our moral attitudes (reason, not thumbs) is one way of intuitively expanding the class of beings with moral rights. "I know those things do not act like us, but they really are human, and hence we have moral duties towards them." "I know we do not seem to be human to you, but we have this uniquely human feature, too, so you should treat us as moral agents." Historically, when we have broadened our notion of the human, we bring more people into society, and start acting better. A good definition of a human, then, has been of great importance. It is then easy to see that if we are not so important, not so unique, nature gets raised by default. Many people who do not see humans as unique see us as part of nature, thus raising the moral status of nature. We call them "environmentalists."
So now we can see why much of the artificial intelligence science fiction asks whether or not computers that develop consciousness are moral agents. Early in Star Trek: The Next Generation, we see a trial to determine whether or not Data, a computer, is a moral agent. Is he an officer in the Federation, or is he like any other computer, to be used by its owner as its owner sees fit?
I think our angst about thinking computers is not existential, but about control. It is the worry of Mickey Mouse in Fantasia, HAL in 2001, and Cyberdyne Systems Model 101 Terminator.
Saturday, February 7, 2009
Sarah Connor: "Look... I am not stupid, you know. They cannot make things like that yet."

This weekend, I finally finished reading James Trefil's book entitled Are We Unique? This is a very accessible, introductory text for anyone interested in the study of human consciousness. You can even find it in the Frazar library (BF 444 t74 1997). Trefil is the Robinson Professor of Physics at George Mason University and has appeared on NPR several times over the years. He also has a mustache like Don Mattingly.
As a scientist, Trefil chooses consciousness and intelligence as the defining characteristics that make human beings distinct from both animals and computers. A great majority of the book is dedicated to the idea of computers nearing anything close to consciousness or artificial intelligence. Needless to say, Searle and the Chinese Room example are written about extensively. Trefil ends up contextualizing his own theory of consciousness within a larger general theory of complexity, defining consciousness as an emergent property of neuronal complexity. Like a philosopher, he ultimately leaves the question of artificial intelligence open (but his tone is more than skeptical).
After I finished reading the book, I couldn't help but wonder why human beings are so threatened by the idea of artificial intelligence. What is so sacred about intelligence as opposed to any other ability that human beings have? We have built machines that can "run" faster (cars) and are stronger (forklifts) than human beings, yet we don't feel threatened by these machines. However, when Deep Blue beats Kasparov in a chess match, we start to question our own uniqueness.
Is it evolution anxiety? Do human beings simply fear becoming a placeholder between animals and intelligent machines? Supposing we could create a conscious machine, would human beings cease being unique? After all, Australian orchids are not conscious entities but are still rather unique.
Or is it just a control issue as opposed to an issue of uniqueness? James Cameron seems to think so. We build the machines and the machines eventually become more powerful and kill us (see The Terminator, Terminator 2: Judgment Day, Terminator 3: Rise of the Machines, and the forthcoming Terminator Salvation).
What do you think?
~guybrarian
Monday, February 2, 2009
Lecture on Buddhism
by MAB
Just as an FYI, I will be giving a lecture on Buddhism (history, belief, and practice) this Wednesday evening. New Ranch Gallery Room, 7-8 PM. Come one, come all.
Do Vampires have Rights?
By Hanno
It is quite common now to understand vampires and zombies in terms of some disease which takes over the body and re-animates it after death. Movies like "Night of the Living Dead" and all of its offspring, including "28 Days Later," as well as books like "World War Z," use this theme. This is not the way vampires were always understood, however. The folklore of vampires is much older than Bram Stoker's Dracula, and they were widely regarded as real creatures in many parts of the world. But instead of disease, the vampire was either a person possessed by a malevolent spirit, or a ghost-like specter. It is only in the late 1800's that the germ theory of disease gains prominence, so the ground for changing our understanding of vampires was not set until then. (By the way, I love the irony of speaking scientifically about fictional entities, and using science to discover which of the myths surrounding vampires are factual, and which are purely mythical, something every vampire book has done since "I Am Legend" first did it in 1954.)
For Matheson, there are three kinds of vampires. There are the newly diseased who will eventually die and turn into the undead variety. On the way to this disturbing end, many go mad, as they realize what they have become: flesh eating creatures that would eat their own loved ones if they could. Matheson explains the anti-social, hardly human variety of vampire in that way: they have gone mad. But it is possible not to go mad, or to come out of madness, and still not be undead. These are people simply with a disease that, if untreated, will kill them and turn them into the undead, and the disease will make them yearn, desire, require the blood of a living thing, preferably human, preferably undiseased human, to keep living. The bacteria at the root of vampirism needs blood to survive and prosper. At the death of the human, the corpse reanimates into a being properly called the vampire. The corpse does not breathe, its heart does not beat. This being seems quite rational, remembers events and people from its living days, plans ahead, and interacts socially with other creatures like herself. For example, knowing that Neville is all alone, the women vampires dance seductively, stripping, etc., in an effort to lure Neville out of his home so that they can eat him.
Now as we saw in the last post, Neville has turned himself into a scientist. He discovers many of the things I just described through experiments. Early experiments include dragging a female into the sunlight to see if light really does damage the vampire. Answer: yes, as he watches the still living female scream, wither, and die in the sun. He collects some blood from another to see if he can find the root cause of vampirism. Answer: yes, a bacteria he can see and for which he can test. He also experiments on the blood to see if the ingredients in garlic are toxic to the vampire. Answer: no, it seems to be an allergic reaction. In short, without the approval of the subject, without any desire for the good of the subject, Neville performs scientific experiments upon his subjects in an effort to know and understand. The pursuit of knowledge for the sake of knowledge.
Later, he checks the blood of a woman who does not want him to, to see if she is a vampire. He does not do this for her sake, but for his. Here we have experiments performed not for the sake of science, nor for the sake of the victim, but for the sake of the experimenter.
Now it turns out that a section of humanity has managed to survive, but with the disease. They develop a medicine to keep it in check, so they do not die of the disease, and as humans with a disease, they function socially just fine. They create a new society. Now suppose Neville found a cure. What if the new vampire-humans do not want to be cured? Is it right for him to force them to be cured against their own will?
I want to point to a few features of current medical ethics in order to put these points into perspective. First, remember, "I Am Legend" was written in the early '50's. Students of medical ethics are well acquainted with the Tuskegee Syphilis Study of the 1930's. Here, the question was: what is the natural progression of syphilis in an African American? So they recruited poor black folks under the guise of treating syphilis, paid for by the Federal Government, when in fact they were given no medication, and watched for years. A few years later, when a cure for syphilis was discovered, they were still kept in the dark, and watched for almost three decades. When the first people started to complain about the ethics of the study, the people in charge of the study reacted angrily, saying the complaints would ruin its results. In other words, in a common attitude towards scientific study, the pursuit of knowledge for knowledge's sake trumped any concern about the ethical treatment of the patients/subjects/victims. There are many, many examples like this, but perhaps not quite as egregious.
Neville's attitude towards science and medical ethics fits the 1950's. But our intuitions differ. We hold that you must keep the welfare of the subject in mind first and foremost, and we hold that you must have the approval of the subject, who must be made aware of any problems that may occur. Neville does none of that.
Do vampires, as depicted in the book "I Am Legend," have rights? Is it immoral to treat them as Neville does?
(Post is too long, I know.)
Thursday, January 29, 2009
Metaphysics or metaphysics?
How to fail metaphysics at the University of Hawaii (link is a PDF document).
Monday, January 26, 2009
I am Scientist
by Hanno
I am doing some work preparing for a talk I will give in February up at Gettysburg College on the ethics of vampire slaying. There are, however, some peripheral issues that arise in the book "I Am Legend," not to be confused with the Hollywood trash by the same name.
In the book, Robert Neville is not a scientist originally, but a normal factory worker who finds himself in LA (Compton), all alone with undead vampires trying to kill him for his blood at night. In the beginning, he spends his days slaughtering vampires the old fashioned way, with a stake through the heart. During the day, they are easy pickings; he dispatches 47 in just one morning. He does discover that some of the vampires are not quite dead yet. They breathe, for example, but are in a coma-like state that all vampires endure for daylight hours. But there are also undead vampires, that do not breathe at all. Some of the live vampires behave oddly, as if they are dazed and confused, hardly rational beings at all, but the undead vampires show high levels of rational behavior. Neville's best friend, for example, turned into an undead vampire, and taunts him each night, calling his name, inviting him to join the forces of the night. Female vampires undress to lure the accidental celibate out of his house for a sexual romp, one sure never to be consummated as he would be eaten well before the fun would start. Well, his fun, at any rate.
He has nothing to live for. No hope, no one else, no thing else. He drinks heavily, and tries to avoid thinking. He manages to survive for a year. Then something happens, and he starts to wonder just why vampires do not like garlic. Is it the smell? Is garlic toxic to vampires? This question becomes a series of others. And from here, Neville becomes a scientist. Answering the question "Why?" gives his life meaning. Now Aristotle said long ago that humans have a desire to understand. This desire is at the heart of the philosophical project, and science is an outgrowth, both historically and philosophically, of that desire. Few think of science as a source of existential wisdom, and one may wonder how realistic that may be. Can the desire to understand, by itself, give life meaning? Or would most of us (all of us) choose something darker as an alternative?
Neville then simply goes to the library and reads about biology. He becomes, through his studies, a biologist. Is this a '50's view of science? Was it true then? Now? He finds a lab and a working microscope, trains himself to make slides, and prepares himself to do real work: find the truth about vampires. I think this makes far more sense in 1950 than it does today. Biology as a field of study, like all the sciences, has advanced so far that only highly specialized and trained people can actively do research. I think this is why the movies make the sole survivor already a master scientist. The idea of someone just becoming a biologist in this day and age is apparently more far-fetched than the idea that a disease can cause the dead to rise back up and live off the flesh of the living.
Next: the ethics of experimentation on vampires
Thursday, January 22, 2009