Learning Through Simulation
Mental simulation — such as imagining tilting a glass to figure out the angle at which water would spill — can be a way of coming to know the answer to an internally or externally posed query. Is this form of learning a species of inference or a form of observation? We argue that it is neither: learning through simulation is a genuinely distinct form of learning. On our account, simulation can provide knowledge of the answer to a query even when the basis for that answer is opaque to the learner. Moreover, through repeated simulation, the learner can reduce this opacity, supporting self-training and the acquisition of more accurate models of the world. Simulation is thus an essential part of the story of how creatures like us become effective learners and knowers.
Sara Aronowitz; Tania Lombrozo
Binding, Compositionality, and Semantic Values
In this paper, we defend a traditional approach to semantics, which holds that the outputs of compositional semantics are propositional, i.e. truth conditions (or anything else appropriate to be the objects of assertions or the contents of attitudes). Though traditional, this view has been challenged on a number of fronts over the years. Since the classic work of Lewis, arguments have been offered which purport to show that semantic composition requires values that are relativized, e.g. to times, or to other parameters that render them no longer propositional. Focusing on recent variants of these arguments involving quantification and binding, we argue that a correct understanding of how composition works gives no reason to relativize semantic values, and that propositional semantic values are in fact the preferred option. We take our argument to be mainly empirical, but along the way, we defend some more general theses. Simple propositional semantic values are viable in composition, we maintain, because composition is itself a complex phenomenon, involving multiple modes of composition. Furthermore, some composition principles make adjustments to the meanings of constituents in the course of composition. These adjustments are triggered by syntactic environments. We argue that such small contributions of meaning from syntactic structure are acceptable.
Michael Glanzberg; Jeffrey C. King
The Principle of Stability
How can inferences from models to the phenomena they represent be justified when those models represent only imperfectly? Pierre Duhem considered just this problem, arguing that inferences from mathematical models of phenomena to real physical applications must also be demonstrated to be approximately correct when the assumptions of the model are only approximately true. Despite being little discussed among philosophers, this challenge was taken up (if only sometimes implicitly) by mathematicians and physicists both contemporaneous with and subsequent to Duhem, yielding a novel and rich mathematical theory of stability with epistemological consequences.
Samuel C. Fletcher
On the Open-Endedness of Logical Space
Modal logicism is the view that a metaphysical possibility is just a non-absurd way for the world to be. I argue that modal logicists should see metaphysical possibility as "open-ended": any given possibilities can be used to characterize further possibilities. I then develop a formal framework for modal languages that is a good fit for the modal logicist and show that it delivers some attractive results.
Agustín Rayo
The General Theory of Second Best Is More General Than You Think
Lipsey and Lancaster's "general theory of second best" is widely thought to have significant implications for applied theorizing about the institutions and policies that most effectively implement abstract normative principles. It is also widely thought to have little significance for theorizing about which abstract normative principles we ought to implement. Contrary to this conventional wisdom, I show how the second-best theorem can be extended to myriad domains beyond applied normative theorizing, and in particular to more abstract theorizing about the normative principles we should aim to implement. I start by separating the mathematical model used to prove the second-best theorem from its familiar economic interpretation. I then develop an alternative normative-theoretic interpretation of the model, which yields a novel second-best theorem for idealistic normative theory. My method for developing this interpretation provides a template for developing additional interpretations that can extend the reach of the second-best theorem beyond normative theoretical domains. I also show how, within any domain, the implications of the second-best theorem are more specific than is typically thought. I conclude with some brief remarks on the value of mathematical models for conceptual exploration.
David Wiens
Ability and Possibility
According to the classical quantificational analysis of modals, an agent has the ability to perform an act iff (roughly) relevant facts about the agent and her environment are compatible with her performing the act. The analysis faces a number of problems, many of which can be traced to the fact that it takes even accidental performance of an act as proof of the relevant ability. I argue that ability statements are systematically ambiguous: on one reading, accidental performance really is enough; on another, more is required. The stronger notion of ability plays a central role in normative contexts. Both readings, I argue, can be captured within the classical quantificational framework, provided we allow conversational context to impose restrictions not just on the “accessible worlds” (the facts that are held fixed), but also on what counts as a performance of the relevant act among these worlds.
Wolfgang Schwarz
Justifying Standing to Give Reasons: Hypocrisy, Minding Your Own Business, and Knowing One's Place
What justifies practices of “standing”? Numerous everyday practices exhibit the normativity of standing: forbidding certain interventions and permitting ignoring them. The normativity of standing is grounded in facts about the person intervening and not on the validity of her intervention. When valid, directives are reasons to do as directed. When interventions take the form of directives, standing practices may permit excluding those directives from one’s practical deliberations, regardless of their validity or normative weight. Standing practices are, therefore, puzzling – forbidding giving (genuine) reasons and, if given, permitting disregarding such reasons. What justifies standing practices are the values that they protect, including privacy, autonomy, independence, valuable relationships, and equal respect. These values count in favor of standing’s duty against certain interventions and, when these duties of non-intervention are breached, the values underpinning those duties count in favor of standing’s permission to discount or exclude those interventions from one’s practical deliberations – the normative weight of those interventions notwithstanding.
Ori J. Herstein
New Work For Certainty
This paper argues that we should assign certainty a central place in epistemology. While epistemic certainty played an important role in the history of epistemology, recent epistemology has tended to dismiss certainty as an unattainable ideal, focusing its attention on knowledge instead. I argue that this is a mistake. Attending to certainty attributions in the wild suggests that much of our everyday knowledge qualifies, in appropriate contexts, as certain. After developing a semantics for certainty ascriptions, I put certainty to explanatory work. Specifically, I argue that by taking certainty as our central epistemic notion, we can shed light on a variety of important topics, including evidence and evidential probability, epistemic modals, and the normative constraints on credence and assertion.
Bob Beddor
Beyond Binary: Genderqueer as Critical Gender Kind
We want to know what gender is. But metaphysical approaches to this question have focused on the binary gender kinds men and women. By overlooking those who identify outside of the binary (the group I call 'genderqueer'), we are left without tools for understanding these new and quickly growing gender identifications. This metaphysical gap in turn creates a conceptual lacuna that contributes to systematic misunderstanding of genderqueer persons. In this paper, I argue that to better understand genderqueer identities, we must recognize a new type of gender kind: critical gender kinds, or kinds whose members collectively destabilize one or more pieces of dominant gender ideology. After developing a model of critical gender kinds, I suggest that genderqueer is best modeled as a critical gender kind that destabilizes the 'binary axis', or the piece of dominant gender ideology that says that the only possible genders are the binary, discrete, exclusive, and exhaustive kinds men and women.
Robin Dembroff
Avicenna's Emanated Abstraction
One of the largest ongoing debates in scholarship on Avicenna (Ibn Sīnā) concerns his epistemology of the first acquisition of intelligible forms or concepts. “Emanationists” hold that intelligibles are emanated by the separate Active Intellect (AI) directly into human minds. “Abstractionists” hold that intelligibles are abstracted by the human intellect from sensory images. Neither of these positions has a satisfactory grip on Avicenna’s philosophy. I propose that the two positions can be reconciled because Avicenna states in many texts that what the AI emanates is a power, and not the various intelligible forms. I argue that this can only be the power of abstraction itself. This new interpretation does greater justice to Avicenna’s system and reveals his unique place in the history of epistemology.
Stephen R. Ogden
Belief In Psyontology
Neither full belief nor partial belief is more fundamental, ontologically speaking. A survey of some relevant cognitive psychology supports a dualist ontology instead. Beliefs come in two kinds, categorical and graded, neither more fundamental than the other. In particular, the graded kind is no more fundamental. When we discuss belief in on/off terms, we are not speaking coarsely or informally about states that are ultimately credal.
Jonathan Weisberg
Could've Thought Otherwise
Evidence is univocal, not equivocal. Its implications don't depend on our beliefs or values; the evidence says what it says. But that doesn't mean there's no room for rational disagreement between people with the same evidence. Evaluating evidence is a lot like polling an electorate: getting an accurate reading requires a bit of luck, and even the best pollsters are bound to get slightly different results. So, even though evidence is univocal, rationality's requirements are not "unique." Understanding this resolves several puzzles to do with uniqueness and disagreement.
Jonathan Weisberg
How to Explain How-Possibly
Explaining how something is possible is a familiar and epistemically important achievement in both science and ordinary life. But a satisfactory general account of how-possibly explanation has not yet been given. A crucial desideratum for a successful account is that it must differentiate a demonstration that something is possible from an explanation of how it is possible. In this paper, I offer an account of how-possibly explanation that fully captures this distinction. I motivate my account using two cases, one from ordinary life and one from ornithology. On my account, a how-possibly explanation is a greater achievement than a mere description of how a state of affairs might possibly obtain. In addition to being a potential explanation of why some state of affairs actually obtains, a how-possibly explanation must involve the relief of an imaginative frustration on the part of its recipient. When a recipient’s imaginative frustration is relieved, she does not just know that the state of affairs in question is possible, but is also able to imagine how it could possibly obtain.
Lindsay Brainard
The Arts of Action
The theory and culture of the arts have largely focused on the arts of objects, and neglected the arts of action – or, as I call them, the "process arts". In the process arts, artists create artifacts to engender activity in their audience, for the sake of the audience's aesthetic appreciation of their own activity. This includes appreciating their own deliberations, choices, reactions, and movements. The process arts include games, urban planning, improvised social dance, cooking, and social food rituals. In the traditional object arts, the central aesthetic properties occur in the artistic artifact itself. It is the painting that is beautiful; the novel that is dramatic. In the process arts, the aesthetic properties occur in the activity of the appreciator. It is the game player's own decisions that are elegant, the rock climber's own movement that is graceful, and the tango dancers' rapport that is beautiful. The artifact's role is to call forth and shape that activity, guiding it along aesthetic lines. I offer a theory of the process arts. Crucially, we must distinguish between the designed artifact and the prescribed focus of aesthetic appreciation. In the object arts, these are one and the same. The designed artifact is the painting, which is also the prescribed focus of appreciation. In the process arts, they are different. The designed artifact is the game, but the appreciator is prescribed to appreciate their own activity in playing the game. Next, I address the complex question of who the artist really is in a piece of process art – the designer or the active appreciator? Finally, I diagnose the lowly status of the process arts.
C. Thi Nguyen
Austerity and Illusion
Many contemporary theorists charge that naïve realists are incapable of accounting for illusions. Various sophisticated proposals have been ventured to meet this charge. Here, we take a different approach and dispute whether the naïve realist owes any distinctive account of illusion. To this end, we begin with a simple, naïve account of veridical perception. We then examine the case that this account cannot be extended to illusions. By reconstructing an explicit version of this argument, we show that it depends critically on the contention that perceptual experience is diaphanous, or more minimally and precisely, that there can be no difference in phenomenal properties between two experiences without a difference in the scenes presented in those experiences. Finding no good reason to accept this claim, we develop and defend a simple, naïve account of both veridical perception and illusion, here dubbed Simple, Austere Naïve Realism.
Craig French; Ian Phillips
Grace and Alienation
According to an attractive conception of love as attention, discussed by Iris Murdoch, one strives to see one's beloved accurately and justly. A puzzle for understanding how to love another in this way emerges in cases where more accurate and just perception of the beloved only reveals his flaws and vices, and where the beloved, in awareness of this, strives to escape the gaze of others – including, or perhaps especially, of his loved ones. Though less attentive forms of love may be able to render one's continued love coherent and justifiable in these cases, they risk further alienating the beloved precisely because they are less attentive and because of the operations of the beloved's shame. I argue that attentive love is well-suited to alleviate this problem of alienation, but that in order to do so, it must be supplemented with grace. I propose a conception of gracious love as an affectionate love for the qualities of human nature, distinguishing this from a love of humanity, and show how this complex emotion, in being responsive to the complexities of shame, is able to alleviate the problem of alienation.
Vida Yao
Is the Attention Economy Noxious?
A growing amount of media is paid for by its consumers through their very consumption of it. Typically, this new media is web-based and paid for by advertising. It includes the services offered by Facebook, Instagram, Snapchat, and YouTube. We offer an ethical assessment of the attention economy, the market where attention is exchanged for new media. We argue that the assessment has ethical implications for how the attention economy should be regulated. To conduct the assessment, we employ two heuristics for evaluating markets. One is the “harm” criterion, which relates to whether the market tends to engender extremely harmful outcomes for individuals or society as a whole. The other is the “agency” criterion, which relates not to the outcomes of the market, but rather, to whether it somehow reflects or has its source in weakened agency. We argue that the attention economy animates concerns with respect to both criteria and that new media should be subject to the same sort of regulation as other harmful, addictive products.
Clinton Castro; Adam K. Pham
Stoic Virtue: A Contemporary Interpretation
The Stoic understanding of virtue is often taken to be a non-starter. Many of the Stoic claims about virtue – that virtue requires moral perfection and that all who are not fully virtuous are vicious – are thought to be completely out of step with our commonsense notion of virtue, making the Stoic account more of an historical oddity than a seriously defended view. Despite many voices to the contrary, I argue that there is a way of making sense of these Stoic claims. Recent work in linguistics has shown that there is a distinction between relative and absolute gradable adjectives, with the absolute variety only applying to perfect exemplars. In this paper, I show that taking virtue terms to be absolute gradable adjectives – and thus that they apply only to those who are fully virtuous – is one way to make sense of the Stoic view. I also show how interpreting virtue-theoretic adjectives as absolute gradable adjectives makes it possible to defend Stoicism against its most common objections, demonstrating how the Stoic account of virtue might once again be a player in the contemporary landscape of virtue theorizing.
Robert Weston Siscoe
Clear and distinct perception is the centerpiece of Descartes’s philosophy — it is the source of all certainty — but what does he mean by ‘clear’ and ‘distinct’? According to the prevailing approach, what it means for a perception to be clear is that its content has a certain objective property, like truth. I argue instead that clarity is a subjective, phenomenal quality whereby a content is presented as true to the perceiving subject. In the special case of completely clear intellectual perception, what is presented as true must be true. Further, I argue that the other perceptual qualities that Descartes identifies — obscurity, confusion, and distinctness — are all defined in terms of clarity. Of particular note is the fact that distinctness is not a positive feature to be added to clarity: a distinct perception is just a completely clear perception.
Elliot Samuel Paul
Absolute Prohibitions Under Risk
Absolutism is the view that some actions are forbidden, no matter how much good they could bring about. An absolutist would forbid intentionally killing an innocent person, no matter how many other innocents you could save by doing so. Jackson & Smith (2006), Huemer (2010), and Isaacs (2014) argue that absolutism has special problems handling cases where it is not certain whether your actions violate the prohibition. I show that there is no special problem for absolutism. Absolutists can handle risky cases in a principled way. First, absolutists need to identify a point of moral certainty: if an action is sufficiently likely to violate an absolute prohibition, it can be treated as if it will. Second, they need to identify a point of hope: if an action is sufficiently unlikely to violate a prohibition, it can be treated as if it won't. From there, absolutists can generate an expected value function. Whatever problems absolutism has, they do not arise from risk. I prove three theorems concerning the existence and uniqueness of expected value representations for absolutism. I also defend the theory against the objections that it violates agglomeration principles, fails in the long run, and sacrifices the motivations for absolutism.
D Black
There has recently been a flurry of activity in the philosophy of language on how to best account for the unique features of epithets. One of these features is that epithets can be appropriated (that is, the offense-grounding potential of a term can be removed). We argue that attempts to appropriate an epithet fundamentally involve a violation of language-governing rules. We suggest that the other conditions that make something an attempt at appropriation are the same conditions that characterize acts of civil disobedience. Accounting for attempts at appropriation is thus both a linguistic and socio-political endeavor. We demonstrate how these two facets of attempts at appropriation also help us understand the communicative features of civil disobedience.
David Miguel Gray; Benjamin Lennertz
Against Conventional Wisdom
Conventional wisdom has it that truth is always evaluated using our actual linguistic conventions, even when considering counterfactual scenarios in which different conventions are adopted. This principle has been invoked in a number of philosophical arguments, including Kripke’s defense of the necessity of identity and Lewy’s objection to modal conventionalism. But it is false. It fails in the presence of what Einheuser (2006) calls c-monsters, or convention-shifting expressions (on analogy with Kaplan’s monsters, or context-shifting expressions). We show that c-monsters naturally arise in contexts where speakers entertain alternative conventions, such as metalinguistic negotiations. We develop an expressivist theory — inspired by Barker (2002) and MacFarlane (2016) on vague predications and Einheuser (2006) on counterconventionals — to model these shifts in convention. Using this framework, we reassess the philosophical arguments that invoked the conventional wisdom.
Alexander W. Kocurek; Ethan Jerzak; Rachel Etta Rudolph
Directed Duties and Moral Repair
Many moral duties are directed: if J promises S that J will phi, then J owes it to S to phi. What does directedness add to a duty? One way to answer this question is by understanding the practical difference made by directedness, and the importance of acknowledging that difference. What practical difference does it make that a duty is directed? If J owes it to S to phi then S has special standing in our practice of accountability and moral repair. In particular, S is the proper recipient of apology and redress, and S has the power to forgive J. This is a more illuminating version of the common suggestions that S has special standing to blame J for not phiing, or to demand that J phi, or to claim J’s phiing. Why then does directedness matter? A practice of accountability that gives special standing to S makes available a distinctive form of recognition that comes as close as is possible to repairing the original wrongdoing. Without directed duties, we would stand to lose this form of moral repair, and to lose sight of the interest that human beings have in recognition. The interest in recognition can itself be vindicated in Strawsonian fashion. Also, recognition is a component of respect, and so we can make sense of Feinberg's claim that there is a connection between respect and directedness.
Julian Jonker
Deepfakes and the Epistemic Backstop
Deepfake technology uses machine learning to fabricate video and audio recordings that represent actual people doing and saying things they've never done. In coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of the ever-present possibility of recordings of the events they testify about. As deepfakes erode the epistemic value of recordings, we may then face an even more consequential challenge to the reliability of our testimonial practices.
Regina Rini
The Value of Thinking and the Normativity of Logic
I. This paper is about how to build an account of the normativity of logic around the claim that logic is constitutive of thinking. I take the claim that logic is constitutive of thinking to mean that representational activity must tend to conform to logic to count as thinking. II. I develop a natural line of thought about how to develop the constitutive position into an account of logical normativity by drawing on constitutivism in metaethics. III. I argue that, while this line of thought provides some insights, it is importantly incomplete, as it is unable to explain why we should think. I consider two attempts at rescuing the line of thought. The first, unsuccessful response is that it is self-defeating to ask why we ought to think. The second response is that we need to think. But this response secures normativity only if thinking has some connection to human flourishing. IV. I argue that thinking is necessary for human flourishing. Logic is normative because it is constitutive of this good. V. I show that the resulting account deals nicely with problems that vex other accounts of logical normativity.
Manish Oza
"I Am the Original of All Objects": Apperception and the Substantial Subject
Kant’s conception of the centrality of intellectual self-consciousness, or "pure apperception", for scientific knowledge of nature is well known, if still obscure. Here I argue that, for Kant, at least one central role for such self-consciousness lies in the acquisition of the content of concepts central to metaphysical theorizing. I focus on one important concept, that of <substance>. I argue that, for Kant, the representational content of the concept <substance> depends not just on the capacity for apperception, but on the actual intellectual awareness of oneself in such apperception. I then defend this interpretation from a variety of objections.
Colin McLear
A Direction Effect on Taste Predicates
The recent literature abounds with accounts of the semantics and pragmatics of so-called predicates of personal taste, i.e. predicates whose application is, in some sense or other, a subjective matter. Relativism and contextualism are the major types of theories. One crucial difference between these theories concerns how we should assess previous taste claims. Relativism predicts that we should assess them in the light of the taste standard governing the context of assessment. Contextualism predicts that we should assess them in the light of the taste standard governing the context of use. We show in a range of experiments that neither prediction is correct. People have no clear preferences either way and which taste standard they choose in evaluating a previous taste claim crucially depends on whether they start out with a favorable attitude towards the object in question and then come to have an unfavorable attitude or vice versa. We suggest an account of the data in terms of what we call hybrid relativism.
Alexander Dinges; Julia Zakkou
Reasoning, Defeasibility, and the Taking Condition
Reasoning is a way of forming or revising attitudes such as beliefs and intentions. But what sets reasoning apart from other ways of forming or revising attitudes? According to the Taking Condition, an agent’s response does not count as an instance of reasoning unless the agent takes it that her circumstances warrant that response. While initially attractive to many, the Taking Condition has also faced a lot of criticism in the literature. This paper suggests a novel way of motivating the Taking Condition. More specifically, it argues that recognizing the pervasive defeasibility of human reasoning provides strong reasons to accept the Taking Condition.
Markos Valaris
Fictional Expectations and the Ontology of Power
What kind of thing, as it were, is power and how does it fit into our understanding of the social world? I approach this question by exploring the pragmatic character of power ascriptions, arguing that they involve fictional expectations directed at an open future. When we take an agent to be powerful, we act as if that agent had a robust capacity to make a difference to the actions of others. While this pretense can never fully live up to a social reality whose future is open, acting on such expectations helps constitute social order. Fictional expectations are thus built into the material practices that constitute power. This account, I argue, helps us make sense of some of the deep disagreements about the nature of power. I develop the account by drawing on Thomas Hobbes’s myth of an original institution of sovereign power before expanding it to other forms of power.
Torsten Menge
Can Imprecise Probabilities Be Practically Motivated?: A Challenge to the Desirability of Ambiguity Aversion
The usage of imprecise probabilities has been advocated in many domains: A number of philosophers have argued that our belief states should be “imprecise” in response to certain sorts of evidence, and imprecise probabilities have been thought to play an important role in disciplines such as artificial intelligence, climate science, and engineering. In this paper I’m interested in the question of whether the usage of imprecise probabilities can be given a practical motivation (a motivation based on practical, rather than epistemic or alethic, concerns). My aim is to challenge the central motivation for using imprecise probabilities in decision-making that has been offered in the literature: the idea that, in at least some contexts, it’s desirable to be ambiguity averse. If I succeed, this will show that we need to reconsider whether there are good reasons to use imprecise probabilities in contexts in which making good decisions is what's of primary concern.
Miriam Schoenfield
Stoic Logic and Multiple Generality
We argue that the extant evidence for Stoic logic provides all the elements required for a variable-free theory of multiple generality, including a number of remarkably modern features that straddle logic and semantics, such as the understanding of one- and two-place predicates as functions, the canonical formulation of universals as quantified conditionals, a straightforward relation between elements of propositional and first-order logic, and the roles of anaphora and rigid order in the regimented sentences that express multiply general propositions. We consider and reinterpret some ancient texts that have been neglected in the context of Stoic universal and existential propositions and offer new explanations of some puzzling features in Stoic logic. Our results confirm that Stoic logic surpasses Aristotle’s with regard to multiple generality, and are a reminder that focusing on multiple generality through the lens of Frege-inspired variable-binding quantifier theory may hamper our understanding and appreciation of pre-Fregean theories of multiple generality.
Susanne Bobzien; Simon Shogry
On Socrates' Project of Philosophical Conversion
There is a wide consensus among scholars that Plato’s Socrates is wrong to trust in reason and argument as capable of converting people to the life of philosophy. In this paper, I argue for the opposite. I show that Socrates employs a more sophisticated strategy than is typically supposed. Its key component is the use of philosophical argument not to lead an interlocutor to rationally conclude that he must change his way of life but rather to cause a certain affective experience, one that can be effective at changing his beliefs about how best to live.
Jacob Stump
Optimism About Moral Responsibility
In his classic “Freedom and Resentment,” P. F. Strawson introduces us to an optimist who believes that our moral responsibility practices are justified by their beneficial consequences. Although many see Strawson as a staunch critic of this consequentialist position, his stated view is only that there is a gap in the optimist’s story where the reactive attitudes should be. In this paper, I fill in the gap. I show how optimism can be suitably modified to reflect an appreciation of the reactive attitudes. And I argue that the ensuing position—on which our moral responsibility practices, taken as a whole, are justified both by their regulation of behavior and by their enabling of interpersonal relationships—provides us not only with a plausible justification of our moral responsibility practices, but also with a fruitful framework for evaluating potential reforms.
Jacob Barrett
Freedom of Expression and the Liberalism of Fear: A Defense of the Darker Mill
Although many recent free speech skeptics claim Millian credentials, they neglect the more pessimistic elements of Mill's account of human nature. Once we recover the darker elements of Mill's thought, American-style laissez-faire in the domain of expression looks significantly more attractive. Indeed, this paper argues that if Mill is correct about human nature, we have good reason to oppose recent proposed restrictions on expression and to embrace a legal regime that tolerates much speech that is false, obscene, demeaning, and even hateful. While philosophers are right to worry about the substantial moral costs of such regimes, we ought to attempt to address these costs in ways that do not amount to rejecting the regimes themselves.
J. P. Messina