My MOOC experiences

conversations and learning in the digital world


Morality: Objective, Emotive or Relative? #introphil

We all live with some sense of what is good or bad, some feelings about which ways of conducting ourselves are better or worse. But what is the status of these moral beliefs, senses, or feelings? Should we think of them as reflecting hard, objective facts about our world, of the sort that scientists could uncover and study? Or should we think of moral judgements as mere expressions of personal or cultural preferences? This week we’ll survey some of the different options that are available when we’re thinking about these issues, and the problems and prospects for each.

Empirical judgements are based on scientific testing or practical experiences. They are derived from experiment and observation rather than theory.

Examples of empirical judgements:

  • The earth and other planets revolve around the Sun.
  • There are + and – electrical charges.
  • Some traits in plants are genetically inherited.
  • The so-called “God” particle is real.
  • The sky is blue.
  • The book is on the desk.

Moral judgements are evaluations or opinions formed as to whether some action or inaction, intention, motive, character trait, or a person as a whole is (more or less) good or bad, as measured against some standard of the good.

In short, moral judgements evaluate someone or something as good or bad, right or wrong.

Examples of moral judgements:

  • Giving to charity is morally good.
  • Taking care of your children is morally required.
  • Protesting injustice is morally right.
  • Cain killing Abel out of jealousy was morally wrong.
  • Oedipus sleeping with his mother was morally bad.
  • Genocide is morally abhorrent.
  • Polygamy is morally dubious.

Status of Morality:

We make moral judgements in everyday life. Here, we won’t be discussing whether particular moral judgements are correct or incorrect. Rather, we will be asking about the status of these judgements. What are we doing when we make such judgements? Are we representing objective matters of fact, depicting some element of the universe out there? Are we describing our personal or cultural practices? Are we expressing our emotions toward things? These are the sorts of questions we ask when we ask about the status of morality.

Three questions about these judgments:

  • Are they the sorts of judgments that can be true or false – or are they mere opinion?
  • If they can be true/false, what makes them true/false?
  • If they are true, are they objectively true?

Three philosophical approaches to the status of morality:

Objectivism: our moral judgements are the sorts of things that can be true or false, and what makes them true or false are facts that are generally independent of who we are or what cultural groups we belong to – they are objective moral facts. It is the view that there are universal moral principles, valid for all people, situations, and times. It holds that moral principles have objective validity, and that this validity is independent of cultural acceptance. On most versions, moral principles are universal, though some objectivists allow limited exceptions.

Relativism: our moral judgments are indeed true or false, but they’re only true or false relative to something that can vary between people.

Cultural Relativism: our moral judgments are indeed true or false, but they’re only true or false relative to the culture of the person who makes them.

Subjectivism: our moral judgements are indeed true or false, but they’re only true or false relative to the subjective feelings of the person who makes them. “X is bad” = “I dislike X”. Subjectivism is a form of relativism.

Emotivism: moral judgements are neither objectively true/false nor relatively true/false. They’re direct expressions of our emotive reactions. It is essentially the denial of moral truth.

So then is Morality Objective, Subjective or Emotive?

Objections to these approaches:

Objectivism: our moral judgements are the sorts of things that can be true or false, and what makes them true or false are facts that are generally independent of who we are or what cultural groups we belong to – they are objective moral facts.

Challenge to Objectivism: Important difference between

  • how we determine whether something’s morally right/wrong
  • how we determine whether an empirical claim is true/false

With a moral judgement like “genocide is morally abhorrent” or “polygamy is morally dubious”, if somebody disagrees with you, it is difficult to know what method we would use to settle the issue. How do we figure out who’s right? It doesn’t look like we can observe the world and find the moral facts in the same way that we can with empirical facts. Can Objectivists explain this intuitive difference?

Relativism: our moral judgements are indeed true or false, but they’re only true or false relative to something that can vary between people.

Challenge to Relativism: It seems like there’s such a thing as moral progress. Can Relativists explain this possibility? For example, in the past people thought that slavery was perfectly fine, but now we think of slavery as morally abhorrent. That seems like a piece of moral progress: we’ve gone from a bad view to a good view. But if the relativist view is right, when somebody in the past said slavery was morally okay, that was true relative to their culture, and when someone now says slavery is morally wrong, that is true relative to our culture. So there is a difference in opinion, but no progress in opinion. The basic challenge for the relativist, then, is to explain the possibility of moral progress.

Emotivism: moral judgements are neither objectively true/false nor relatively true/false.  They’re direct expressions of our emotive reactions.

Challenge to Emotivism: It seems like we can use reason to arrive at our moral judgments like we use reason to arrive at our empirical judgments. But how can Emotivists explain this intuitive similarity?

We sometimes reason our way to our moral views, our moral opinions. But if emotivism is right, then our moral opinions are just emotive reactions; they are not reasoned responses to questions about morality. Think about the example of judging that Oedipus sleeping with his mother Jocasta was morally bad. Someone might initially think that judgement is right, but then reason: Oedipus didn’t know Jocasta was his mother, so what he did wasn’t culpable, and so it wasn’t morally bad after all. That kind of transition, changing your mind through reason, is really hard for the emotivist to explain, because the emotivist thinks that moral judgements are ultimately emotive reactions, not reasoned responses to beliefs about how things stand morally. So the basic challenge to emotivism is to explain how we can reason our way to our moral views.


What is Knowledge? And Do We Have Any?

We know a lot of things – or, at least, we think we do. Epistemology is the branch of philosophy that studies knowledge; what it is, and the ways we can come to have it. This week, we’ll look at some of the issues that arise in this branch of philosophy. In particular, we’ll think about what radical scepticism means for our claims to knowledge. How can we know something is the case if we’re unable to rule out possibilities that are clearly incompatible with it?

This week we learn about three important aspects of the theory of knowledge:

  • The basic constituents of knowledge
  • The Gettier Problem
  • Problem of Radical Skepticism


The Basic Constituents of Knowledge

All knowledge is information, but not all information is knowledge. In this world of information overload, we need to be able to distinguish good information from bad information and to know how to use it. That is why knowledge is so important, and identifying good information is one of the reasons why philosophers are so interested in trying to determine exactly what knowledge is.

A particularly fundamental way in which we use the word “know” is what’s called Propositional Knowledge: knowledge that something is the case. In order to understand what propositional knowledge is, we need to understand what a proposition is.

What is a Proposition?

A proposition is what is expressed by a declarative sentence, which is a sentence that declares that something is the case. A proposition is either true or false.

Examples of sentences that express a proposition:

  • The cat is on the mat.
  • Your dinner is in the oven.
  • The moon is made of cheese.

Examples of sentences that don’t express propositions:

  • Shut that door.
  • Yes please.
  • How can I help you?

Propositional knowledge concerns propositions, which can be true or false. A sentence like “Shut that door” is not the sort of thing that can be true or false, because it doesn’t describe the world as being a certain way. But a sentence like “The cat is on the mat” could be true (there is a cat on the mat) or false (there isn’t a cat on the mat). If you have propositional knowledge of this proposition, then you know that the cat is on the mat.

One way of getting a handle on what propositional knowledge involves is to contrast it to another kind of knowledge called know how or ability knowledge.

Propositional versus Ability Knowledge

Knowledge-that:

  • Knowing that the earth orbits the sun
  • Knowing that Paris is the capital of France
  • Knowing that one has toothache

Knowledge-how:

  • Knowing how to drive
  • Knowing how to play the piano
  • Knowing how to beat the stock market

Basic constituents of knowledge

1.) Propositional Knowledge – knowledge that something is the case, expressed by a declarative sentence describing the world as being a certain way.

2.) Know-How or Ability Knowledge – contrasts with propositional knowledge, since know-how connects with the manifestation of an ability or skill.

Two conditions for Propositional Knowledge

One can know a proposition only if:


  1. That proposition is true;
  2. One believes that proposition.

1.) Truth – if you know a proposition, then that proposition must be true. Since propositional knowledge requires truth, you can’t know a falsehood.

2.) Belief – if you know a proposition, then you must believe that proposition. Knowledge requires belief, but it is stronger than mere belief.


Knowledge and Certainty

When we say that knowledge requires truth, what we mean by that is that you can’t know a falsehood. In particular, we’re not suggesting that when you know you must be infallible, or that you must be absolutely certain.

For example,
Do you know what you had for breakfast this morning?
But are you certain about this? Isn’t it possible that you have made a mistake?
The moral: while knowledge demands truth, it doesn’t require certainty (any more than it requires infallibility).


Knowledge and Probability

Knowing that a proposition is true is not the same as knowing that this proposition is probable. Consider the claim that human beings have been to the moon, and compare that with the claim that it’s likely or probable that human beings have been to the moon.

Compare these two claims:

  1. I know that human beings have been to the moon.
  2. I know that it is likely/probable that human beings have been to the moon.

The second claim is much weaker than the first: it leaves open whether human beings have actually been to the moon.

Knowing that a proposition is likely or probable is not the same as knowing the proposition itself. Sometimes we hedge: when we’re not sure about something, we claim to know only that it is likely or probable. We don’t know the proposition outright, but we do know it in this hedged form. Although it is sometimes appropriate to hedge in this way, in many cases we do know things without any qualification or hedge.

So, knowledge requires truth and belief: true belief. Knowledge requires getting it right. If you don’t get it right, if you don’t have a true belief, then you’re not in the market for knowledge.

Is there more to knowing than getting it right? There are all kinds of ways of getting it right, and a merely true belief doesn’t always count as knowing. (Sometimes you believe truly without knowing.) One can get it right in lots of ways that wouldn’t suffice for knowledge.

For example, imagine a juror in a criminal trial who believes the defendant is guilty purely out of prejudice. As it happens, he is right, but clearly he does not know that the defendant is guilty. Now imagine another juror who formed his opinion by thinking critically about the evidence. Both jurors get it right, but the first got it right through prejudice (luck), so he doesn’t know; the second got it right by sifting through the evidence (anti-luck), so he does know. For epistemologists, then, knowledge requires more than just getting it right, more than mere true belief: it requires attending to evidence and thinking things through to reach a correct judgement.

Two Intuitions that Govern Our Thinking about Knowledge

  1. Anti-luck Intuition – when you know, you’re getting it right in a way that isn’t a matter of luck (the second juror).
  2. Ability Intuition – when you know, your getting it right is down to the exercise of your cognitive abilities.

Knowledge requires true belief. It requires getting it right. But there’s actually more to knowing than to merely getting it right.

So then what do we need to add to true belief in order to get knowledge?


Classical Account of Knowledge:

The classical answer to the question of what you need to add to true belief to get knowledge is justification. In addition to being true and believed, the proposition must be justifiably believed. A belief is said to be justified if it is obtained in the right way: based on reasoning and evidence rather than luck or misinformation.

It is sometimes called the tripartite or three-part account of knowledge. It goes right back to antiquity, back to Plato.

Nature of propositional knowledge:

  • Truth
  • Belief
  • Justification
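The three conditions above are often summarised schematically; here is one standard rendering of the tripartite account (the notation is mine, not from the course):

```latex
S \text{ knows that } p \iff
\begin{cases}
\text{(i) } p \text{ is true,} & \text{(truth)}\\
\text{(ii) } S \text{ believes that } p, & \text{(belief)}\\
\text{(iii) } S \text{ is justified in believing that } p. & \text{(justification)}
\end{cases}
```

Each condition is claimed to be individually necessary and the three jointly sufficient for knowledge; it is the sufficiency claim that the Gettier Problem below attacks.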


The Gettier Problem

For some time, the justified-true-belief account was widely believed to capture the nature of knowledge. In 1963, Edmund Gettier published a short but widely influential article that challenged this account. He provided cases in which a subject has a justified true belief and yet, intuitively, does not have knowledge.

The Gettier Problem challenges the Classical Account of Knowledge: Edmund Gettier proposed cases in which agents have JTB (justified true belief) but don’t know, because their getting it right is just a matter of luck.

So, what constitutes adequate justification so knowledge is not based on luck?


Example 1.) The Stopped Clock Case – first offered by Bertrand Russell. Someone forms a belief about what time it is by looking at a stopped clock. The belief is true only by the luck of glancing at the clock at just the time of day when the stopped clock happens to be right. The agent has no reason to doubt the clock’s reliability, and this assumption informs the belief. It is a counterexample trading on the anti-luck intuition: the agent has a justified true belief but no knowledge, only a true belief based on luck.

Example 2.) The Farmer – offered by Roderick Chisholm. A farmer looks out onto a field and believes he sees a sheep; however, what he is actually seeing is a sheep-like object, while a real sheep is hidden behind it. He has a justified true belief that could easily have been false. Is the farmer really seeing a sheep, or just wanting to believe that the thing out there is a sheep? (Do we perceive what we want to believe?)

There is a two-step formula for constructing Gettier-type cases:

  • Step 1: have the agent form a belief in a way that would normally yield a false belief. In the farmer and clock cases, the agents take what they see to be a sheep/the right time because of the context, and have no reason not to form these beliefs.
  • Step 2: make the belief turn out true for reasons that have nothing to do with the justification the agent possesses.

Keith Lehrer proposed that we need to add a fourth condition, which says that your belief must not be based on any false assumptions, or “false lemmas”.

The no-false-lemmas view holds that knowledge is justified true belief where the true belief is not based on any false lemmas.

Lemmas (assumptions). On a narrow conception of assumptions, as in the clock example, the agent assumes the clock is working and has no reason to think it has stopped. On a broad conception, an assumption is any false belief germane to the target belief formed in the Gettier case. There are always assumptions at play when we form our beliefs, and those assumptions don’t always generate the right kind of result.
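The no-false-lemmas proposal amounts to bolting a fourth condition onto the tripartite account; schematically (my own shorthand, with JTB(S, p) abbreviating the three classical conditions):

```latex
S \text{ knows that } p \iff JTB(S, p) \;\wedge\; \neg\exists q\,\bigl[\, q \text{ is false} \;\wedge\; S\text{'s justification for } p \text{ rests on } q \,\bigr]
```

In the clock case, for instance, the false lemma q is “the clock is working”, so the fourth conjunct fails and the account correctly withholds knowledge.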

Two main questions raised by the Gettier Problem:

  1. Is justification necessary for knowledge at all? What do we require of knowledge over and above true belief?
  2. If the justification condition doesn’t eliminate knowledge-undermining luck, what kind of condition would? If justification by itself can’t satisfy the anti-luck intuition, how can we tell whether our true belief is down to luck?

Gettier’s cases show that we can have justified true beliefs that are down to luck. What condition must we add to true belief to be confident that we have genuine cognitive success: true belief that isn’t down to luck?


Do we have any knowledge?

Radical scepticism is the view that knowledge (at least of the world around us) is impossible; that is, we don’t know anything and we couldn’t know anything. Sceptics make use of sceptical hypotheses: scenarios in which everything is as it usually appears to be, but in which we are being radically deceived. The sceptic says that we cannot rule out sceptical hypotheses, and thus argues that we are unable to know anything about the world around us.


Brain-in-a-Vat Sceptical Argument
The Brain-in-a-Vat hypothesis is similar to the scenario in the film The Matrix. The idea is that what you see around you in this world, and all of your experiences, are not real: none of it is taking place. In fact your brain has been harvested: it has been removed from your skull, placed in a vat of nutrients, and fed fake experiences. The brain-in-a-vat floats there thinking that it is out in the world interacting with other people, seeing things and doing things, but in fact nothing of the sort is happening. The brain-in-a-vat has radically false beliefs, and yet its experiences are indistinguishable from the experiences we’re having right now, which one would hope aren’t brain-in-a-vat-type experiences.

So here’s the question the sceptic asks. They say “How do you know you’re not a brain-in-a-vat?” And of course the answer to that is “Well probably we don’t.” So the sceptical argument is that:

  1. I don’t know that I’m not a brain-in-a-vat.
  2. If I don’t know that I’m not a brain-in-a-vat, then I don’t know very much.
  3. So, I don’t know very much.
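Written schematically (with K for “I know that” and BIV for “I am a brain-in-a-vat”; the notation is mine), the argument is a simple modus ponens:

```latex
\begin{aligned}
&(1)\ \neg K(\neg \mathrm{BIV})\\
&(2)\ \neg K(\neg \mathrm{BIV}) \rightarrow \neg K(p), \text{ for ordinary propositions } p\\
&(3)\ \therefore\ \neg K(p)
\end{aligned}
```

Premise (2) trades on the idea that knowing an ordinary proposition p, such as “I have hands”, would require being able to rule out the brain-in-a-vat scenario, since the two are incompatible.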

Even if we don’t know that we’re not brains-in-vats, so what? Well, if you were a brain-in-a-vat, then you wouldn’t have hands (brains-in-vats are handless by definition). So how do you know that you have hands? And if you don’t know that, what do you know?

Epistemic Vertigo


So the classical theory of knowledge has a flaw, and the sceptic argues that we cannot rule out that we are brains-in-vats, even though we all know, or at least feel and believe, that our lived experiences are real. It seems that if we cannot rule out sceptical hypotheses, then much of what we think we know is under threat.

Epistemic vertigo sets in when you stop to consider all this: once you see that the brain-in-a-vat possibility cannot be ruled out, the doubt is not confined to a single belief but spreads across the entire universe of things you take yourself to know.


Concluding thoughts:

  • Radical scepticism is the view that we know very little, if anything, about the world around us.
  • Radical scepticism makes use of sceptical hypotheses, which are scenarios indistinguishable from ordinary life but where we are radically in error.
  • It seems that if we cannot rule out these hypotheses, then much of what we think we know is under threat.