The added hour of sleep gifted to us this weekend was no doubt welcomed by you all. It was first introduced in the UK in 1916 thanks to the campaigning of the late William Willett, as it better aligns our waking hours with the limited natural daylight of winter. This arbitrary adjustment of time highlights the nature of time as a social construct, maintained by us for convenience and social interaction.
The history of unified time begins with Greenwich Mean Time (GMT) in the 19th century, introduced to allow the creation of train timetables in the UK. This led to time zones across the globe to help with trade and navigation. But the drawing up of time zones can appear at times quite arbitrary and has even been politicised. Spain occupies the same longitudes as the UK, yet it is one hour ahead on Central European Time. This was adopted during World War II under the Franco regime as a display of political alignment with Germany, and it has simply remained ever since.
The measurement of time by the sun goes back much further than this, with the use of sundials; the most accurate examples are meridian sundials. These consist of a fine line which sunlight crosses exactly at solar noon, and they were used to set mechanical clocks to local time. But this method is imperfect, as it relies on the sun's apparent movement across the sky being constant in speed. It is not, owing to the Earth's elliptical orbit and the tilt of its axis.
The most accurate form of timekeeping today is the atomic clock, which uses the regular frequency of radiation emitted by electrons in atoms as they change energy levels. But even this is not perfect, and scientists at the International Earth Rotation and Reference Systems Service still look to the sky, calculating the positions of distant quasars to determine the adjustments needed to keep our time aligned with the Earth's rotation, which they do with ‘leap seconds’.
Physics applicants can study the calculations required to recognise changes in the placement of distant quasars in the sky. HSPS and Philosophy students can consider the social purpose of time and the need for accuracy in modern society.
The inclusion or exclusion of refugees in contemporary Europe has been a contentious topic of debate and is high on the international agenda. With faster deportation procedures and more limited border crossings, the attitude of Western governments to asylum applications has seemed increasingly restrictive.
Economic and educational inclusion are often given precedence over other areas, but a new project is focusing on social and cultural inclusion. Started in Berlin, ‘Multaka: Museums as Meeting Point’ aims to encourage the sharing of historical and national knowledge. The concept is to train Syrian and Iraqi refugees to become museum guides so that they can take tours in their mother tongues.
The course is currently offered to young adults and teenagers, but will be extended to older applicants as the programme develops. The tours play on the relationship between the host country and the guides’ home countries. Those involved in organising the project hope that it will help to improve native residents’ knowledge of other cultures and give refugees a sense of pride and involvement in their new community.
At present there are four German museums involved in Multaka, but more are soon to be added, with the Pitt Rivers Museum and the Museum of the History of Science in Oxford developing their own version of the scheme. There have also been talks with the Louvre and MoMA. Of the four museums currently taking part, two focus on Syrian and Iraqi artefacts and two on the connections between Islam, Judaism and Christianity.
Students aspiring to study Philosophy and Theology could prepare for their interviews by examining how different religions and religious artefacts are perceived from different cultural standpoints. Art Historians might like to consider how museums can be regarded as a meeting point (multaka) for our common past. Those aiming to study HSPS could investigate the social and political impact of such a programme.
Do animals grieve? The question is a contentious one among scientists. In the era of the internet, the emotional lives of animals are on display like never before: pets welcoming owners home or even appearing to say “I love you”, wild animals cuddling with favoured humans. More recently we’ve been following the plight of the orca known as J35, who carried her dead calf for 17 days. The story struck a chord with many, but divided scientists; was J35 really grieving for her child, or were we merely projecting our own emotions and rituals onto the animal kingdom? Some zoologists, such as Jules Howard, warn against anthropomorphism in our interpretation of animal behaviour. Howard argues that there is very little scientific evidence behind such interpretations, only our own desire to see ourselves reflected in our furry friends. After all, he says, this kind of behaviour has been displayed only a few times. Are all other orca mothers coldly indifferent to their dead offspring? It’s perfectly possible that J35 was simply confused. “If you believe J35 was displaying evidence of mourning or grief”, he says, “you are making a case that rests on faith not on scientific endeavour, and that makes me uncomfortable as a scientist”.
Others disagree, arguing that the scientific evidence for death-related animal behaviours is lacking simply because we are not looking for it. According to this view plenty of anecdotal evidence exists, which could be studied in greater detail were it not for the scientific bias against the idea that non-human animals could experience grief and sadness. Famously, when Koko the gorilla (who could communicate using sign language) heard of her pet kitten’s death, her instructor reported that she signed “bad, sad, bad” and “frown, cry, frown, sad”. Chimpanzees have also been seen to react to the deaths of other chimpanzees, cleaning the body and avoiding the area for a few days. This does not of itself indicate grief, although it could suggest that some species observe social and familial bonds after death or that they have death-related rituals which could be compared in some ways to human funerals.
Applicants for Biology, especially those interested in zoology, might wish to look into the debate surrounding death-related animal behaviour. Do you think it is scientifically valid to assign any human emotion to animals? Those interested in bioethics could consider the issue from a moral perspective—what would it mean for our treatment of animals if they could indeed feel grief?
Intelligent design, often used as an argument for the existence of God, posits that the natural world shows signs of having been designed by some form of intelligence rather than as the result of an undirected process such as natural selection. Proponents often cite the harmony and complexity of various elements of the human body. Intelligent design itself has received very little scientific support. But such issues have recently come to the fore since astronaut Tim Peake stated, “I’m not religious [but] it doesn’t necessarily mean that I don’t seriously consider that the universe could have been created from intelligent design”. Further to arguing that intelligent design has no evidential basis, many opponents are taking issue with the very idea that the human body and nature in general shows signs of perfection (and therefore of creative intelligence). Clearly, all human systems have flaws that allow them to malfunction. Our cells can develop cancer, our immune system can attack us, our eyes fail us. Evolutionary biologist Matan Shelomi asks, “who designed these faulty things? The answer can’t be a God, because a God so incompetent in designing vision sensors isn’t worth worshiping.”
It must be said that such critics have somewhat misrepresented the argument about the human eye and the main thrust of the intelligent design theory, which is less about perfection and more about complexity and codependence of elements. ‘Irreducible complexity’ describes a system in which all the parts work together to achieve a certain function, and which would not work if any one small part were omitted; advocates for intelligent design propose that many biological mechanisms can be described in this way, and therefore that natural selection could not bring about these mechanisms. Darwin himself conceded, “if it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down”; however, he added, “I can find out no such case.” In fact, in the 20th century Hermann Muller contemplated a sort of irreducible complexity—but rather than seeing it as an obstacle to evolution he described it as the expected result of evolution by natural selection: “being thus finally woven, as it were, into the most intimate fabric of the organism, the once novel character can no longer be withdrawn with impunity, and may have become vitally necessary”. Moreover, the irreducible complexity argument ignores the phenomenon of exaptation, whereby an already existing trait may change function during the course of evolution.
Applicants for Biology or Natural Sciences (B) should be familiar with both historical and contemporary research on evolution. Those interested in Theology or Philosophy may wish to look into intelligent design as well as arguments from teleology.
Can you picture yourself as a cyborg? Do you yearn to transcend the limitations of feeble flesh? Then you might want to join the transhumanist movement. In his award-winning book To Be a Machine, Mark O’Connell describes the core transhumanist beliefs: “that we can and should eradicate ageing as a cause of death; that we can and should use technology to augment our bodies and our minds; that we can and should merge with machines, remaking ourselves, finally, in the image of our own higher ideals.”
Of course, we do already in some sense “augment” our natural bodies with the use of such things as contact lenses and hearing aids. As technology progresses, more and more people are being fitted with bionic limbs. But so far, these are used as a plan B—an attempt at a replacement for the loss of a natural limb, and certainly not preferable to it. Those who advocate for transhumanism, on the other hand, dream of a deliberate merging of man and machine, and see these artificial body parts as superior. By embracing technology and applying it to our own bodies, they hope to create humans with increased senses, intelligence, strength, and life expectancy.
If this all strikes you as rather dystopian, you’re not alone; many scientists have raised ethical concerns. Blay Whitby, artificial intelligence expert at Sussex University, uses the example of athletes without legs who run on carbon-fibre blades. It is not unlikely, he says, that such athletes will be able to outperform able-bodied runners; would it then be ethical for athletes to deliberately have their legs removed and replaced with such artificial legs in order to beat world records? For Whitby, the idea is repulsive. But others do not see the issue. Cybernetics expert Kevin Warwick protests, “what is wrong with replacing imperfect bits of your body with artificial parts that will allow you to perform better?” Warwick himself has already put his money where his mouth is and implanted electronic devices into his own body. It doesn’t take an expert to jump on the trend, however; several people have had the chip from their contactless card inserted under the skin of their hand in order to go about their day unburdened by a wallet.
Others still are pinning their hopes on the future by handing over their bodies to be cryogenically preserved in liquid nitrogen after death, in the hope of being thawed and awakened at some point when technology has advanced enough to resurrect and enhance them.
Applicants for Philosophy or Theology might like to consider the ethics of this movement; what would be the implications of this new race of superhumans? Is it right to tamper with the natural world (or ‘creation’) in this way? Medics may also want to think about it from the standpoint of medical ethics. Does the natural body have an inherent value, such that it is always wrong to remove a healthy body part?
The idea of there being certain words that simply don’t exist in other languages, and correspondingly, concepts that simply cannot be understood, is a popular one. One common expression of this belief is the claim that the Inuit have many different words for ‘snow’, the implication being that because their culture is so connected to their sub-zero surroundings, they have multiple discrete concepts for something that we can only conceive in very broad terms. This has been disproved, however. The languages spoken by arctic peoples such as the Inuit or the Aleut are highly synthetic, meaning that they have on average a high morpheme-to-word ratio. In other words, synthetic languages combine multiple concepts or pieces of information into one word. This is hardly unusual; most Indo-European languages are synthetic, although English and a few others are more analytic. Because of this, more than one concept can be contained within a word unit: for example, ‘fresh snow’ or ‘heavy snow’.
It seems as though we are fascinated by the idea of different cultures possessing exclusive concepts that we cannot access. This is linked to a specific school of thought. Linguistic relativism suggests that the language you speak has an impact on your cognition and worldview; this is also known as the Sapir-Whorf hypothesis, after Edward Sapir and Benjamin Lee Whorf. Although very frequently cited, it is just as frequently criticised and disputed. More broadly, linguistic relativism fits into an ideological progression from enlightenment thought to romantic thought and so on to the present day, with the romantics rejecting the enlightenment notion that language was dependent on, and secondary to, rational thought which reflected reality. Fast-forward to 1942 and we see that notion dramatically reversed in Whorf’s claim that “thinking is most mysterious, and by far the greatest light upon it that we have is thrown by the study of language”.
It must be said that one feature of Whorf’s thought that is often misconstrued by critics is the fact that he was primarily interested in the effect of language on habitual thought. Hence the question is not “does my language possess the capacity to express this concept?” but rather, “does my language lead me to habitually think along the lines of this concept and behave accordingly?” Notably, his analogy of the ‘empty’ gasoline drum was intended to demonstrate that our typical use of the word ‘empty’ influences our behaviour and our underlying assumptions (even when the ‘empty’ drum contains flammable vapour); critics, however, often took him to mean that the English language is simply not capable of distinguishing between a drum filled with air and one filled with dangerous vapour. Perhaps, then, so-called “untranslatable” words should not be judged by whether the concept can be conveyed in another language at all, but rather by whether its role in its native language has a discernible effect on habitual thought.
Applicants for Linguistics should be familiar with the concept of linguistic relativism and may like to read the key texts. What side of the argument are you on? Is thought really shaped by language?
Facebook has recently been the subject of criticism and derision for deciding to remove a series of adverts featuring art by the Flemish painter Rubens. The artist, famous for his depictions of particularly voluptuous nude women, is considered a master of the Baroque period; his works were being used as part of an advertisement for the region of Flanders. These were deemed not compliant with Facebook regulations about sexual content. In response, the Flemish tourist board has written to Facebook’s chief executive Mark Zuckerberg with a tongue-in-cheek complaint, even releasing a video making fun of the “nude police”. The social media site is not the first to see a particular sensuality in Rubens’ work; even in a gallery full of nudes his art still has the power to raise an eyebrow, whether in disapproval or amusement, and the 19th-century American artist Thomas Eakins considered Rubens “the nastiest, most vulgar, noisy painter that ever lived”.
This is not the first example of Facebook cracking down on artistic nudity. Notably, a French teacher took the website to court for allegedly taking down his entire account in 2011 after he posted a picture of L’Origine du Monde, an 1866 painting of a woman’s genitals. Perhaps the most laughable example occurred earlier this year, when a user was banned from posting a photo of the Venus of Willendorf, a 29,500-year-old prehistoric figurine of a rotund naked woman thought to be an early symbol of fertility. In response, the Natural History Museum of Vienna (home of the Venus) protested in a Facebook post, “let the Venus be naked!”
According to Facebook, the banning of the Venus was a simple error on their part. Nevertheless, questions about what Facebook deems appropriate and inappropriate are particularly relevant now given the highly controversial decisions to lend a platform to the alt-right and to not remove posts denying the Holocaust or a US congressman’s call for the slaughter of all “radicalised” Muslims; meanwhile, a post by activist Didi Delgado stating that “all white people are racist” was immediately taken down and her account temporarily deactivated.
Applicants for History of Art may wish to think about the history of nudity in art and how it has been received. Is Facebook’s censorship valid, or does it represent an over-sexualisation of a woman’s body? How do concepts such as the “male gaze” in art contribute to this conversation? Students wishing to study Politics or those interested in concepts such as freedom of speech should consider the question of censorship and freedom. Do private companies or government agencies have the right to censor opinion, and if so where is the line to be drawn?
Can the so-called Golden Rule be considered a universal, stand-alone ethical code or is morality rather more complicated? This well-known maxim crops up again and again in the writings of different religious traditions: in Confucianism, “what you do not wish for yourself, do not do to others”; in Judaism, “what is hateful to you, do not do to your fellow man”; in Christianity, “do unto others as you would have them do unto you”; in Islam, “no one of you is a believer until he desires for his brother that which he desires for himself”; and so on. At face value, the Golden Rule seems eminently intuitive; one does not need to be a theologian or moral philosopher in order to grasp this principle, or to be judged according to one’s enactment of it. Whilst many aspects of morality are still up for debate and may vary across cultures, belief in mutuality seems to be more or less universal.
But can it indeed be applied to everyone? Immanuel Kant’s famous Categorical Imperative states, “act only in accordance with that maxim through which you can at the same time will that it become a universal law.” Whilst this seems very similar to the Golden Rule, Kant claimed that it was in fact superior, partly because the Golden Rule remained hypothetical rather than categorical: “If you want X done to you, then do X to others.” Kant claimed that “if you want X done to you” remains open to subjectivity and dispute. For example, you may well be willing to reject the help of others if it means you don’t have a duty to help them. On the other hand, because of its strict, universalising nature, Kant’s Categorical Imperative flattens some of the subtleties of moral decision-making—sometimes an option seems to us to be the most ethical and loving in that scenario even though we would not command it in every case.
Some contemporary thinkers argue that the Golden Rule is less useful in our modern age, because easy access to information as well as globalisation and immigration make us increasingly aware of the differences between cultures in terms of ethics and lifestyle. Diverse societies mean a diversity of values, to which a single universal ethical maxim cannot do justice.
Applicants for Philosophy or Theology may wish to scrutinise the Golden Rule, so often taken for granted, and consider whether it can or should be applied universally. Do you think morality is objective? If you had to come up with one ethical rule that everyone had to live by, what would it be?
In The Descent of Man, Darwin wrote: “of all the differences between man and the lower animals, the moral sense or conscience is by far the most important.” Since Darwin’s time, researchers have been looking into the possible origins of our morality to determine whether it is a trait that evolved—and if so, how and why.
Darwin was puzzled by the fact that human beings voluntarily go to war and die for their larger groups, as this doesn’t fit with the idea of natural selection being driven by individuals acting on their own self-interest. He proposed the idea of group selection, according to which a group with more altruists would have more survivors in a war or crisis, thereby passing on the altruistic genes. But the frequency of such events and the force of group selection would have to be enormous for it to override selection between individuals, making this theory unlikely.
Evolutionary anthropologist Christopher Boehm argues that human morality emerged when hunter-gatherers formed groups to hunt big game (about a quarter of a million years ago), and cooperation became necessary for survival. In this type of society where the food source has to be actively shared, alpha male tendencies would have been suppressed and hierarchies eliminated in order to share food evenly. Those who tried to take more than their fair share of meat would have been killed, and hence self-control and the willingness to share would have become evolutionarily successful traits.
Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, has spent years conducting experiments on chimpanzees and human children to compare their social behaviour and cognitive abilities. He argues that human morality is a consequence of our tendency to collaborate more than other apes do. Chimpanzees may be said to have a social nature, with individuals sometimes working together; but according to Tomasello, only humans are “ultra-social”, having developed an enhanced predisposition to cooperation. This is borne out by experiments which show that human toddlers are much more likely to choose cooperation than chimpanzees and are more willing to share rewards. Like Boehm, Tomasello subscribes to the collaborative hunting theory, but adds that this new food source not only encouraged sharing but led people to view themselves as part of a larger unit—a perspective which he calls “shared intentionality”, and which is behind all human collective projects and cultural institutions. This perspective, he believes, is the root of morality.
Applicants for Anthropology or Biology might want to familiarise themselves with Darwin’s thoughts on social evolution, including the evolution of morality, and the subsequent research on it. Do you find the collaborative hunting theory convincing? Students wishing to study Theology or Philosophy may wish to think about the implications of these theories for our understanding of ethics more generally. If altruism is merely the result of certain genetic traits resulting in reproductive success for the individuals possessing them in a specific context, is it objectively and universally required of us? In our current society, is altruism still an evolutionarily strong trait or do the ruthless come out on top?
How do we define consciousness, and what would it mean for a bee to be conscious? Recent research is building our understanding of invertebrate consciousness, and of the nature of consciousness in general. It is established that bees can take in information from the environment and work with that information. The question is whether they can sense their environment from a personal perspective. Most of us would agree that a tree does not have such a consciousness, but a cat does. A cat does not merely sense the world around it in an automatic, impersonal manner; it has reactions, desires, and preferences based on its environment, as humans do. In this way we share at least a basic level of consciousness, although humans may well be the only animals that are aware of being aware.
Consciousness is difficult to diagnose, however, since it relies on interpreting outward manifestations of a presumed inner state. We deduce that a cat is angry because it reacts rather like we do when we’re angry. In fact, we cannot even be totally certain of the consciousness of another human being—we just assume it based on the fact that we can relate to their behaviour. But this is very difficult to do with a creature like a bee; we cannot read a bee’s facial expressions and body language as being similar to our own.
Hence, a new form of research is attempting to analyse the neural basis of consciousness, which can then be compared across species. Neuroscientist Björn Merker argues that the capacity for consciousness in humans depends on the evolutionarily old core of the brain known as the midbrain. Self-awareness, on the other hand, would depend on the younger neocortex which surrounds the midbrain. The midbrain is thought to be important to awareness because it holds together knowledge, desire, and perception in one decision-making centre; this first-person perspective was vital for the survival and evolution of animals.
Although insect brains are tiny, research has shown that they perform the same functions as the human midbrain by tying together memory, perception, and needs. Moreover, this integrated system has the same function as the midbrain, namely to facilitate decision-making. Hence, there is good reason to believe that insects are ‘conscious’ as we would define it. You might think twice about swatting that bothersome insect from now on.
Applicants for Biology or Natural Sciences may wish to learn more about consciousness from an evolutionary perspective, with reference to the bee as an example of contemporary research. Applicants for Philosophy might want to think about consciousness more broadly; can we ever be truly sure that those around us are conscious as we are, rather than just a computer simulation? Could we ever invent a machine that was conscious? Students interested in ethics could consider the implications of research into consciousness on issues such as animal rights.