WhatsApp and other social media platforms have often been criticised in the past because users are obliged to choose from a handful of distinct skin tones that do not adequately capture the richness of human diversity.
In 2019, WhatsApp has listened to the criticism and made progress towards being more inclusive. With the new updates, people with disabilities will have more representation than before, with emojis for hearing aids, deaf people, probing canes and service dogs. There are also options across a range of skin tones for people in manual and motorised wheelchairs.
The LGBT community is also receiving a positive update that encompasses the diversity of interracial relationships that have hitherto been ignored.
It’s also pleasing to see more international communities being represented, with sari and Hindu temple emojis joining the roster.
It’s clear the WhatsApp labs have been keeping an eye on food trends, as the millennials’ favourite snack, falafel, and the health-conscious tea lover’s maté have also been included.
Researchers in Edinburgh have discovered that the option to choose from a diverse array of skin tones is one that social media users want to engage with. They collected a billion tweets and found that users most often choose the skin tone that corresponds to their real skin tone.
It’s very clear that diversity and representation on social media are important concerns for people. Computer Scientists may profit from delving into the Edinburgh study and understanding its methodology. Students applying for HSPS, Human Sciences or Anthropology may want to look at social media representation through an academic social lens.
Thanks to artificial intelligence, the extraordinary powers of Dr Dolittle may soon be within reach for the rest of the world – at least if speaking to mice and other rodents is your ultimate goal. Scientists have developed an app called ‘DeepSqueak’ which analyses the vocal patterns of our cheese-loving friends.
‘DeepSqueak’ analyses the sounds of mice in order to understand the association between the variety of ‘squeaks’ and different mousey emotions and behaviours. Mice communicate at a pitch beyond the range of human hearing, so the app converts these high-frequency audio signals into visual sonograms that can be interpreted and studied.
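To get a feel for what a sonogram is, here is a minimal sketch – not DeepSqueak’s actual pipeline, whose internals the source doesn’t describe – that synthesises a rising ultrasonic ‘squeak’ and recovers its dominant frequency frame by frame with a windowed discrete Fourier transform:

```python
import cmath
import math

SAMPLE_RATE = 250_000   # Hz - well above the reach of ultrasonic mouse calls
FRAME = 250             # samples per analysis window (1 ms per sonogram column)

def synthetic_squeak(duration=0.01, f0=40_000.0, f1=80_000.0):
    """A rising linear chirp standing in for an ultrasonic mouse call."""
    sweep_rate = (f1 - f0) / duration
    samples = []
    for i in range(int(SAMPLE_RATE * duration)):
        t = i / SAMPLE_RATE
        # phase is the integral of the instantaneous frequency f0 + sweep_rate*t
        phase = 2 * math.pi * (f0 * t + 0.5 * sweep_rate * t * t)
        samples.append(math.sin(phase))
    return samples

def dominant_freq(frame):
    """One sonogram column: the DFT bin with the largest magnitude, in Hz."""
    n = len(frame)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):   # skip DC, stop below the Nyquist frequency
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * SAMPLE_RATE / n

audio = synthetic_squeak()
frames = [audio[i:i + FRAME] for i in range(0, len(audio) - FRAME + 1, FRAME)]
track = [dominant_freq(f) for f in frames]   # rises across the ten frames
```

Plotting `track` against frame index gives the upward-sweeping trace a researcher would see on the sonogram; software like DeepSqueak then classifies such traces into call types.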
Scientists are learning that mice produce happy squeaks just before being given a sugary treat and when they are playing with others. The romantics among you will rejoice to hear that male mice also know how to woo their female counterparts with special squeaks of courtship.
Beyond the incredible novelty of understanding the language of mice, there are some very important practical advantages to being able to make sense of squeaky sonograms. Language, or communication, is a gateway to the internal mind of any organism, and scientists will be able to understand more clearly the effects of drugs and medications being tested on mice.
For those who are eager to try this out on their gerbil at home, you will soon be able to get ‘DeepSqueak’ on GitHub to take your relationship with your beloved pet to the next level.
Computer Scientists, Linguists and Biologists should definitely explore this interesting field of AI-assisted animal communication to really set themselves apart in their personal statement and at interview.
It’s time to squeak your way to an Oxbridge offer!
There is something about spectacles that seems both timeless and retro – after all, they have been around for many, many years relatively unchanged. The only recent leap forward in eyewear tech was Google Glass, a much-hyped launch but ultimately a flop. As a wearer of spectacles myself, it was with some excitement that I noted a new innovation called TouchFocus; in short, these are spectacles that allow users to switch focus with the touch of a finger. For people suffering from presbyopia (the natural loss of the ability to focus on nearby objects that occurs with age), a single pair of glasses or contact lenses can no longer provide clear vision at all distances. Although glasses with progressive or multi-focus lenses, such as bifocals or trifocals, have been developed, many people have problems getting used to these lenses because of their restricted field of vision. As such, this novel tech is squarely aimed at this market segment, which generally skews towards an older population.
So how does it work? At first glance, TouchFocus appears to be simply a pair of stylish spectacles, but hidden inside the frame is an electric circuit. With a touch on a sensor installed in the temple, the liquid crystal lenses are activated, allowing the eyewear to switch focus from distance to close-up instantaneously. The product is powered by a long-lasting, easily chargeable battery.
Sounds simple enough, right? I liked this example of product innovation because it is exactly the kind of product article an interviewer might ask you to review and comment on, even for someone planning to focus on the biological sciences. While you might have no idea what “liquid crystal lenses” are, you should be able to leverage your chemistry and physics background to at least put forward a hypothesis as to how these lenses might work. At the very least, all students should have a good working knowledge of the human eye and how the lens in our eye refocuses light by changing shape via the surrounding muscles. From that starting point, could you come up with a sensible idea? Spend some time thinking through possibilities before heading over to the company’s website to learn more about how it works. Can you think of other applications where this technology could be applied?
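As a worked starting point, here is one hedged hypothesis – my assumption, not the company’s published mechanism, with purely illustrative numbers: suppose a voltage changes the effective refractive index of a liquid crystal layer. By the lensmaker’s equation, a higher index means more optical power, i.e. a nearer focal point:

```python
def lens_power(n, r1, r2):
    """Lensmaker's equation for a thin lens in air: optical power in dioptres.
    n is the refractive index; r1 and r2 are the surface radii of curvature (m)."""
    return (n - 1.0) * (1.0 / r1 - 1.0 / r2)

# Hypothetical biconvex liquid-crystal element (illustrative values only).
R1, R2 = 0.5, -0.5                  # metres; the second surface curves the other way

p_off = lens_power(1.50, R1, R2)    # voltage off: baseline index
p_on = lens_power(1.70, R1, R2)     # voltage on: field raises the effective index
reading_boost = p_on - p_off        # extra dioptres of near-focus power
```

This mirrors the reasoning you could offer at interview: the eye adds power by steepening its lens with the ciliary muscles, whereas here the index, rather than the shape, would be what changes.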
The British-Lebanese mathematician Sir Michael Atiyah spoke at the Heidelberg Laureate Forum on 24th September. In a 45-minute talk he claimed to have found a “simple proof” of the Riemann hypothesis, a problem that has remained unsolved since 1859. A correct proof of the hypothesis, labelled by the Clay Mathematics Institute as one of the seven “Millennium Prize Problems”, could have huge implications for the majority of modern-day cryptography, including cryptocurrencies like Bitcoin.
The hypothesis concerns prime numbers and the ability to estimate the number of primes smaller than any given integer, N. It centres on the Riemann zeta function and the points at which the function returns zero. The function is known to return zero at the negative even integers; these are called the trivial zeros. Every other zero found so far occurs at a complex number whose real part is ½; these are the non-trivial zeros. The non-trivial zeros have varying imaginary parts but a consistent real part of ½, which is what allows them to be located.
However, the continuing problem is the lack of proof that complex numbers with a real part of ½ are the only non-trivial zeros. Until now this has been assumed to be true, and the apparent unpredictability of the primes provides the basis for much of modern cryptography. This rests on a property of prime numbers: calculating the product of two primes is simple, but recovering the two primes when the only information given is their product is very difficult. This allows for one-way functions that cannot easily be inverted by anyone other than the intended recipient.
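That asymmetry is easy to demonstrate. In this toy sketch (real cryptographic keys use primes hundreds of digits long, far beyond trial division), multiplying two primes takes a single operation, while recovering them from the product takes on the order of √N divisions:

```python
def factor_by_trial_division(n):
    """Recover the prime factors of n = p * q the slow way:
    test every candidate divisor up to the square root of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1   # n itself is prime

p, q = 999_983, 1_000_003        # two known primes either side of a million
n = p * q                        # the easy direction: one multiplication

recovered = factor_by_trial_division(n)   # the hard direction: ~a million divisions
```

Even for this 12-digit product, the slow direction already needs about a million steps; for a 600-digit RSA modulus, trial division (and every known classical algorithm) becomes utterly infeasible, which is exactly the one-way property described above.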
But a proof of the hypothesis might reveal a deeper pattern in the distribution of prime numbers, and this could be exploited to break contemporary forms of encryption. That could lead to hash algorithms being easily cracked and Bitcoin being mined at an exponentially faster rate.
Atiyah’s claim of a proof has been met with scepticism, but if correct, his work could have an enormous impact. Computer Science and Mathematics applicants can develop their understanding of the Riemann hypothesis, its underpinning of cryptography and its other applications.
Can robots make art? The annual robotic art competition is now in its third year, with over 100 submissions this time round and a first-place prize of $40,000. The founder of the competition, Andrew Conru, argues that using robots in the artistic process is no different from any other advancement in artistic technique or genre: “Every generation tries to come up with a new genre, a new style, a new category of art. I don’t see robot art as any different than yet another way for people to express themselves.”
The term ‘robotic art’ covers a range of categories. Some of the submissions to the competition directly involved humans in the creation of the piece itself, for example where a robotic tool was wielded by a human controller, whereas others relied more fully on artificial intelligence. The 2018 winner, Pindar Van Arman, used what is known as ‘deep learning’—a type of machine learning that differs from task-specific algorithms and aims to more closely mimic the brain’s neural networks—in order to create “increasingly autonomous generative art systems”. The third-place winners focused on technique, training their robot to record and precisely mimic the exact motions and pressures used by a brush to paint a specific work of art.
Not only are robots becoming artists, they’re also giving art critics a run for their money. Berenson, a dapper-looking figure in a bowler hat and white scarf, uses his robotic facial expressions to react to art around him based on analysing the reactions of other museum-goers. On the other end of the critical spectrum, Novice Art Blogger is an automated blog that processes and attempts to analyse abstract art based on deep-learning algorithms. The blog’s descriptions of art are almost more abstract than the art itself, and it is often hard to see how it reaches its conclusions. For example, Stringed Figure by Henry Moore is described as “a close up of different pieces of a paper bag or rather a piece of wood cutting out of a box with a pair of scissors in front of it. It stirs up a memory of a cake in the shape of a suit case”. Perhaps we mere mortals are simply not on this critic’s level…
Van Arman wonders whether his AI system is “simply being generative, or whether the robots were in fact achieving creativity.” Applicants for Computer Science, Fine Art, or History of Art may wish to ponder this question—can AI ever be creative? In fact, are humans truly creative or do we, like robots, simply analyse, process, and generate based on our own neural networks?
The Chinese government plans to launch its Social Credit System in 2020. Many recent news reports have looked into exactly what this means for the inhabitants of China and whether it is a force for good or bad.
The scoring of this system is fundamentally bound up with the increasingly technological society that China has come to represent.
Wired has explained that ‘The Chinese government is pitching the system as a desirable way to measure and enhance “trust” nationwide and to build a culture of “sincerity”.’ As the policy states, “It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility.”
The rating an individual is given under this social credit system could then be used by employers to decide whether to offer them a job, by banks to decide whether to grant them a loan, or even by prospective partners.
However, many other news outlets have argued that the system is draconian and poses a threat to individual liberties because of how pervasive it is. Students hoping to study HSPS may want to look into the tension and judgement shown by Western news outlets in their coverage of these changes within Chinese society. Furthermore, do these comments truly reflect and understand Chinese society, presenting a genuinely holistic picture, or do they merely project Western views onto the new system?
Students hoping to study Sociology or Computer Science may want to think more closely about the potential effects, risks and power that technology can have in the 21st-century world. China has shown control over what its citizens see (with the banning of sites like Facebook, YouTube, Google and Twitter) and is also known to be using forms of technology to track and monitor citizens’ communications and movements.
Since the recognition of Estonia’s independence in 1991, a flurry of tech-savvy entrepreneurs has marked out this small Baltic nation as a hotbed of innovation. Internationally recognised startups like Skype, TransferWise and Pipedrive have all capitalised on a post-Soviet education legacy and skills surplus in an openly governed state with a predominantly Nordic identity, in contrast with its Slavic neighbours.
Toomas Hendrik Ilves, Estonia’s erudite president until 2016, spent a decade fostering the political landscape necessary for Estonians to embrace the digital age, compensating for its tiny workforce and lack of infrastructure. Today Estonia boasts the most technologically advanced system of government in the world, a decade of free public Wi-Fi outdoors, 94% of tax returns completed online within 5 minutes, internet banking for 97% of its customers, and all but paper-free offices nationwide.
This incredible efficiency also brought a new and unprecedented vulnerability, exemplified by the 2007 attack on Estonia’s digital banking system, allegedly mounted by Russia, which has since been described by military historians as the world’s first cyber war. The ensuing decade of cyber conflict has culminated in the propagation of ‘disinformation’, as seen in the fake news and social media posts following last year’s presidential election. Disinformation is a strategy developed and codified by Estonia’s old overlord, where, according to Ilves, truth has been devalued and “all truths have become equivalent”; Estonia, in contrast, has emerged as a significant player in the battle against disinformation and broader cyber warfare. Accepting the dire reality that this new warfare is a permanent fixture in a connected world, the private companies and government cyber labs of Estonia and the US will have to form part of the front-line defences of democratic countries.
Students with an interest in studying Computer Science or HSPS should research the political debates surrounding the role of tech and social media in journalism and particularly the coverage of elections and campaigns.
Normally when we refer to addiction, we are talking about substance addiction or gambling, but in recent years that definition has broadened to include many other addictions, such as social media. Beyond the countless studies showing our increasing use of social media and technology, the way we reach for our phones first thing in the morning and scroll endlessly through social media feeds during the morning commute demonstrates, anecdotally, the addiction we have to social media.
Now Sean Parker, former Facebook president, has admitted that Facebook was designed to exploit vulnerabilities in human psychology, namely by making users crave social validation through likes and comments. Many drugs target the brain’s reward system, causing a release of the neurotransmitter dopamine and leading to feelings of pleasure and euphoria. This feeling then reinforces use of the drug, contributing to a cycle of addiction. In the same way, Parker has explained how likes and comments on Facebook are designed to give users a “dopamine hit”, keeping their attention on the app and feeding a social-validation feedback loop in which users continue to post content in order to receive validation.
Psychology students can consider the structure of reward pathways in the brain and how changes in neurotransmitters can drive behaviour and lead to addiction. Computer Science students could look at the nature of the algorithms that determine the content we see most on our newsfeeds, and how this changes the user experience, whilst Economics students could consider the financial benefits to a company of ‘hooking’ users on an app to ensure that they continue to use it.
It has long been understood that dolphins are intelligent, social creatures with their own distinct language that humans can’t understand. Although we can distinguish the different sounds that dolphins make, and know that each dolphin has a unique call, scientists struggle to study their communication because of the difficulty of tracking which dolphin is making which sound, and why. Recently, however, psychologist and marine mammal scientist Diana Reiss and a group of biophysicists built a ‘dolphin touchscreen’ in the form of a window in the wall of a pool at the National Aquarium in Baltimore. The researchers project interactive programmes onto it, and optical sensing technology detects when the window is being touched by the dolphins. The project was inspired by an experiment Reiss conducted in the 1980s with an electronic keyboard that had a unique symbol on each key. Each key made a dolphinesque whistle when touched, the idea being that dolphins could use the keyboard to make requests of their handlers. When listening to recordings, Reiss noticed that the dolphins were mimicking the sounds made by the keyboard and combining them with their own unique sounds.
One of the programmes the team has developed is a dolphin version of ‘whack-a-mole’. In the game, fish swim across the screen and disappear when touched. Within seconds of the screen turning on, the scientists witnessed a dolphin approaching the screen and touching the fish with his melon, or forehead. Motivated by this success, and with the 1980s experiment in mind, the team is now developing an app similar to the keyboard. Alongside this, the team will use microphones embedded in the walls to record the sounds, and multiple cameras to track the locations of the dolphins. By combining the audio and visual data, the team will be able to trace each sound back to a particular point in the pool, and thus to a specific dolphin. Data-mining algorithms will then be used to look for patterns in this information.
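The source doesn’t specify the team’s localisation method, but a standard ingredient in tracing a sound to a point is the time difference of arrival: the same click reaches each wall microphone at a slightly different time, and cross-correlating the two recordings recovers that offset. A minimal sketch with synthetic data:

```python
import random

random.seed(0)

def best_lag(a, b, max_lag):
    """Brute-force cross-correlation: the lag (in samples) at which
    shifting b lines it up best with a."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i + lag]
                    for i in range(len(a)) if 0 <= i + lag < len(b))
        if score > best_score:
            best, best_score = lag, score
    return best

# A synthetic click heard by two microphones; mic 2 hears it 25 samples
# later because the dolphin is closer to mic 1.
N, TRUE_DELAY = 400, 25
click = [0.0] * N
for i in range(150, 160):
    click[i] = 1.0
mic1 = [s + random.gauss(0, 0.02) for s in click]
mic2 = [s + random.gauss(0, 0.02)
        for s in [0.0] * TRUE_DELAY + click[:-TRUE_DELAY]]

estimated = best_lag(mic1, mic2, max_lag=50)   # recovers the 25-sample offset
```

With delays from several microphone pairs and the known speed of sound in water (roughly 1,500 m/s), a position can be triangulated, while the cameras give an independent visual fix on which dolphin was there.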
Psychology, Biology and Veterinary students should explore our understanding of animal intelligence and consciousness and how technology is allowing us greater insight into it. Physics students can consider how such technology may help us communicate with potential extra-terrestrial life forms as we continue to pursue space exploration. Computer Scientists and Mathematicians can investigate the nature of the programmes and technology used to pursue this research.
With the recent release of the sequel to Blade Runner, starring Harrison Ford and Ryan Gosling, the debate around the relationship between computers and humans has become a focal point of the later stages of the technological revolution.
Scientists and non-scientists alike have long focused on computers’ useful ability to sift through large amounts of data in a short time.
However, recent research fields have not only reached that wall, but have moved past it – prompting questions about artificial intelligence’s capacity to learn more ‘human tasks.’
An article on the BBC News website, by a professor at Dundee University poses: ‘But what if AI were able to handle the most human of tasks – navigating the minefield of subtle nuance, rhetoric and even emotions to take us on in an argument?’
In particular the article focuses on how robots could form an argument. This is a field known as argument technology.
Students looking to read Computer Science may want to look more closely at how the development of robots has progressed through each stage to reach this point.
The most recent advances have come from the increase in the amount of data available to train computers in the art of debate. This means that in the future we may be able to train AI to give persuasive arguments – and perhaps even train a new generation of AI lawyers!
Students hoping to study Linguistics, Philosophy or Computer Science may similarly look into how human argumentation relates to, and can be modelled by, the human invention of computers.
Students who are looking to study Law may want to look into how the law will react to regulate this kind of sociological and technological revolution, if it comes to pass.