New Dark Age
| Book Author | James Bridle |
| --- | --- |
| Published | July 17th 2018 |
| Pages | 304 |
| Greek Publisher | Μεταίχμιο |
Technology and the End of the Future
What’s it about?
New Dark Age (2018) investigates the fundamental paradox of our digital age: as new technologies allow us to gather more and more data on our world, we understand less and less of it. Examining the history, politics and geography of the complex digital network we are enmeshed in, James Bridle sheds new light on the central issues of our time, from climate change to wealth inequality to post-factual politics, and explains how we can live with purpose in an era of uncertainty.
About the author
James Bridle is an artist, publisher, and writer on technology whose work has appeared in the Guardian, Wired, Frieze, Observer, Atlantic and many other publications. New Dark Age is his second book.
Basic Key Ideas
From social media addiction to fake news to mass surveillance, new technologies have changed our lives, our societies, and even our planet – often, in ways we hadn’t initially anticipated.
Once hailed as the harbingers of a new enlightenment, the internet and other important tools of our networked world seem to have engendered new forms of social and political division, violence and abuse, misinformation and conspiracy theory. Amidst a sea of information, we seem to be plunging into a new dark age: a period where we are able to gather more and more data on our complex world, and yet seem to understand less and less of it.
Now more than ever, we need to learn to think critically through all the uncertainty. We need to investigate the technologies that shape our world and our thinking, and examine where they came from, how they function, and who they serve. These blinks will lay bare some of the vast and unexpected ways that new technologies affect us – and why and how they came to do so.
In these blinks, you learn about
- the military project that spawned the computation age;
- the rationale behind conspiracy theories; and
- the seedy underbelly of YouTube children’s entertainment.
What do computers have to do with the weather, and what does the weather have to do with the military?
Well, everything. For decades, devising methods to predict and control the weather was a chief concern for Western armies – and in that project lies the origin of modern computation.
The first person to make calculations on atmospheric conditions in order to predict the weather was mathematician Lewis Fry Richardson. This was during World War I, when he was volunteering as a first responder on the Western Front.
Richardson even came up with a thought experiment that could be considered the first description of a “computer”: he envisioned a vast hall filled with thousands of human mathematicians, each calculating the weather conditions for a particular square of the world and passing their results to one another for further calculations. Such a machine, Richardson dreamed, would be able to accurately predict the weather anywhere, at any moment in time.
His futuristic idea didn’t come into view again until World War II, when big military research spending spurred the advent of machine computation. The Manhattan Project, a US military research project that led to the creation of the atomic bomb, is closely linked to the development of the first computers. Many of these first computers, such as the Electronic Numerical Integrator and Computer (ENIAC) from 1946, were used to perform automated calculations to simulate the impact of different bombs and missiles under certain weather conditions.
Often, however, the military origins and purposes of the computers were concealed.
In 1948, for example, IBM installed its Selective Sequence Electronic Calculator (SSEC) in full view of the public in a shop window in New York. But while the public was told the computer was calculating astronomical positions, it was actually working on a secret program called Hippo – carrying out calculations to simulate hydrogen bomb explosions.
From the beginning, the complex, hidden workings of computers provided a convenient cloak for obfuscating their actual functions.
Most of the time, though, they didn’t even carry out their actual functions all that well. The history of computation is full of anecdotes that illustrate how computers’ oversimplified view of the world, their inability to distinguish between reality and simulation, and bad data can have serious consequences for their human users. For example, the US computer network SAGE, which was used to integrate atmospheric and military data during the Cold War, is infamous for its near-fatal bloopers, such as mistaking a flock of migrating birds for an incoming Soviet bomber fleet.
Climate change is what philosopher Timothy Morton calls a hyperobject: like the internet, it’s so vast and pervasive a thing that we simply cannot think of it in a meaningful way. Instead, we just witness its imprints on the world around us.
One such dramatic imprint is the Syrian conflict of recent years, described by many observers as the first climate war in history. Due to rising global temperatures, the Syrian countryside suffered massive, unprecedented droughts between 2006 and 2011. Huge swathes of farmland became unusable, and nearly 85 percent of livestock died. The resulting demographic pressure of farmers fleeing to the cities, combined with built-up resentment over President Bashar al-Assad’s handling of the situation, finally erupted into the armed conflict whose effects reached the West in the form of the refugee crisis.
But it’s not just ancient technologies like agriculture that are affected by changing weather conditions. New technologies, like the internet, are also affected by climate change. Even though we tend to think of the World Wide Web as a non-physical “cloud,” data transmission and storage rely on an extensive physical network of fibre-optic cables, antennas, and servers – an infrastructure highly vulnerable to extreme weather. The strength and effectiveness of Wi-Fi, for example, is known to decrease at higher temperatures, and many computational devices fail completely in extreme heat.
Conversely, digital technologies contribute to the climate crisis, too. Data centers alone use about 3 percent of the world’s electricity, accounting for about 2 percent of global carbon emissions.
As our digital culture becomes faster and more data-hungry, maintaining these data centers will require still more resources. Storing and transmitting the streaming data for just one hour of Netflix a week consumes more electricity annually than two new refrigerators do. It’s no wonder, then, that the amount of energy used to store and transmit our data is expected to triple in the next four years.
And while new technologies allow us to collect huge amounts of measurements and data on the crisis, climate change itself might make us literally unable to integrate all this information in a meaningful way. In 2015, atmospheric carbon dioxide passed 400 parts per million (ppm). At 1,000 ppm of CO2 – a concentration some indoor spaces in urban centers regularly exceed – human cognitive ability drops by 21 percent.
If you’re at all familiar with the world of computation, you’ll have heard of Moore’s law: the observation that the number of transistors on a chip – and with it, the raw computing power of our devices – doubles roughly every two years. Since its first formulation in 1965, this law has held approximately true. But as you can probably attest from casual observation, the fact that technology is getting ever smaller and faster doesn’t necessarily mean that it’s making our lives any easier.
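Doubling every two years is easy to underestimate, because it compounds. As a toy illustration of the arithmetic (the function and the dates chosen are ours, not Bridle’s):

```python
# A minimal sketch of Moore's law as compound doubling:
# if capacity doubles every 2 years, then after t years it
# has grown by a factor of 2 ** (t / 2).

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor predicted by a simple doubling model."""
    return 2 ** (years / doubling_period)

# From the law's formulation in 1965 to the book's publication
# in 2018 (53 years), the model predicts growth by a factor of
# 2 ** 26.5 -- roughly 95 million times.
print(f"{moores_law_factor(2018 - 1965):,.0f}")
```

The point of the sketch is only the shape of the curve: a fixed doubling period turns a handful of decades into an eight-digit growth factor, which is why Moore’s law fueled such sweeping optimism.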
Tech optimists who cite Moore’s law usually do so in the spirit of computational optimism. They believe that more computation is always better, and that the more data we collect and process of the world, the more our understanding of it increases.
In science, this belief has led to a research system that values automated testing that generates tons of data over more messy human empiricism. In drug research, for example, the role of human scientists is now often reduced to programming and overseeing the work of machines engaged in a process called High-Throughput Screening. The computer tests the effects and interactions of thousands of chemical compounds a day, in the hope that it will eventually stumble upon a combination that is useful in treating a particular disease.
The problem is, this approach doesn’t seem to work at all. Every nine years since the 60s, the number of new drugs approved for human use per billion US dollars in spending on research and development has halved. Commentators have cynically started calling this effect Eroom’s law – Moore’s law spelled backward.
The big data fallacy that encourages quantity over quality is palpable in all of science. While the number of scientific studies, journals, and papers has been steadily increasing over the past decades, so has the number of mistakes, plagiarism and fraud in scientific research. Experts are increasingly talking about the replication crisis of modern science. This refers to the fact that when many scientific studies are conducted a second time by a different group of researchers, they cannot reproduce the original results.
In 2011, for instance, the University of Virginia reran five landmark cancer studies of recent years. Only two experiments could be successfully replicated; two others were inconclusive; and one failed completely.
Even as scientific research is gathering more and more data about the world, the pace of scientific discovery is actually slowing. Instead of endowing us with a better grasp on the world, the current overflow of information is negatively affecting our ability to process what’s going on around us.
On the surface, Slough is a pretty unremarkable, small town some 25 miles outside of London. Unbeknownst to most, however, the many vast and anonymous warehouses that line its main road form the physical base of some of the most important parts of our digital world. For example, one of them, given the unassuming name LD4, houses the data servers of the London Stock Exchange.
The fibre-optic cables that lead to and from LD4 carry financial information of almost unimaginable value, shuttling it to and from other financial data centers around the world at nearly the speed of light. This super-speedy network between companies, investors and markets has given rise to a new type of financial exchange: high-frequency trading.
Today, financial traders can react almost instantly to drops and spikes in the market. To do so in a matter of milliseconds, they enlist the help of algorithms and bots that monitor prices, make mock offers and shadow transactions to confuse other traders, and even scan and interpret news headlines to anticipate the economic effects of major events around the world.
But it’s becoming more and more apparent that even insiders can’t keep up with the logic of their computers in the hyper-accelerated world of digital finance. On May 6, 2010, for instance, the Dow Jones plunged some 600 points in a matter of minutes – briefly erasing hundreds of billions of dollars in market value – then suddenly recovered. Such flash crashes are growing more common, and no human is able to pinpoint what exactly causes them.
While machines increasingly confuse humans in some areas, they replace us outright in others. Just take behemoth Amazon, which already uses fleets of robots to store, sort, and pick out products. Where it still uses human “pickers,” Amazon does so only because they’re currently cheaper than machines – and it essentially treats those workers like robots: they’re guided and monitored via a handheld device, which sends them to different locations in the warehouse in a way that maximizes efficiency and minimizes socializing with coworkers.
Worryingly, neither politicians nor companies offer much perspective on what kind of social security system could replace full-time employment. And so technology, far from being the great equalizer we’ve been promised, is just another tool for concentrating power in the hands of the few.
There’s a well-known story about an AI built by the US Army that illustrates the dangers and limits of machine learning – that is, of teaching computers to draw their own conclusions from data.
Allegedly, the army tried to train a computer to recognize camouflaged tanks in a forest. To do so, they presented it with picture after picture of forests with tanks hidden in them, and picture after picture of forests with no tank, until the AI had learned to tell the test images apart perfectly. But out in the field, the AI failed completely. It was no better than a human at guessing whether there was a tank in a particular forest or not.
Only later did someone notice that all the training photos with tanks had been taken on a sunny day, and all those without a tank on a cloudy day. The machine hadn’t learned the difference between a forest with a tank and a forest without one: it had learned the difference between a sunny day and a cloudy one.
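The failure mode in this story – a model latching onto an incidental feature of its training data – can be sketched in a few lines. Everything below (the brightness values, the threshold, the “learned rule”) is hypothetical, invented purely to illustrate the anecdote:

```python
# Training photos as (average_brightness, has_tank) pairs.
# The tank photos happened to be shot on sunny (bright) days,
# the tank-free photos on cloudy (dark) days.
training = [(0.9, True), (0.8, True), (0.3, False), (0.2, False)]

def learned_rule(brightness: float) -> bool:
    """The rule the model effectively learns: 'bright means tank'."""
    return brightness > 0.5

# The rule separates the training set perfectly...
train_accuracy = sum(learned_rule(b) == tank for b, tank in training) / len(training)

# ...but in the field, a tank on a cloudy day is missed,
# and an empty sunny forest is flagged as containing one.
field = [(0.3, True), (0.9, False)]
field_accuracy = sum(learned_rule(b) == tank for b, tank in field) / len(field)

print(train_accuracy, field_accuracy)  # 1.0 on training, 0.0 in the field
```

Perfect training accuracy told the researchers nothing, because the feature the machine keyed on (brightness) was merely correlated with the label in their photos – exactly the gap between a machine’s simulation of the world and the world itself.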
This story shows us that when we train machines to think, we cannot expect them to think like us. In many cases, we might never be able to understand how or why they reached their conclusion at all. Computers build their own multidimensional simulations of the world that are entirely different from our human experience of it.
But the mysterious nature of a machine’s mind is more than just an idea worth considering: it can also be used to justify the conclusions machines come to, even when those conclusions are controversial or dangerous.
In 2016, two researchers from a university in Shanghai made a stir when they claimed to have developed software that could tell the difference between a criminal and a non-criminal face. When their experiment was criticized on the basis that the software would surely over-represent marginalized communities, they claimed that they had constructed it purely for academic purposes, and that it, and machine learning in general, was inherently “free of bias.”
The idea that algorithms and computation are unbiased is shared by many AI enthusiasts. What they fail to acknowledge is that machines tend to be trained on data, and the only data we have is of our past. Since our past is rife with violence, injustice, and racism, the machines we train with this data – whether we intend it or not – are going to replicate that violence, injustice, and racism and project them into the future.
As recently as a few years ago, for example, Asian-Americans tried in vain to take family photos with their Nikon Coolpix S630. Instead of taking a picture, the “smart” camera repeatedly displayed the error message “Did someone blink?”
In a previous blink, we considered the hydrogen bomb simulation computer, SSEC, which conducted its top-secret calculations in full view of the public.
Ever since World War II, intelligence agencies such as the CIA and NSA have spent millions of dollars on developing secret technologies, the existence or true purpose of which the world only becomes privy to decades after the fact – if ever. For example, it was the CIA, not the US Army or Air Force, that developed the first drones, years before they became a staple in modern warfare.
However, it’s not only futuristic technologies that are stowed away in a classified world. Huge chunks of our history are progressively disappearing into secret vaults. The US government marks about 400,000 new documents every year as top secret – a number that’s rising steadily.
The situation in the UK is not much better. In 2011, when a group of Kenyan survivors finally won the right to sue the British government for torture they had endured under colonial authorities in the 1950s, it came to light that around 1.2 million documents on the British concentration camps in Kenya were locked away in a secret government facility. Of those documents, many were “destruction certificates” of other documents, attesting to an even bigger number of missing records and erased history.
This example tells us that even though we now have more supposedly neutral data on the world, what reaches us is still culled and controlled. In the case of the Kenyan concentration camps, the concealment and suppression of important historic documents by the British government has effectively prevented the country from appropriately reckoning with its colonial past.
Another way intelligence agencies control the world around us is through data collection. The NSA’s extensive system of mass surveillance came into full view with the revelations of whistleblower Edward Snowden in 2013. And a few months later, similar programs to spy on the communications of ordinary citizens were uncovered in other major countries across Europe and the Americas.
The public outrage cooled quickly, though, and the USA Freedom Act, passed in 2015 as a response, left the NSA’s surveillance rights largely intact. Much like climate change, mass surveillance seems simply too vast and complex a threat to think about.
Since the beginning of history, humans have been inclined to spin complex events into simple stories to make sense of the world. In a way, our conception of history is, itself, an example of oversimplification.
Of course, none of these narratives can ever encompass the full, multidimensional truth. But in our networked, information-saturated present, many stories people tell themselves about the world seem further off the mark than ever.
Chemtrails are among the oldest and most pervasive of conspiracy theories that crop up in all corners of the internet today. Proponents of this theory believe that there is a network of commercial or military planes spraying chemicals in the air in order to cause diseases, mind-control people, or execute some other diabolical plan.
Of course, human-made chemical clouds are very real. But the trails people observe across the sky are actually contrails – exhaust fumes and condensation from the planes. Far from being concerned with the real threat of the carbon emissions caused by the aviation industry, chemtrailers literalize their general anxiety into a neat and arguably unhinged theory of governmental mind control.
Another group of conspiracy theorists believe themselves to be subject to “gang stalking,” surveillance and mind control by nefarious entities. Considering what we now know about the NSA’s system of mass surveillance, this basic perception is not too far removed from what’s actually going on. But the conspiracy theorists’ simplified version of the story – the one that paints a clear black-and-white picture of the world and involves them personally and directly – still seems easier to grasp than the existence of a vast global system of mass surveillance that has no clear perpetrator or purpose.
Of course, the internet’s echo chamber effect aids the proliferation of outlandish theories about the world. Aspiring conspiracy theorists are easily drawn into interactive, supportive, and self-confirming online communities that lead them to ever more extreme views.
Right-wing populists and religious fundamentalists exploit our desire for simple narratives in a complex time. Donald Trump himself has tweeted about how climate change is a conspiracy against American business, manufactured by the Chinese. And many of his campaign promises, such as the border wall with Mexico, were clearly inspired by prominent online conspiracy theorist Alex Jones of the website Infowars.
Though conspiracy theories provide the comfort of reducing our frightening, chaotic world to a simple narrative, those narratives can turn out to be just as frightening.
Have you ever fallen into a YouTube spiral? Clicking from one suggested video to the next, you find yourself in ever-weirder corners of the video platform, watching things that barely qualify as human entertainment.
One reason why there are so many vloggers, self-made entertainers, and increasingly, bots vying for your attention on YouTube, is that successful videos can make a lot of money from advertising. The music video for the viral Korean pop hit “Gangnam Style,” for example, earned eight million dollars from its first billion views.
Children’s entertainment has proven to be a particularly lucrative sector of the platform. Children as young as two are spending more and more time online. Loud, colorful videos, which they often watch over and over again, make them easy to target and engage.
Many of these so-called children’s videos are made by bots created by companies looking to make a quick buck. One of YouTube’s most successful channels, Little Baby Bum, for example, has churned out thousands of bot-created animated sing-along videos, all following the same basic melodies and patterns.
Often, these companies use algorithms of their own to capitalize on YouTube’s recommendation algorithms. The result is nonsensical titles such as this: “150 Giant Surprise Eggs Kinder CARS StarWars Marvel Avengers LEGO Disney Pixar Nickelodeon Peppa.”
But beyond the blatant copyright infringement of these videos, their content can be downright terrifying. One example, from among millions of similar videos, is titled “Wrong Heads Disney Wrong Ears Wrong Legs Kids Learn Colors Finger Family 2017 Nursery Rhymes.” In it, the detached heads of characters from Aladdin float around the screen. When a head attaches itself to the right body, the little girl from Despicable Me appears in a corner of the screen and cheers. When the head does not match the body, she lets out a brief, automated wail.
Perhaps even more worrisome is that YouTube’s suggestion algorithms can’t distinguish between real kids’ shows and videos meant to parody them – and the latter can be violent. In one, the beloved animated character Peppa Pig is shown going to the dentist, who then tortures her by ripping out all her teeth.
Most of these videos aren’t targeted at children – but with little to no content or age control by YouTube itself, they inevitably reach them. As a result, the fateful combination of capitalist incentives and predatory algorithms has engendered a whole new kind of systematic violence.
At Google’s 2013 Zeitgeist conference, an annual gathering of tech elites and politicians to discuss the state and future of technology, then executive chairman Eric Schmidt made a startling claim: if camera phones had been around in 1994, the horrifying Rwandan genocide of that year would never have happened, because people would have been able to film and share news of the atrocities taking place.
Schmidt’s idea is rooted in a belief shared by many of his peers: that making something visible automatically fixes it, and that new technologies are making the world a better, safer, and easier-to-manage place. As we’ve seen in the previous blinks, this couldn’t be further from the truth.
Just take a closer look at Schmidt’s example of the Rwandan genocide. In 1994, over the course of 100 days, an estimated one million Rwandans were murdered in a brutal massacre spurred on by inter-ethnic tensions, while the rest of the world stood by and did nothing – supposedly, because they didn’t know about it.
But since then, investigations have revealed that several NGOs, foreign embassies, and the UN were closely monitoring the situation. The US government, for example, was tracking the developments via high-resolution satellite pictures. Contrary to what Schmidt suggests, the genocide in Rwanda was not abetted by a lack of knowing, but instead by a lack of doing.
Apathy and inaction in the face of overwhelming information is now a state familiar to all of us. And so, rather than helping us make sense of the world, data and computation have only made things more complicated.
British mathematician Clive Humby was hinting at the drawbacks of computation when he coined the phrase “data is the new oil” in 2006. In his original statement, he went on to explain that, like oil, data can’t be used in its unrefined state. In order to be of value, it needs to be broken down and analyzed.
Instead of focusing on collecting progressively more data in order to predict increasingly complex events, we need to learn to think consciously and critically about where our data is coming from, what it’s being used for, and who owns it. We need to closely examine the global technological networks that produce and use this data, and the ways we can change them for the better. This is the only way we can bring meaning to this new dark age of our making.
The key message in these blinks:
While new technologies of the digital age allow us to connect, collect and share information, they’re ushering us into a new dark age, where the world seems more complex and confusing than ever before. This is because, as examples from early computing, history, and science demonstrate, more data doesn’t always produce better results. Moreover, when new technologies are used for capitalist aims, they tend to perpetuate and deepen existing power structures. That’s why, if we want to live meaningfully in the present, we need to start questioning the origin, function, and purpose of our technologies.
What to read next: Tools and Weapons, by Brad Smith and Carol Ann Browne
Ranging from climate change to mass surveillance to child abuse, New Dark Age revealed some of the darker sides of the digital age.
If you’re interested in hearing more ideas on how we can manage the threats that some of our new technologies pose, we recommend Tools and Weapons. Microsoft insiders Brad Smith and Carol Ann Browne offer unique insight into the promises and dangers of digital technology, and explain how tech companies and governments can work together to ensure that the good outweighs the bad.
For an insider scoop on how technology shapes our world, head over to our blinks for Tools and Weapons.