Tuesday, October 15th from 6 PM, Juju’s Bar and Stage, Ely’s Yard, 15 Hanbury St, London E1 6QR.
The Green New Deal is a term that has been thrown around by policymakers in the US, Europe and the UK. But what is the Green New Deal, and what are its policy implications? How far must British and European policymakers go in order to reduce emissions by 2030? Which industries will decline in the process, and who will be affected? Is it feasible, from both an economic and a political perspective, for politicians and policymakers to pursue a Green New Deal? Are there security implications to restructuring our economic policies to fit new green policies? Are there security implications if we don't?
SPEAKERS
Dr Leslie-Anne Duvic-Paoli is a public international lawyer, with expertise in international environmental law and climate and energy law, based at King’s College London.
Dr Simon Chin-Yee is also based at King’s College London, in the European Centre for Energy and Resource Security (EUCERS) in the War Studies department.
Christopher Barnard is the founder and president of the British Conservation Alliance, an organisation working to promote pro-market environmentalism and conservative conservation.
Peter Apps has been the Executive Director of PS21 since 2015, and is a Reuters global affairs columnist.
James Rising is an Assistant Professorial Research Fellow at the Grantham Research Institute at the LSE.
Alex Chapman is a consultant at the New Economics Foundation, with experience in qualitative and quantitative research, project evaluation and policy analysis.
GDPR notice: By signing up for this event, you are giving PS21 consent to share your details with the venue for security purposes. We will also add you to our events mailing list, from which you can unsubscribe at any time. If you have any queries or would prefer not to be added, please contact ps21central@gmail.com
Over the past two years, a wave of disinformation campaigns has upended democratic electoral systems across the globe, prompting both governments and electorates to demand action to counter the growing prevalence of fake news. So far, several governments have begun enacting laws to address the issue, from Malaysia’s anti-fake news bill passed in April to French President Emmanuel Macron’s advocacy for legislation criminalizing falsified content.
While these clampdowns are highly visible, such responses to a growing and diffuse threat from falsified content essentially amount to knee-jerk attempts to criminalize the practice. Beyond their potential to severely restrict free speech under the banner of fighting fake news, these laws do not address the underlying factors that allow fake news campaigns to succeed in the first place, such as poor digital and information literacy among the general public.
Likewise, criminal legislation alone will not equip governments or the public against the next wave of disinformation threats derived from emerging technologies, such as "deepfakes." To respond to, or even counter, this threat effectively, more attention must be directed to the intersection of disinformation and emerging technologies, such as artificial intelligence and machine learning.
Deepfakes are digitally manipulated videos, images, and sound files that can be used to appropriate someone’s identity—including their voice, face, and body—to make it seem as if they did something they did not. So far, deepfakes have largely consisted of manipulating images into celebrity sex tapes, but as professors Danielle Citron and Robert Chesney warn, the leap from fake celebrity porn videos to other forms of falsified content is smaller than we think.
Until recently, such realistic, computer-generated content was only available to major Hollywood producers or well-funded researchers. However, rapid advancements in technology have resulted in applications that now allow nearly anyone, regardless of their technical background, to produce high-quality deepfakes that can range from the innocuous—such as depicting a friend in an embarrassing situation—to the incendiary—such as a world leader threatening war. The proliferation of seemingly authentic, but actually manipulated, content at a time when it is already difficult to determine content authenticity is highly concerning.
As the prevalence of disinformation in society has become clearer, governments and non-profits have started to fund research on the impact of fake news on societies and political systems. But this only addresses part of the problem, leaving out key emerging technologies, such as artificial intelligence and machine learning, that are already fueling the next disinformation wave. For example, in late March, the Hewlett Foundation announced $10 million for research on digital disinformation and its influence on American democracy, but with no specific calls for research on deepfakes or other emerging technologies. Given the potentially devastating threat deepfakes could pose, this is a missed opportunity to get ahead of the problem and improve our understanding of deepfakes and their potential for harm. Similar initiatives in the European Union heavily emphasize understanding and combating the current brand of fake news, rather than preparing for these more advanced disinformation threats.
These research and development efforts should also go hand-in-hand with strong, public digital information literacy programs on how to identify distorted media and falsified content, including from emerging technologies. In 2017, California lawmakers introduced two bills requiring teachers and education boards to create curricula and frameworks focused on media literacy. However, to have the most impact, governments must also engage non-profit and private sector expertise to help the public better understand the technical issues at play, thereby improving their ability to distinguish real content from fake content.
In its coverage of the rise of deepfakes across the internet, the tech media site Motherboard stated that we are “truly fucked,” predicting that it won’t be long before the public becomes embroiled in chaos over these emerging forms of disinformation. But we don’t have to feed the fear. Rather than pass hasty and ineffective legislation, governments can work with nonprofits and the private sector to direct resources to relevant research on emerging technologies. Equally important will be more support for programs that educate the public on identifying disinformation threats based on both old and new technologies.
Deepfakes are at the cutting edge of the disinformation landscape right now, but who knows for how long? If governments and non-profits act strategically, they could even find themselves ahead of the game.
By Spandana Singh, YPFP Cybersecurity & Technology Fellow
The future of warfare may be coming faster than we think.
That, at least, felt like the conclusion of Tuesday’s panel on “Imagining War in 2030”, organized by the Project for the Study of the 21st Century and the British Army Intrapreneurs’ Network [BrAIN]. With dozens of military and civilian attendees packed into a relatively airless conference room in Whitehall, a panel of leading experts sketched out what looks to be a period of massive technical, geopolitical and deeply unpredictable change.
Royal United Services Institute Futures and Technology fellow Elizabeth Quintana sketched out some of the technical breakthroughs coming down the line as nations invest in cyber, electromagnetic and other emerging technologies, as well as hypersonic and other weaponry. Russia, she told the audience, already had a semi-autonomous humanoid robot that could fire a gun and which it intended to send into space.
Former Director Special Forces and Commander Field Army Lieutenant General Sir Graeme Lamb outlined how the pace of change was now proceeding much faster than anyone had anticipated. The year 2030 might be only 13 years away, but quantum computing, artificial intelligence and other fields were all producing breakthroughs at considerable speed. These would bring potentially massive societal and other changes, and government and military institutions were not currently keeping pace.
Kings College London lecturer and former Foreign and Commonwealth Office official Samir Puri outlined how he had seen some of these changes in action as an OSCE observer in Ukraine. Different nations would demonstrate their geopolitical ambitions in different ways in the years to come, he suggested, pointing out that while a host of states including Britain, Iran, Russia and others have their own imperial memories, they were memories of very different empires and shaped very different regional and global aspirations.
But not everything would change, he cautioned – it was entirely possible the US and its allies would still be embroiled in the Afghanistan war at the end of the next decade.
Balancing technology, structures, career paths
Unsurprisingly, there were a range of different views on how the military and other institutions should and could adapt to such an unpredictable future. Some questioned to what extent traditional military “pyramid” shaped hierarchies could possibly adapt [although Lieutenant General Lamb argued that while flatter hierarchies have their strengths, outright conflict required much greater resilience than they could offer].
While traditional Western militaries concentrated on traditional war fighting [phase 1 operations and upwards, in UK military terminology], many of the West’s adversaries were becoming much more adept at operating below that threshold, within “phase zero” operations. That trend was only likely to intensify in the years to come, he argued.
Most attendees felt that keeping pace with current changes in cyber and other domains was proving challenging enough, but relatively near-future breakthroughs in artificial intelligence and machine learning were expected to bring even greater changes. While current drone warfare has actually proved very "human intensive", given the number of intelligence and other individuals involved in targeting and assessment, there will be inevitable moves towards artificial intelligence performing some if not many of those tasks. Where lines are drawn – particularly on decisions to take human life – will be highly contested, and non-Western potential foes may be much more willing than we are to take such steps. ["The Russians tend to trust machines more than they trust people," said Elizabeth Quintana, pointing to a trend she traced back to Soviet times.]
Integration and flexibility would be key to handling these new trends. Lamb said he expected a special forces team of the near future would also be integrated with robotic/artificial intelligence capabilities – although what exactly that would look like was another matter.
Some attendees questioned whether the modern British Military was truly flexible enough to keep track of such new trends – although there was clearly plenty of enthusiasm for doing so.
Building the systems and processes for that would be key. As US military historian Thomas Ricks [himself paraphrasing US General Omar Bradley] once said, while amateurs talk tactics and professionals talk logistics, real insiders focus on career structures to determine what really gets done.
Taking the debate forward
This event was the first of several planned by PS21 to explore the world of 2030 [you can read a range of pieces exploring that world on the PS21 website here]. We will also be holding further events with BrAIN later this year and into 2018.
Peter Apps is Reuters global affairs columnist and executive director of the Project for the Study of the 21st Century. He is also a reservist in the British Army and a member of the UK Labour Party. You can follow him on Twitter here.
What explains the rise of virtual, ideological terrorist networks in the West?
By Linda Schlegel. Linda is currently pursuing an MA in Terrorism, Security, and Society at King’s College London.
With the rise of the so-called Islamic State, new questions for terrorism research emerged. The organization's use of social media, in particular, has both fascinated and worried practitioners and academics alike. One of the most worrying features of this new, virtual display of Salafi-jihadist ideology is the increasing number of people from all over the world who seek to join the movement. We as societies need to ask ourselves what may drive young people towards this type of ideology. The following explores one possible underlying mechanism for increased online radicalization from a sociological point of view, showing that today's youth may be more easily influenced in an online setting than older generations were.
Habitus in the age of modern communication technology
Pierre Bourdieu showed that humans are socialized in a certain milieu defined by their standing in society and thereby develop a shared set of behaviours with those socialized in similar circumstances. This shared set of practices of social interaction is called habitus. Those who share a habitus understand each other more intuitively due to their similar social dispositions, while those socialized in very different circumstances, who therefore develop a different habitus, do not. For Bourdieu, the habitus is based on class, but globalization has eroded traditional social milieus. Modern communication technology (MCT) is available to a majority of those living in the West, crumbling the traditional class-based limits on access. It can be argued that near-equal access to MCT in the West has produced a similar habitus among those who grew up using it. Social media makes socialization processes similar in one specific respect, the online realm, creating shared dispositions and therefore the ability to interact intuitively in the virtual world. Following Prensky, individuals socialized using MCT are called 'digital natives'.
What does this mean for the rise of virtual, ideological networks in the West? A habitus creates a shared basis for interaction and similar behaviour. This, in turn, leads to more trust in those who display similar social dispositions, and therefore makes it easier to construct one's identity on the basis of a group sharing the same habitus. The same is true for the online realm, which partially explains the rise of terrorist networks: digital natives are likely to consume and perceive online propaganda differently, display more trust towards it, and more easily commit to an ideology they are exposed to online. Terrorist networks have expanded in the West partly because digital natives are more likely to be able to form emotional bonds online and construct their identity accordingly.
There are three interrelated factors that contribute to a bottom-up rise of extremist networks in the case of digital natives: Familiarity, trust and cognitive belonging.
Familiarity
Firstly, digital natives find, access and navigate online environments more easily than older generations. The digital world constitutes a familiar environment for potential recruits, and they navigate it intuitively. Importantly, they also find familiarity in interactions with other digital natives who share their habitus and are therefore likely to communicate in a similar manner, something that develops naturally from a shared habitus and cannot be learned.
Trust
Secondly, while mature users tend to be cautious and aware of virtual dangers, younger generations associate online interaction with positive feelings and display a lot of trust in their virtual peers. This combination of trust and positive feelings associated with online contact constitutes a ‘cognitive opening’ for digital natives, making them more susceptible to ideas propagated by their peers. This condition is exacerbated by the tendency of online communities to create ‘echo chambers’: Once within an extreme environment, counter-messages are unlikely to reach the potential recruit. Similar to Facebook, which shows its users only what they ‘liked’, jihadi echo chambers display only messages in alignment with their ideology. Trust in the messenger, a fellow digital native, leads to more trust in the message, which is also increased by the virtual ‘echo chamber’.
Cognitive belonging
Trust is a necessary pre-condition for the third factor: cognitive belonging. Digital natives display intuitive knowledge of online interactions due to their partially similar socialization, their habitus. Some potential recruits become involved in terrorist movements because they seek a feeling of belonging or identity, which is more easily constructed in a group of individuals similar to oneself. Despite its global reach, the shared habitus enables identity construction resting on the perception of a virtual 'imagined community' of similarly socialized individuals. This identity construction is achieved through both passive and active engagement with the ideology. On the one hand, when ideology is conveyed in familiar terms, it is easier to relate to. This is achieved, for instance, by having Western foreign fighters share their stories. This familiarity in messaging, only possible through similar socialization, is a tremendous advantage for recruitment. Messaging matters not only in terms of content, but also in terms of delivery. On the other hand, digital natives are used to highly interactive environments. If a group provides this room for expression, it creates an environment of constant negotiation and re-negotiation of ideology and identity. Today's radical online communities are not only passive receivers of propaganda; they are active negotiators of the ideology.
Not every digital native is more susceptible to radical ideologies. In an online setting, however, digital natives are more likely to perceive an online community as important and real, and, if the community is radical, are more likely to adhere to radical ideas through online interaction. One possible implication is that the constructors and conveyors of counter-narratives should be digital natives as well. An excellent counter-narrative will not produce the desired results if it is not received by the intended audience in a way that matches their expectations of online interaction. The same messages are likely to have very different results depending on which generation verbalizes them. An educational effort by digital natives for their peers, with content constructed by them, is likely to increase the legitimacy of the counter-message through greater trust and familiarity, and could therefore improve the effectiveness of counter-radicalization. Social media is changing our lives, and it is changing the faces and mechanisms of terrorism. We need to be aware of these developments in order to counter them directly and effectively.
PS21 is a non-national, non-governmental, non-ideological organisation. All views expressed are the author’s own.
Monday 10th October 2016. Drinks from 6 p.m., discussion from 6:30 p.m. Neo Bankside, SE1.
From the shanty towns of Lagos to the rise of Brexit and Trump, crowdsourcing to video on demand, changing technology is revolutionising society and politics round the world. How are modern political and media networks evolving? What does that mean for changing power structures? How does it differ between the developing and the developed world? Where will it all go next? PS21 pulls together an expert panel to examine the changes seen so far and asks where these trends will take us next.
Peter Apps (moderator) – Reuters Global Affairs Columnist and PS21 Executive Director
Emmanuel Akinwotu – Journalist based in Lagos, Nigeria, writing for the Guardian and New Statesman
John Elledge – Editor, Citymetric, member of the PS21 governing board
Eleanor Harrison OBE – CEO of award-winning charity GlobalGiving UK, the world’s first and largest global crowdfunding community for non-profits, GlobalGiving.co.uk
Aaron Bastani – Left-wing blogger and founder of Novara Media
Jack Goldstone is an expert on revolutions at the Woodrow Wilson Center and George Mason University and a global fellow at PS21. This article was first published on www.reuters.com.
As world leaders gather in Paris this week to address climate change, they will labor under the shadow of recent attacks by Islamic State. Yet as they think about climate issues, they should remember that the connection between climate change and Islamic State – and more broadly, between climate change and political instability – is not just a coincidence. It may instead be the key reality of the 21st century.
The rise of IS was a direct result of the failure of the Syrian regime, as it was beset by urban uprisings in 2011. Yet those uprisings did not come out of nowhere, and were not merely inspired by protests in Tunisia, Libya and Egypt. Syria was an increasingly prosperous country in the 1990s, with its various ethnic and religious groups working together in cities.
Yet between 2006 and 2009, Syria was crippled by its worst drought in modern history. A recent article in Proceedings of the National Academy of Sciences showed that this drought was not natural. Rather, according to computer simulations, hotter temperatures and the weakening of the winds that bring moisture from the Mediterranean were most likely the regional consequences of rising greenhouse gas emissions.
Combined with poor water management and government neglect of farm conditions, the drought caused a collapse of farming in northeastern Syria. Seventy-five percent of farmers suffered total crop failure, and 80 percent of livestock died. Around 1.5 million farming families migrated to cities to look for work and food, joining millions of refugees from Palestine and Iraq. The added burden these refugees placed on Syria’s cities, and the distress of the farmers who lost their lands due to the drought, helped fuel the spread of rebellion against the Assad regime.
To be sure, climate change is never the single most important cause of conflict; it is what academics call a “structural threat.” Governments that can respond to such threats – because they have popular and elite support, have resources to respond to challenges, are willing to deploy those resources to distribute food and aid to the needy, and have diversified economies that can produce jobs – are not going to be shaken because of global warming. If we lived in a world where all regions were led by such governments, then climate change might be an economic burden and force changes in our lifestyle, but it would not bring the threat of state breakdown and civil war.
Unfortunately, Central America, most of Africa, the Middle East, and much of South Asia are dominated by precisely the wrong kind of governments. These regions have too many fragile states where large segments of the elite or populace distrust the government because of ethnic, religious, or economic exclusion; where governments have limited economic resources to respond to humanitarian crises; where governments are disinclined to respond to problems among marginalized groups or regions of their country; and where the economies are too dependent on agriculture or mining, and so cannot provide work for people if they are forced to move.
In such countries – or worse, in clusters of such countries – a spike in food prices, a severe drought or a ravaging flood can provide a harsh test of government. And where one government fails, the ensuing conflicts can spread to other fragile states and inflame an entire region.
Today the world is seeing an epidemic of failed states: Libya, Syria, Iraq, Yemen, Afghanistan, Nigeria, the Central African Republic, Somalia and Mali have all lost control of parts of their territory. In every case, the weakening of state authority has created space for militants, and particularly for IS, to recruit followers and conduct operations. The conflicts have also sent massive waves of refugees to a Europe that is unprepared to handle them.
Think now of a world in which the population under age 24 in Africa has increased by 500 million people, and the populations of Syria, Afghanistan, Iraq, Palestine, and Yemen have increased by over 100 million people. That is the UN’s projection for 2050. Add to this mix a combination of severe droughts, devastating floods, crop failures, and massive migrations that create collisions and heightened competition among ethnic and religious groups struggling for land, resources and incomes. Then think of how the governments of these regions could and would respond to such crises, and whether Europe and other safe havens could absorb even a tiny fraction of the resulting refugees.
If such a world exists one day, the current crisis in Syria and the actions of IS terrorists may be multiplied many fold.
World leaders in Paris should therefore focus on their opportunity to remove one of the key drivers of potential state breakdowns and terrorism in the future, by adopting vigorous measures to halt global warming.
It is already too late for modest measures to address global warming. As the study of Syria’s drought shows, the changes in weather patterns that deprive fragile regions of adequate rainfall are already underway. Preventing further disasters will require more than just holding the line at today’s levels of carbon emissions in China, the United States and Europe. Africa’s current carbon footprint is tiny: its population is so lacking in access to energy that each African produces less than one-seventh as much carbon dioxide as each Chinese. Yet by 2050, if Africa were to emit as much carbon per capita as China does today, Africa’s carbon emissions would be as much as China and the United States combined produce today.
In other words, if Africa advances just to Chinese levels of fossil fuel consumption by 2050, then even if today’s major emitters manage to stop all of their own emissions growth, total global emissions will still grow by 40 percent by mid-century, blowing past the carbon budget required to keep total temperature rise within the two-degree limit recommended by the Intergovernmental Panel on Climate Change to avoid severe climate deterioration.
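To make the scale of that arithmetic concrete, here is a minimal back-of-the-envelope sketch. All of the figures in it are illustrative, roughly 2015-era approximations chosen for the exercise (per-capita emissions, national totals and a 2050 population projection); none of them are taken from the article itself.

```python
# A minimal back-of-the-envelope sketch of the scaling argument above.
# Every number here is an illustrative, roughly 2015-era approximation;
# none are taken from the article.

china_per_capita_t = 7.5       # tonnes CO2 per person per year, approx.
africa_per_capita_t = 1.0      # roughly one-seventh of China's level
africa_pop_today_bn = 1.2      # billions of people, approx.
africa_pop_2050_bn = 2.5       # approximate UN-style 2050 projection

china_total_gt = 10.0          # Gt CO2 per year, approx.
us_total_gt = 5.5
world_total_gt = 36.0

# Africa today vs. Africa at Chinese per-capita levels in 2050
africa_today_gt = africa_per_capita_t * africa_pop_today_bn
africa_2050_gt = china_per_capita_t * africa_pop_2050_bn

print(f"Africa today:               ~{africa_today_gt:.1f} Gt CO2/yr")
print(f"Africa at China's level:    ~{africa_2050_gt:.1f} Gt CO2/yr")
print(f"China + US combined today:  ~{china_total_gt + us_total_gt:.1f} Gt CO2/yr")

# If every other emitter merely held flat, global emissions would still grow:
growth = (africa_2050_gt - africa_today_gt) / world_total_gt
print(f"Implied growth in world emissions: ~{growth:.0%}")
# With these rough inputs the result lands in the same ballpark as the
# article's figure of roughly 40 percent growth by mid-century.
```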
To accommodate necessary growth in energy use in Africa – vital to making the countries of Africa more resilient and better able to provide jobs and security to their growing populations – the world must move quickly on two fronts. The major emitters must first find ways to quickly reduce their carbon output from today’s levels. And they must develop low-carbon pathways for economic growth so the rest of the world can develop without creating new structural threats for political crises.
These goals can be met. If the United States, Europe and China all reduced their carbon emissions by 20 percent, other developing countries could increase their carbon emissions by almost one-third without an increase in world carbon output. That should be the goal for the next 10 years.
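A similarly rough sketch, under the same caveat that every input is an illustrative approximation rather than a figure supplied by the article, shows where the "almost one-third" headroom comes from.

```python
# A minimal sketch of the offset arithmetic behind the 20 percent figure.
# All inputs are illustrative, roughly 2015-era approximations (Gt CO2/yr);
# the "other developing countries" share in particular is an assumption.

china_gt = 10.0
us_gt = 5.5
europe_gt = 3.5
other_developing_gt = 11.5   # assumed share held by other developing countries

cut_fraction = 0.20
headroom_gt = cut_fraction * (china_gt + us_gt + europe_gt)

allowed_growth = headroom_gt / other_developing_gt
print(f"Emissions freed up by a 20% cut: ~{headroom_gt:.1f} Gt CO2/yr")
print(f"Allowed growth for other developing countries: ~{allowed_growth:.0%}")
# With these rough inputs the headroom comes out close to the article's
# "almost one-third" without any increase in total world output.
```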
Beyond that date, it is critical to find ways by which all countries can escape dependence on fossil fuels for their economies, and reduce global emissions while still promoting global economic growth.
Terrorism thrives among weak and failed states, and among displaced people. If we are to reduce both in the future, we need to make sure that our climate does not further deteriorate. If we fail to prevent continued global warming, the rise in political temperature may far outstrip the warming of the weather outside.
• The Dark Web is not fundamentally evil, though it can be used in ways that are morally reprehensible
• Our privacy has been largely compromised in the age of the Internet, but the Dark Web offers confidentiality to both good and bad people
• In light of the need to protect civil liberties, there are actually many legitimate uses of the dark web
• At the same time, we have seen no end to the horrendous crimes facilitated by the Dark Web
• Law enforcement faces immense difficulties in keeping up with the complexity of the crimes committed on the Dark Web, especially in terms of the ambiguity of jurisdiction
• We might hope to differentiate between good and bad uses of the Dark Web with advances in psychological analyses and the modernization of existing legislation
On Monday 21st September, Project for the Study of the 21st Century hosted a panel discussion on the dark web.
Please feel free to quote from the report below citing PS21. Contact PS21Central@gmail.com if you wish to reach any of the participants.
Metsa Rahimi (Moderator): Regional head of intelligence, Deutsche Bank
John Bassett: Former GCHQ official, now at Oxford University
Tim Hardy: Technical writer, commentator and activist
Mike Gillespie: Director of cyber research and strategy, The Security Institute
The Dark Web is essentially an underlying level of the Internet that enables anonymous access for users. While the Dark Web may be employed in sophisticated ways to thwart authorities and potentially undermine national security, it is simply a technology that offers concealed use of the Internet.
Tim Hardy: Most Internet traffic is “dark” because it is not indexed. That’s what [the Dark Web] really means. It’s the unindexed web and anyone above a certain age remembers in the days before Google, most of the Internet was dark. If you didn’t know the address of a website, you weren’t going to find it. But the term Dark Web or Dark Internet is used in a very casual way by journalists.
John Bassett: I think you’ve got, in simplistic terms, an evolutionary chain. At the bottom of that chain, there are ordinary, decent criminals (as we used to call them) doing ordinary, decent criminal things on the Internet, not very well. At the next stage up we have law enforcement, who are generally stopping these ordinary criminals but are completely thrown by what I call the Dark Ones, the much more sophisticated people who think they can master the Dark Internet. The fourth level is national security, which at the moment is vastly ahead of the most advanced hacktivists but is very, very busy with other stuff. I think the size of the relative gaps between the ordinary criminals, law enforcement, the Dark Net people, and the national security apparatus gives you a sense of the pace of evolution in cyber security.
The Internet has deeply eroded our sense of privacy. The Dark Web actually provides a channel through which to conduct your affairs without fear of scrutiny.
Mike Gillespie: The challenge here is that the Internet has opened up a huge amount of opportunities for communications, for doing business, for global problems, and all sorts of things. It’s also completely eroded the privacy that we used to enjoy. You know, maybe we have crossed the point of no return when it comes to privacy because we now live in an era where Apple and Google and the Android Foundation and organizations like this implicitly believe that they have a right to your privacy.
So we have, “the dark net.” Yeah, it sounds really evil, “the dark net.” It’s basically just an underlying level of the Internet, allowing secure, anonymous use of Internet. It actually allows Google and Apple not to see what you’re doing, if you configure it right.
The technology, in and of itself, is not fundamentally evil. Rather, certain individuals use the Dark Web maliciously.
Hardy: There is evil material out there and there are evil people in the world. I think we’ve got to be careful not to conflate the technology with the way some people are using it. There are many legitimate uses of these kinds of tools. I mean, Tor comes out of American military research…it was developed and continues to be funded by US government sources because it has a legitimate function; there is a legitimate role for privacy.
Gillespie: Technology is just technology. And it will always be used by both good people and bad people. But what the dark net does do is give us a secure and anonymous way of using the Internet for communications, business and research, and it allows a whole load of people in parts of the world where they would not otherwise be able to use these technologies to get their message out to the world. So, let’s not demonize it because of a small number of people.
You could have argued back in the 1800s that because there was slavery, we should’ve stopped people from using cargo ships. Actually, it’s not the technology, it’s not the communication method, it’s not the trade process that is the problem. It’s understanding what the underlying reason is for this crime happening, and fixing that. Because, otherwise, everything else is just sticking plasters. And what we’re actually doing is putting a sticking plaster on a gangrenous sore.
We must be careful to differentiate between legitimate uses of the Dark Web, and those that may not be morally acceptable.
Gillespie: Fundamentally, actually a vast number of users on the dark net are not pedophiles and criminals and terrorists. They’re people in oppressed regimes, they are using it as a means of getting their blogs out, getting their messaging out. It’s used by academics and researchers. It’s used by R&D organizations. It’s used by the public. It’s used by parents in many cases to protect their children from the evil that is corporate foundations that are looking to steal their identity and use their information. So actually, there’s a huge amount of legitimate usage on the dark net.
Hardy: There’s a dual function here. It enables people to push beyond what is acceptable and that can be a terrible thing but it’s also healthy for societies. I think the more we communicate, the more people talk about things, the better – even in cases of extremism. We’re not going to stop extremist political narratives by denying people a voice. It is better to draw people out and get them talking and to start challenging their ideas. If we drive people into isolation and close off all channels, then it becomes self-fulfilling.
The danger of saying we are going to outlaw a tool that could be used for terrible things is that it also suggests that we have reached the end of history and we are not going to change any more. There are arguments about drugs. Another notorious thing about the Dark Web was the website Silk Road, which was basically an eBay for drugs – some would argue that is the end game for the War on Drugs – but it is only if we have already decided that drugs are a bad thing – that the world is fixed, that we have reached the end of history and know what is morally correct – it’s only if that is the case that we don’t need spaces where people can experiment and take risks and do things that are currently illegal or on the edge.
The Internet has actually empowered individuals operating against oppressive regimes. Yet, the Dark Web has in turn strengthened the capacity of the state to identify and penalize dissidents operating undercover.
Bassett: 25 years ago, we sat there at the end of the Cold War and wondered, what is going to happen next? How is it going to play out? One of the themes was the possible diminishing role of the state. On the contrary, I now fear that one of the things the Internet will become is something that very much strengthens the power of authoritarian states at the expense of the individual.
In a liberal democracy, we should always be cautious about the power of the state and the degree of oversight of that. And that’s where most of us here today are. But this bleak future is a global one, especially in those states that aren’t particularly democratic, or aren’t at all democratic. This future looks really good for authoritarians, and there’s a small example of this just in the aftermath of the Arab Spring, in which protesters made extensive use of social media, which in the aftermath was immensely useful to secret policemen who were soon busy with their pliers and knives. An authoritarian regime can find out who the trouble-makers are and, too often, is doing just that and taking them all away.
Law enforcement officials face many obstacles when targeting cyber criminals on the Dark Net.
Bassett: One of the problems we have at the moment, perhaps the one that the Dark Net brings most immediately to mind, is the challenge for law enforcement organizations to really get a grip on technically sophisticated cyber criminals. That’s quite a wide gap and it’s problematic. That has an impact on individuals, but particularly on the business sector. If a bad guy really gets to the level of being a threat to national security, then the game is over very quickly.
Gillespie: Jurisdiction is a massive issue, you know? Actually, the police are behind the curve when it comes to prosecuting basic cybercrime on the Internet, let alone managing the issue of criminality on the dark net. We know that here in the UK we are massively behind the curve, and we struggle to understand, we can’t even join it up across different counties, let alone across different countries. When you get mugged in the street, the location of the crime, the victim, and the perpetrator is where? It’s a street, yeah? When you get mugged online, where are you? Where are you when you’re online? Is it where you sat physically, or is it where you were working electronically? Where is the criminal? Where is the actual crime taking place, and whose jurisdiction does all of that fall upon? Actually, that’s the biggest problem. We have no strategic approach to deal with this holistically and globally.
Developments in psychological analyses and the modernization of existing legislation can help to identify good versus evil on the Dark Web.
Bassett: We look at Breivik, the mass murderer, and see what other characteristics such a person has on the Internet. How can we spot this behaviour before it ends up resulting in mass murder? And people are likely to identify such trends either using behavioural psychology or just using sheer quantitative analysis of previous behaviour. Depending on the acceptability of false negatives and false positives, it’s not hard to imagine having a system that scans the data to identify people that have the characteristics, whether quantitative or psychological, of someone who may become a mass murderer.
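The trade-off between false negatives and false positives mentioned here can be made concrete with a little base-rate arithmetic. The sketch below is an editorial aside, not part of the panel discussion: the population size, prevalence and accuracy figures are invented purely for illustration.

```python
# Illustrative base-rate arithmetic for a hypothetical screening system.
# The accuracy figures and base rate below are invented for the example;
# they are not drawn from the discussion above.

population = 50_000_000        # people whose online behaviour is scanned
base_rate = 1 / 1_000_000      # assumed prevalence of would-be attackers
true_positive_rate = 0.99      # sensitivity of the hypothetical system
false_positive_rate = 0.01     # 1% of innocent users wrongly flagged

actual_threats = population * base_rate
flagged_threats = actual_threats * true_positive_rate
flagged_innocents = (population - actual_threats) * false_positive_rate

precision = flagged_threats / (flagged_threats + flagged_innocents)
print(f"People flagged:          {flagged_threats + flagged_innocents:,.0f}")
print(f"Of whom genuine threats: {flagged_threats:,.0f}")
print(f"Chance a flag is real:   {precision:.3%}")
# Even a highly accurate scanner flags hundreds of thousands of innocent
# users for every genuine case, which is why the acceptability of false
# positives and negatives dominates the design question.
```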
There’s a body of legislation from the 1990s which was written and debated by people who barely used the Internet, and which is now antiquated… The Anderson position is, crudely speaking as I understand it, that there is neither too much nor too little interception going on, but that the legislation for it is scattered all over the place. I think if it is purely a modernizing bill, I would think that should claim support across the House, frankly… The idea of a bill, which is purely a modernizing bill, and keeps essentially the same capabilities, recognizing we are now in 2015 and not in 1995, I think is a good thing, something we should welcome whatever our position on civil liberties and so on.
Report by Amanda Blair. Transcript by Amanda Blair and Yaseen Lotfi.
Robert Colvile is a freelance journalist and global fellow at PS21, formerly head of comment at Telegraph and news at BuzzFeed UK. He tweets at @rcolvile.
I don’t think Twitter’s doomed at all. But I do think Twitter as we’ve known it is dying. And the verdict will be both murder and suicide — on the part of both Twitter’s management and its keenest users.
Twitter became popular, in essence, because it was where the cool people came to hang out. Celebrities and journalists came to talk to their fans, and each other.
For British journalists like me, for example, it was a godsend — it provided a way for us all to bitch and gossip and stroke each other’s egos in a way that we hadn’t had since the move out of Fleet Street scattered the big newspapers to the four winds. Even better — it turned out there were quite a few people out there who didn’t actually work in journalism, but were still just as funny and clever as us hacks, and often even more so. It was like being one of those great pubs where you’re pretty much always guaranteed to bump into someone you know.
What changed, as Umair and other disillusioned tweeters have said, was the signal-to-noise ratio. Partly, this is down to the problem of abuse that Umair identified — the way that the seemingly infinite horde of trolls and zealots and bores would intrude into conversations or shout you down (especially if you were a woman).
But there was a stylistic change as well. Journalists, and news organisations in general, noticed that Twitter was a good way to pick up traffic. Well, not traffic as such: a report back in January claimed that it’s responsible for driving less than 1 per cent of traffic to sites, compared with roughly 25 per cent for Facebook. Still, Twitter links remained important because they were a good way to reach an influential audience — plus, people often go on to repost the same links on Facebook, where the traffic magic happens.
Gradually, over time, people worked out the hierarchy of attention: a plain-text tweet wouldn’t do that well; a tweet with a picture would do better; a screenshot better still; and best of all, a bit of movement. [Embedded example tweets omitted.]
Twitter has always been a place for link-dumping: on a busy day, you’d drop in the pieces you’d written or edited then get back to your deadlines without stopping for a chat. But on top of that, it’s now a visual experience as much as a written one.
I was talking to a friend about this — one with an order of magnitude more followers than me — and he mentioned that it was increasingly rare for a tweet to take off without a picture attached. Many news organisations now actively order their staffers to include them when promoting pieces; meanwhile, Twitter’s own links pull in pictures even without you asking them to. But all that in turn makes the conversation harder to pick out, without carefully curating your feed: it’s buried within a mountain of tweets all screaming “SHARE ME!”.
It isn’t just the users who are powering this shift: it’s Twitter’s management as well.
Twitter’s problem is that it had crafted an invaluable public space — one which emerged almost by accident, with the users themselves coming up with many of the conventions, such as RTs or hashtags, that gave the service its power.
But the new economic model isn’t about offering a service — it’s about growth. Remorseless, exponential, competitor-trouncing growth. (I get into the reasons in my new book, out in April — pre-order your copy here…) Signing up two million people in a quarter wasn’t something to shout about — it was a sign you were lagging behind.
Twitter couldn’t be a great pub conversation and hit its targets for audience and revenue growth. It had to become a public meeting hall, even if that meant that it was full of people shouting over each other, or carrying around giant advertising billboards.
To see where Twitter’s going, consider the Moments service. This is a really fascinating idea — to aggregate the best content on each separate micro-story and make it seamlessly available. To borrow the pub metaphor, it’s like having someone sitting in the corner who’s actually staying sober, writing down all the funniest lines and editing out the dross (exactly what Boswell did for Johnson, in fact).
I wanted to learn more about Moments, so took a look at the guidelines and principles Twitter has published. What leapt out was a sentence at the bottom stressing the importance of images and video.
In other words, just like those media brands, Twitter sees Twitter as a visual medium: it thinks people want less talk, and more video.
In the grand scheme of things, it’s almost certainly the right decision: one of the big trends on the web is that images are eating text, and video is eating images. If Twitter wants to be one of the big boys like Facebook or Snapchat — and that’s certainly what its investors want — then it has to follow the big trends. The problem is that the bigger it gets, the more packed with images and GIFs and self-promotion and shouting, the further it moves away from what made it so attractive in the first place.
This article originally appeared on Medium on October 22, 2015.
PS21 is a non-national, non-ideological, non-partisan organization. All views are the author’s own.
Take a look at these five posts to better understand what the future holds (spoiler: robot overlords).
Deepnet: Is the ‘dark web’ good or evil? In this article, Mike Gillespie, director of cyber research and strategy at The Security Institute, looks at how the Deepweb is used by different groups.
Technologies have been used for good and evil ever since we have had technology. The Deepweb is another technology, just like mobile phones, that is being used by criminals to stay off law enforcement radars and carry out their illegal activities with less risk than on the regular web. But, as mentioned earlier, many people use Deepweb tools. Sometimes this is in life-and-death situations and under terrible conflict conditions, like those in Syria. The anonymity offered by Tor enables activists and campaigners to get their messages out of totalitarian regimes that threaten their lives every day and where the fear of arrest and torture is very real.
Indeed, an initial survey shows a wide spectrum of budgets and organizational mandates, suggesting that the construction and direction of a national innovation foundation may still be as much art as science. Nevertheless, the best national innovation foundations and strategies are lean and nimble, able to shift their operations and priorities at the speed at which modern innovation and technological development unfolds.
It is perhaps both a symptom and a cause of a modern global malaise that not only is the role and function of cities being questioned but also the viability of their very continued existence. Like anything in nature that aspires to gargantuanism, cities have moved beyond the bounds whereby the structures and frameworks that first allowed them to prosper and thrive can continue to support the monsters that they have become.
Different industries face different threats and regulation still has a substantial role to play in shaping risk profiles. In our view, the industry probably needs to stop trying to bundle so many disparate issues into a single product. The industry and its customers will all benefit from the evolution of specialist products. The risks cannot be effectively underwritten unless the data has been defined, protection policies understood, the consequences of breaches identified and employees trained in prevention procedure.
PS21 Report: The Future of Drones: Finally, check out our report on drones from an event held this past June, with comments from Ryan Hagemann, Erik Lin-Greenberg and Lisa Ellman.
Lisa: Drones can represent anything from a toy, a model aircraft that you fly at the park, to a tool of industry… Now, as we’re seeing, to even a tool of war. Over the last several years, these consumer toys have gotten a lot more sophisticated, a lot smaller, more mobile, and able to do sophisticated things… They’re also cheaper. The technology has moved forward at a very rapid pace.
• Despite massive growth, drones will never replace piloted military aircraft entirely
• Massive potential for commercial sector growth
• US domestic regulatory environment lags behind other countries
• Clarity slowly emerging amid furious lobbying
• Multiple privacy, safety, confidence and other issues
• US lagging behind some other countries
On Thursday, June 11, 2015, the Project for the Study of the 21st Century (PS21) held a discussion on ‘The Future of Drones’.
A full transcript can be found here and video here.
The panelists were as follows:
Ryan Hagemann (Moderator): Civil Liberties Policy Analyst at the Niskanen Center and adjunct fellow at TechFreedom, specializing in robotics and automation.
Erik Lin-Greenberg: Former US Air Force Officer and PhD candidate at Columbia University.
Lisa Ellman: Counsel for McKenna Long & Aldridge LLP. Member of the firm’s Unmanned Aircraft Systems (UAS) Practice Group and Public Policy and Regulatory Affairs practice. Former senior roles at the White House and Department of Justice.
Participants were speaking as individuals rather than as representatives of institutions.
Please feel free to quote from this report, referencing PS21. If you wish to get in touch with any of the panelists, please email: PS21Central@gmail.com.
Drones are not a new phenomenon but have become more useful with advances in technology.
Lisa: Drones can represent anything from a toy, a model aircraft that you fly at the park, to a tool of industry… Now, as we’re seeing, to even a tool of war. Over the last several years, these consumer toys have gotten a lot more sophisticated, a lot smaller, more mobile, and able to do sophisticated things… They’re also cheaper. The technology has moved forward at a very rapid pace.
Military use has grown exponentially.
Ryan: … 25% of the total aircraft fleet wielded by the U.S. Air Force is now drones as of 2012 as opposed to 2001 where it was something like… 2-3%.
Erik: I think you’re going to see an increase in the number of states operating Remotely Piloted Aircraft (RPA). Currently, I think the number right now is something like 72 states operate some type of RPA.
Still, for now drones remain one of the least glamorous corners of the US Air Force. For a variety of reasons, they are unlikely to replace manned military aircraft entirely. They are insufficiently survivable in complex war zones against sophisticated adversaries — and lingering suspicion of them within conventional military aviation poses its own problems.
Erik: Policy makers need to pick the right tool for [a] particular policy objective… Drones are great for certain things, but they really don’t have the payload of a manned bomber.
Right now it’s not cool to be a drone pilot. There are these attempts to make slight changes in the cultural perception in the military about RPA versus manned aircraft, but…there is political resistance to having a majority unmanned fleet.
The greatest immediate opportunity for growth remains within the commercial sector.
Lisa: Real estate agents want to be able to take pictures of the homes that they’re selling. Oil and gas companies want to be able to use drones to inspect infrastructure… their power plants, power line inspections – tasks that are dirty or dangerous or dreary… Farmers want to use drones to inspect their crops and crop dust [and to] spray pesticides and water on their crops. Facebook and Google want to be able to provide wireless Internet all around the world… Amazon wants to use drones to deliver packages.
News gatherers and film producers are…excited about the use of drones…because helicopters are so dangerous.
But the technology in the U.S. has moved more quickly than the policy-making.
Lisa: You have this very strong demand to be able to use drones commercially, but it’s actually illegal right now in our country to use drones commercially, unless you have special permission from the Federal Aviation Administration (FAA).
Drone flights are still regulated as if they were manned aircraft.
Lisa: There are a few different categories…Hobbyist use, commercial use, and public use…The idea is that law enforcement and public agencies will use them in very particular circumstances.
Right now the law is structured so that hobbyists can, for the most part, do whatever they want…I can fly my drones and take photos of myself and put those photos on Facebook. That is a totally legal flight. If…[I] sell these photos for $10 a piece…that is an illegal flight and not allowed by the FAA. It’s not a safety or risk-based question. It’s the exact same flight. It’s the intent-based question.
That comes from the aviation world…There are a lot of questions in the public policy community about whether that makes sense in this area.
Erik: There really hasn’t been any kind of specific drone law… As weapons systems evolve, we try to figure out how we interpret existing [international] law in the best way possible for military operations.
This has led to confusion and is inevitably limiting the growth of the industry.
Lisa: Because people don’t understand why they can do something in one set of circumstances, but if they intend to sell things or make money in any way or do something to promote a business, then they actually have to get a permit from the FAA, which is quite a process. They’re kind of straddling that line and that’s been difficult for a lot of folks.
With the legislation still being drawn up, frantic lobbying is underway from multiple interested stakeholders.
Lisa: There have been a lot of public policy efforts on behalf of certain groups of folks. Everyone wants a carve-out for their own industry. Everyone wants to be able to do whatever they want.
With technology still charging forward, there has been inevitable growth in illegal activity.
Lisa: There will be aerial photographers who will come take pictures of your wedding with a drone and I guarantee that they don’t have a 333 exemption to do that. The FAA does not have the resources to police all the illegal activity that is out there, but if for some reason something was to go wrong, if the flight recklessly endangered the public…then there could be real big problems for that person.
The U.S. has not kept up with countries like Canada, Australia, New Zealand, the U.K. and Japan.
Ryan: [The Open Technology Institute] have this great map online where you can click on various countries and it gives you a breakdown of the regulations in different countries surrounding commercial drones…Based on what I see in terms of the market breakdown, it seems like the U.S. is falling very far behind.
This is for a number of reasons.
Lisa: Usually if you’re regulating in healthcare…in energy or [the] environment or food and drugs, you have studies. You have data. You have numbers. You’re able to analyse those numbers and come to a public policy decision. One thing that’s been very different here is that we don’t have the data and some other countries have been ahead of us in terms of collecting data and doing studies…
The other thing is that we have the most complex airspace in the world.
However, the U.S. is catching up.
Lisa: The federal government…really started to pay attention in 2012 when Congress mandated that the federal government integrate drones into our national airspace…
The FAA was given very limited resources to implement what it was asked to do and that is now improving…There have been many tangible steps in the last few months… that will lead to a more open airspace for people who are interested in flying drones…
By the end of next year, I’d be surprised if there is not a final rule broadly authorizing commercial drone operations across the U.S.
Certain restrictions will continue to apply.
Lisa: You can’t fly at night. You have to remain within visual line of sight…You can’t fly in urban locations…
The FAA just announced this pathfinder program where it’s working with specific areas and industries. There is a lot of study and research and development that is going on there…I think we will see beyond line of sight operations. We will see night-time operations. It’s just not going to be right away.
There are still a number of issues to tackle, including privacy.
Lisa: A presidential memorandum on privacy, transparency, accountability and civil liberties… was released the same day as the [proposed FAA rule to authorize the commercial use of drones]. It outlined limits on the federal government’s own use of drones and put together a multi-stakeholder process…
It’s an ongoing conversation…Most states in our union have proposed some kind of rule that would limit drone use because of privacy concerns.
Technology may be able to provide a solution to some public policy issues.
Erik: There is actually a new website called noflyzone.org. It’s not relevant in D.C. since this is restricted airspace, but if I live out in Maryland and have a property and I put my address in, the idea is that participating manufacturers would use geo-fencing technology that boxes out that address in particular.
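Geo-fencing of this kind is conceptually a simple position check against a registry of opted-out locations. The sketch below is a hypothetical illustration only: the registry entries, coordinates, radius and function names are invented, and it does not describe how noflyzone.org or any manufacturer actually implements the feature.

```python
# A minimal, hypothetical sketch of a geo-fence check. The registry entries,
# coordinates and radius below are invented for illustration; they do not
# reflect noflyzone.org's data or any manufacturer's actual implementation.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class NoFlyZone:
    name: str
    lat: float
    lon: float
    radius_m: float   # keep-out radius around the registered address

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6_371_000
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def flight_allowed(lat, lon, registry):
    """Return False if the position falls inside any registered keep-out zone."""
    return all(haversine_m(lat, lon, z.lat, z.lon) > z.radius_m for z in registry)

# Hypothetical registry: a single opted-out property.
registry = [NoFlyZone("registered home address", 39.2904, -76.6122, 100.0)]

print(flight_allowed(39.2904, -76.6122, registry))   # False: inside the fence
print(flight_allowed(39.3000, -76.6000, registry))   # True: well outside it
```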
There are questions that need to be considered in a military context too.
Erik: Is targeting someone else’s drones an act of war? What’s the threshold for actual conflict?
There are, of course, Department of Defense properties here in the U.S. What if someone tries to fly over their facilities? It’s a balance though…You don’t want to consider everything sensitive because news gatherers, for example, would say they don’t want to be censored out of gathering news from perfectly legitimate locations. Where do you draw the line?
And there are calls for the U.S., as the primary user of RPA, to set a precedent for how RPA are used in military environments.
Erik: One of the things the State Department did a few months ago was relax regulations on armed drones and a lot of analysts said this was a good move because it allowed the U.S. to start exporting RPA to some of its allies and friends and partners. If we export, that limits the audience that will potentially buy China’s drones or Russia’s drones…You’re able to shape the mindset, and, hopefully, it’s a mindset in accordance with international law.
Drones may put a cap on military escalation.
Erik: The use of drones and the presence of drones might make a state more likely to initiate a conflict, but if a drone gets shot down, and we’ve seen this before…our response was fairly minimal. If that had been a manned aircraft, the response would have been very different.
We’re on a learning curve.
Lisa: We don’t know what all the capabilities of drones are. We don’t know all the different harms that they can inflict on us and all of the different benefits that they can provide. We are still at that learning stage where they are just now getting integrated into society in a way that benefits the public in certain ways and also provides certain risks. A lot of that fact-finding will have to be happening over the next several years.
But every industry has a need for drones and, if it hasn't identified that need yet, it soon will.
Lisa: We’re going to see…drones increasingly take the place of helicopters…increasingly take the place…of people.
I do think there will be safety incidents. There are going to be privacy incidents…But I think that will all inform the policy making.
And ultimately, the dilemmas posed by drones are no different to those posed by any new technology.
Lisa: Taking a step back, I think that any technology can be used for good and be used for bad. This isn’t the first time that we’ve had new technology where, all of a sudden, we’re worried about what some of the ramifications are if they’re used or overused.
The key is getting that right.
Report by Crisa Cox. Transcript by Christopher Stevens.
This article originally appeared on Aspen Opinion.
Tom Allen is Head of Technology and Data Protection Indemnity at Aspen.
No Longer Niche
There has been significant growth in market demand for data protection coverage, driven in no small part by the recent surge in sobering news about the aggressively evolving risks that companies face. For a number of years this was a rather specialist, ‘niche’ marketplace that didn’t find much traction beyond a sub-section of interested firms. The risks involved were long seen as cutting edge, if not rather theoretical.
This view has changed over the last 18 months. There has been a steady drumbeat of high-profile losses arising from data breaches, which have received plenty of publicity. In 2014, data breaches in the U.S. totalled 783, an increase of 28% over the previous year.[1] The trend looks to be escalating: by the early part of 2015 there had already been 174 breaches, with 99.7 million records exposed.[2]
Recent events have revealed the fluid nature of the liability, tested the adequacy of the cyber security policies currently on offer, and exposed company management’s attitudes to risk acceptance and mitigation for breach scenarios. Attacks on the retailers Target in 2013 and Home Depot in 2014 demonstrated the magnitude of the threat, and the attacks on JPMorgan Chase in 2014 and Anthem in 2015 confirmed the point. The breach at Sony, late in 2014, highlighted the fact that the release of confidential company information can disrupt not only customer relations but also employee relations. It was not only the reputations of top executives and their clients that were jeopardized by the disclosure of emails. Moreover, the unfolding saga was amplified by the media, and the data was readily accessed and replicated from the otherwise rather arcane world of download sites.
Governments concerned about threats to national security as well as their economies have engaged in high-profile efforts to ‘jawbone’ businesses into taking IT security seriously. Regulators worried about the rights of individual consumers and investors have moved decisively to press the issues home. President Obama’s 2015 State of the Union address included an update to the 2011 Cybersecurity Legislative Proposal, with new initiatives on the all-important breach reporting rules, simplifying and standardizing the existing 47 state laws into one federal statute. Elsewhere, the U.S. Securities and Exchange Commission’s Office of Compliance Inspections and Examinations had previously announced that its 2014 Examination Priorities would include a focus on technology, including cyber security preparedness. Executives are now much more aware of the financial costs – and the difficulties of estimating them – and also the costs in terms of their careers if an incident should show them to be ill prepared. The CEO of Target held himself personally accountable for the breach and resigned in May 2014. The IT and consulting industries have picked up the theme with their corporate customers. Demand for related insurance products has ramped up in the North American market and is gathering momentum in the EU and elsewhere.
Increasing Complexity
Underwriters and brokers have been working to publicize these products for years and are of course delighted that the topic has moved to a more central stage. Yet current events and the general state of public awareness about the issues highlight just how complex a challenge the rise of ‘cyber threats’ poses to the insurance industry.
First and foremost, the increasing complexity and scope of attacks resulting in data breaches must challenge the market’s assumptions about the frequency and severity of losses. Underwriters have always seen the continually evolving threats to IT security as an arms race between hackers and the IT security industry; yet many have been surprised at the ambition and scale of some recent attacks. In this context, pricing models have limited predictive value and need to be constantly re-assessed.
At Aspen, we have always held the view that ‘cyber insurance’ is an unfortunate term, as it seems to mean everything and nothing at the same time. Indeed, not all cyber threats are viewed by the insurance market as being meaningfully insurable – the chief example being the theft of a company’s own intellectual property. Much of the feared impact of cyber warfare sits outside the scope of most commercial insurance offerings. Nonetheless, the desire of many brokers for an all-risks policy approach has resulted in a lot of disparate issues being bundled together as underwriters strive to add new features to their products.
The market trend, until recently, has been for underwriters to seek differentiation as opposed to uniformity. The result is that product approaches, wordings, coverage triggers and so on vary widely across the marketplace as competitors strive to add features. Ironically, in our view, one of the longstanding challenges to the broad acceptance of these products has been their complexity – buyers sometimes struggle to fully understand exactly what they are buying.
Another self-imposed challenge arising from the lack of product uniformity is that it aggravates the difficulty insurers and reinsurers face in assessing their aggregate exposures. This is hard enough given that loss scenarios are based on known/perceived vulnerabilities, which themselves evolve.
Insurance and loss prevention go hand in hand but some of the risks that governments are seeking to transfer into the insurance sector might easily challenge the industry’s capital. At some stage in the future, a different approach may be required for certain risks. As in the case of terrorism, governments could, via a reinsurance grouping, help fund high-level risks of the insurance industry. Facilitation of a market through such an arrangement could increase supply by spreading large losses and help provide data to support more accurate pricing of the risk. It would also help increase demand through encouraging a greater understanding of cyber risks and the financial value of defending against them.
Specialist Approach
Aspen continues to view this evolving area as presenting opportunity along with threat. Our focus remains on risks tied to data protection obligations, as well as liability for providers of IT products and services. Different industries face different threats, and regulation still has a substantial role to play in shaping risk profiles. In our view, the industry probably needs to stop trying to bundle so many disparate issues into a single product; the industry and its customers will all benefit from the evolution of specialist products. The risks cannot be effectively underwritten unless the data has been defined, protection policies understood, the consequences of breaches identified and employees trained in prevention procedures. While the big picture is continually changing, it is all the more important to employ a disciplined underwriting approach, with clarity of wordings, transparency of underwriting method, an alert and responsive claims service, and a keen ear for customers’ needs.
References
Identity Theft Resource Centre (ITRC), IDT91, 2015 Data Breach, 11 March 2015
Tim Hardy is a technical writer, commentator, activist and PS21 Global Fellow. He runs the website Beyond Clicktivism and tweets at @bc_tmh
In November 2014, the UN Deputy High Commissioner for Human Rights, Flavia Pansieri, declared that the changes in digital communications over the last two decades represented “perhaps the greatest liberation movement the world has ever known.”
Yet that liberation movement was under threat, she warned. And some of the greatest threats came from the countries that most pride themselves on their historic and continued role in the promotion of democracy and liberty worldwide.
The UN adopted Resolution 68/167, The Right to Privacy in the Digital Age, on 18 December 2013, emphasising “that unlawful or arbitrary surveillance and/or interception of communications, as well as unlawful or arbitrary collection of personal data, as highly intrusive acts, violate the rights to privacy and to freedom of expression and may contradict the tenets of a democratic society.” The Deputy High Commissioner warned that “information collected through digital surveillance has been used to target dissidents. There are also credible reports suggesting that digital technologies have been used to gather information that has then led to torture and other forms of ill-treatment.”
Far from being a historic abuse of power, this is a growing tendency. “Overt and covert digital surveillance in jurisdictions around the world have proliferated, with governmental mass surveillance emerging as a dangerous habit rather than an exceptional measure.”
Sultan al Qassemi noted in February, “Every single country in the Arab world, save for Lebanon, has jailed online activists. Every single country today has individuals in jail for posting tweets.” The Arab Spring has led to a winter of silent discontent as those who were prominent in the days of rage have withdrawn either completely from social media or into closed communities, removing their voices from the wider sphere of public discourse. Any safety in such private communities is of course illusory.
The free speech potential of the online world can have fatal consequences when privacy cannot be guaranteed.
In 2011, Maria Elizabeth Macias Castro became the first journalist known to have been murdered for social media posts. A note left by the Mexican crime syndicate Los Zetas beside her decapitated body connected her to the online pseudonym she’d assumed would keep her safe. Posting under your real name carries proportionally greater dangers. Avijit Roy and Washiqur Rahman were both hacked to death in the street this year in Bangladesh, their murders blamed in part on the government’s crackdown on “known atheists and naturalist” bloggers. Mauritania and Saudi Arabia have issued the death penalty for online postings. According to Reporters Without Borders, 19 “netizens and citizen journalists” were killed in 2014 and 175 have been imprisoned so far this year because of their online activities.
[Image: Banksy artwork in London]
It’s not just public postings on social media that draw attention. Iran – which together with China imprisons a third of the journalists jailed around the world – uses surveillance as part of its strict monitoring of the internet. In 2009, Lily Mazaheri, then a human rights and immigration lawyer (though later disbarred), claimed that one of her clients, an Iranian dissident, was shown after his arrest a transcript of instant-messaging conversations with her that they had both assumed were private. Whether or not this was true, we now know that governments can and do monitor private web chats even where there is an expectation of confidentiality.
Lawyer-client privilege is a cornerstone of democracy, as is the ability of journalists to protect their sources. Surveillance undermines both. Two years before the Edward Snowden leaks, the then executive director of the Reporters Committee for Freedom of the Press, Lucy Dalglish, was approached at a conference by a national security official who, on the subject of exposing whistleblowers, threatened: “We don’t need to subpoena you anymore. We know who you’re talking to.”
Amnesty warned in their annual report last year, “From Washington to Damascus, from Abuja to Colombo, government leaders have justified horrific human rights violations by talking of the need to keep the country ‘safe’. In reality, the opposite is the case. Such violations are one important reason why we live in such a dangerous world today.”
A progressive trend is being reversed, and in the countries where democracy is healthiest there is little political appetite to address this. Those who criticise government surveillance are tacitly or explicitly accused of supporting enemies of the state.
UK foreign secretary Philip Hammond said on a visit to GCHQ in Cheltenham last year: “Nobody who is law abiding, nobody who is not a terrorist or a criminal or a foreign state that is trying to do us harm has anything to fear from what goes on here.” Of course, like all who repeat the authoritarian’s mantra that if you have nothing to hide, you have nothing to fear, Hammond presumably still makes love and defecates behind closed doors. A desire for privacy can be nothing more sinister than a demand to be treated with basic human dignity.
Defenders of mass surveillance sometimes underplay its extent by declaring “it’s just metadata”. But metadata is the context of your life: where you go and when, who you associate with, what you read and watch. In aggregate, it’s as unique as a fingerprint and exposes more about you than most people are happy to share with an intimate partner. In 2014, former NSA and CIA director Michael Hayden pointed out: “We kill people based on metadata.”
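To make that concrete, here is a minimal, entirely invented sketch in Python of the kind of inference metadata alone supports: five call records, with no content at all, are enough to suggest a person's health concerns, routine and relationship with a journalist. The names and records are hypothetical.

# A minimal sketch with invented call records: no content, only who, whom and when.
from collections import Counter

call_records = [
    ("alice", "oncology_clinic", 9),    # (caller, callee, hour of day)
    ("alice", "oncology_clinic", 10),
    ("alice", "support_hotline", 23),
    ("alice", "journalist_bob", 22),
    ("alice", "journalist_bob", 23),
]

# Frequency of contact already sketches out associations and routine.
contacts = Counter(callee for _, callee, _ in call_records)
late_night = sorted({callee for _, callee, hour in call_records if hour >= 22})

print(contacts.most_common(2))  # [('oncology_clinic', 2), ('journalist_bob', 2)]
print(late_night)               # ['journalist_bob', 'support_hotline']

Scale that up to every call, message, search and location ping over years, and the "just metadata" defence looks rather thinner.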
Mass surveillance, so costly for democracy, fails even to achieve its own security goals and wastes resources and funding that could be put towards more traditional intelligence operations.
The NSA claims that its surveillance programme would have prevented September 11 – but that is not supported by the 9/11 Commission Report, which found that the intelligence community failed at analysis, not at data gathering. Mass surveillance failed to prevent the Boston Marathon bombings, even though one bomber, Tamerlan Tsarnaev, had been on a watchlist since 2011, after Russian intelligence warned their US counterparts about him, and both he and his brother had made multiple social media postings that should have raised a red flag. Mass surveillance failed to stop the Charlie Hebdo attack.
Just as generals are often accused of always fighting the last war, it seems that intelligence services are always fighting the last terrorist plot – then are blindsided when an extremist changes tactics.
[Image: Protesters in Germany]
For a long time, digital rights have been sidelined as a matter of technical interest only, but even before the UN endorsed this position, digital rights were always human rights. As more of our most intimate moments and experiences occur in the overlap between the material and digital spheres, our sense of betrayal and exposure when our digital privacy is violated becomes ever more acute. The distinction between the online and offline worlds grows more blurred, and for the generation more likely to own a home in Skyrim than to ever own one in the material world, any attempt to distinguish between the two is met with suspicion. But there are differences – differences that are significant for the possible futures of democracy. The freedoms we take for granted in the material world in the West are increasingly denied in the digital. As the two merge more and more and the opportunities to opt out recede, the importance of defending these rights becomes more critical.
The absence of privacy, the constant awareness that your conversations, your reading and your online transactions are being monitored, has a chilling effect. The writer and security consultant Bruce Schneier warns:
Think of how you act when a police car is driving next to you, or how an entire country acts when state agents are listening to phone calls. When we know everything is being recorded, we are less likely to speak freely and act individually. When we are constantly under the threat of judgment, criticism, and correction for our actions, we become fearful that—either now or in the uncertain future—data we leave behind will be brought back to implicate us, by whatever authority has then become focused upon our once-private and innocent acts. In response, we do nothing out of the ordinary. We lose our individuality, and society stagnates. We don’t question or challenge power. We become obedient and submissive. We’re less free.
Edward Snowden is a divisive character – but whether or not your politics inspire you with a desire to shoot the messenger, his revelations cannot be ignored. The US and her allies have systematically undermined the security of the internet, damaged their countries’ reputations, undermined their ability to challenge authoritarian regimes and placed their own citizens and the citizens of other sovereign states under an unprecedented level of mass surveillance.
There is an opportunity here. We can continue to participate in a global trend towards greater repression in the name of security and freedom. We can continue to give succour to regimes that monitor their citizens with the overt goal of silencing all dissenting voices. We can continue to build a machinery of totalitarianism that we hope, but cannot guarantee, will not be put to malevolent ends. Or we can take back the moral lead. By making the defence of privacy online a core principle, rather than treating it as a liberal qualm to be belittled and ignored, we can help ensure that the next two decades see a continuation of the global trend towards democracy and freedom enabled by the internet, and not its calamitous reverse.