Facebook, drag artists and data protection dilemmas: ‘if you stand on our pitch, you must play by our rules’

July 31st, 2015 by Robin Hopkins

Facebook is one of the main battlegrounds between privacy and other social goods such as safety and security.

On the one hand, it faces a safeguarding challenge. Interactions through Facebook have the potential to cause harm: defamation, data protection breaches, stalking, harassment, abuse and the like. One safeguard against such harms is to ensure that users are identifiable, i.e. that they really are who they say they are. This facilitates accountability and helps to ensure that only users of an appropriate age are communicating on Facebook. The ongoing litigation before the Northern Irish courts in the HL case raises exactly these sorts of concerns about child protection.

Part of the solution is Facebook’s ‘real names’ policy: you cannot register using a pseudonym, but only with your official identity.

On the other hand, Facebook encounters an argument which runs like this: individuals should be free to decide how they project themselves in their communications with the world. This means that, provided they are doing no harm, they should in principle be allowed to use whatever identity they like, including pseudonyms, working names (for people who wish to keep their private Facebooking and their professional lives separate) or stage names (particularly relevant for drag artists, for example). The real names policy arguably undermines this element of human autonomy, dignity and privacy. There have been colourful recent protests against the policy on these sorts of grounds.

Which is the stronger argument? Well, the answer seems to depend on who you ask, and where you ask.

The Data Protection Commissioner in Ireland, where Facebook has its EU headquarters, has upheld the real names policy. When one of Germany’s regional Data Protection Commissioners (Schleswig-Holstein) took the opposite view, Facebook challenged his ruling and secured a court victory in 2013. The German court suspended the order against the real names policy and, equally importantly, decided that the challenge should proceed in Ireland, not Germany.

This week, however, another German decision turned the tables on the real names policy yet again. The Hamburg data protection authority upheld a complaint from someone who used a pseudonym on Facebook so as to separate her private and professional communications. The Hamburg DPA found against Facebook and held that it was not allowed unilaterally to change users’ chosen usernames to their real names. Nor was it entitled to demand official identification documents – a point of particular relevance to child protection concerns such as those arising in HL.

The Hamburg ruling is notable on a number of fronts. It exemplifies the tension between privacy – in all its nuanced forms – and other values. It illustrates the dilemmas bedevilling the business models of social media companies such as Facebook.

The case also highlights real challenges for the future of European data protection. The General Data Protection Regulation – currently clawing its way from draft to final form – aspires to harmonised pan-European standards. It includes a mechanism for data protection authorities to co-operate and resolve differences. But if authorities within the same country are prone to divergence on issues such as the real names policy, how optimistic can one be that regulators across the EU will sing from the same hymn sheet?

Important questions arise about data protection and multinational internet companies: in which country (or region, for that matter) should a user raise a complaint to a regulator? If they want to complain to a court, where do they do that? If a German user complains to an Irish regulator or court, to what extent do those authorities have to consider German law?

For the moment, Facebook clearly seeks home ground advantage. But its preference for the Irish forum was rejected by the Hamburg authority in this week’s ruling. The Hamburg Commissioner is reported as saying that “… Facebook cannot again argue that only Irish Data Protection law would be applicable … anyone who stands on our pitch also has to play our game”.

The draft Regulation has something to say on these matters, but is far from clear as to how to decide on the right pitch and the right rules for vital privacy battles like these.

Robin Hopkins @hopkinsrobin

Facebook, child protection and outsourced monitoring

July 22nd, 2015 by Robin Hopkins

Facebook is no stranger to complaints about the content of posts. Usually, one user complains to Facebook about what other users’ posts say about him. By making the offending posts available, Facebook is processing the complainant’s personal data, and must do so in compliance with data protection law.

More unusually, a user could also complain about their own Facebook posts. Surely a complainant cannot make data protection criticisms about information they deliberately posted about themselves? After all, Facebook processes those posts with the author’s consent, doesn’t it?

Generally, yes – but that will not necessarily be true in every instance, especially when it comes to Facebook posts by children. This is the nature of the complaint in striking litigation currently afoot before the High Court in Northern Ireland.

The case is HL v Facebook Inc, Facebook Ireland Ltd, the Northern Health & Social Care Trust and DCMS [2015] NIQB 61. It is currently only in its preliminary stages, but it raises very interesting and important issues about Facebook’s procedures for preventing underage users from utilising the social network. Those issues are illuminated in the recent judgment of Stephens J, who is no stranger to claims against Facebook – he heard CG v Facebook [2015] NIQB 11, concerning posts about a convicted paedophile.

From the age of 11 onwards, HL maintained a Facebook page on which she made posts of an inappropriate sexual nature. She was exposed to responses from sexual predators. She says that Facebook is liable for its failure to prevent her from making these posts. She alleges that Facebook (i) unlawfully processed her sensitive personal data, (ii) facilitated her harassment by others, and (iii) was negligent in failing to have proper systems in place to minimise the risks of children setting up Facebook accounts by lying about their age.

The data protection claim raises a number of issues of great importance to the business of Facebook and others with comparable business models. One is the extent to which a child can validly consent to the processing of their personal data – especially sensitive personal data. Minors are (legitimately or not) increasingly active online, and consent is a cornerstone of online business. The consent issue is one of wide application beyond the HL litigation.

A second issue is whether, in its processing of personal data, Facebook does enough to stop minors using their own personal data in ways which could harm them. In her claim, for example, HL refers to evidence given to a committee of the Australian Parliament – apparently by a senior privacy advisor to Facebook (though Facebook was unable to tell Stephens J who he was). That evidence apparently said that Facebook removes 20,000 under-age user profiles a day.

Stephens J was also referred to comments apparently made by a US Senator to Mark Zuckerberg about the vulnerability of underage Facebook users.

Another element of HL’s case concerns Facebook’s use of an outsourcing company called oDesk, operating for example from Morocco, to moderate complaints about Facebook posts. She calls into question the adequacy of these oversight measures: ‘where then is the oversight body for these underpaid global police?’ (to quote from a Telegraph article referred to in the recent HL judgment). Facebook says that – given its number of users in multiple languages across the globe – effective policing is a tall order (an argument Stephens J summed up at paragraph 22 as ‘the needle in a haystack argument, there is just too much to monitor, the task of dealing with underage users is impossible’).

In short, HL says that Facebook seems to be aware of the scale and seriousness of the problem of underage use of its network and has not done enough to tackle that problem.

Again, the issue is one of wider import for online multinationals for whom personal data is stock-in-trade.

The same goes for the third important data protection issue surfacing in the HL litigation. This concerns jurisdiction, cross-border data controllers and section 5 of the Data Protection Act 1998. For example, is Facebook Ireland established in the UK by having an office, branch or agency, and does it process the personal data in Facebook posts in the context of that establishment?

These issues are all still to be decided. Stephens J’s recent judgment in HL was not about the substantive issues, but about HL’s applications for specific discovery and interrogatories. He granted those applications. In addition to details of HL’s Facebook account usage, he ordered the Facebook defendants to disclose agreements between them and Facebook (UK) Ltd and between them and oDesk (to whom some moderating processes were outsourced). He has also ordered the Facebook defendants to answer interrogatory questions about their procedures for preventing underage Facebook use.

In short, the HL litigation has – thus far – raised difficult data protection and privacy issues which are fundamental to Facebook’s business, and it has required Facebook to lay bare internal details of its safeguarding practices. The case is only just beginning. The substantive hearing, which is listed for next term, could be groundbreaking.

Robin Hopkins @hopkinsrobin

Austria will not host Europe vs Facebook showdown

July 6th, 2015 by Robin Hopkins

As illustrated by Anya Proops’ recent post on a Hungarian case currently before the CJEU, the territorial jurisdiction of European data protection law can raise difficult questions.

Such questions have bitten hard in the Europe vs Facebook litigation. Max Schrems, an Austrian law graduate, is spearheading a massive class action in which some 25,000 Facebook users allege numerous data protection violations by the social media giant. Those include: unlawful obtaining of personal data (including via plug-ins and “like” buttons); invalid consent to Facebook’s processing of users’ personal data; use of personal data for impermissible purposes, including the unlawful analysing of data/profiling of users (“the Defendant analyses the data available on every user and tries to explore users’ interests, preferences and circumstances…”); unlawful sharing of personal data with third parties and third-party applications. The details of the claim are here.

Importantly, however, the claim is against Facebook Ireland Ltd, a subsidiary of the Californian-based Facebook Inc. The class action has been brought in Austria.

Facebook challenged the Austrian court’s jurisdiction. Last week, the Viennese Regional Civil Court gave judgment in Facebook’s favour. The Court held that it lacked jurisdiction partly because Mr Schrems was not deemed to be a ‘consumer’ of Facebook’s services, and partly because Austria was not the right place in which to bring the claim. Facebook argued that the claim should be brought either in Ireland or in California, and the Court agreed.

Mr Schrems has announced his intention to appeal. In the meantime, the Austrian decision will continue to raise both eyebrows and questions, particularly given that a number of other judgments in recent years have seen European courts accepting jurisdiction to hear claims against social media companies (such as Google: see Vidal-Hall, for example) based elsewhere.

The Austrian decision also highlights the difficulties of the ‘one-stop shop’ principle which remains part of the draft Data Protection Regulation (albeit in a more nuanced and complicated formulation than had earlier been proposed). In short, why should an Austrian user have to sue in Ireland?

Panopticon will report on any developments in this case in due course. It will also report on the other strand of Mr Schrems’ privacy campaign, namely his challenge to the lawfulness of the Safe Harbour regime for the transferring of personal data to the USA. That challenge has been heard by the CJEU, and the Advocate General’s opinion is imminent. The case will have major implications for those whose business involves transatlantic data transfers.

Robin Hopkins @hopkinsrobin

Facebook, FOI and children

August 6th, 2014 by Robin Hopkins

The Upper Tribunal has got its teeth into personal data disputes on a number of occasions in recent months – Edem was followed by Farrand, and now Surrey Heath Borough Council v IC and Morley [2014] UKUT 0330 (AAC): Morley UT decision. Panopticon reported on the first-instance Morley decision in 2012. In brief: Mr Morley asked for information about members of the local authority’s Youth Council who had provided input into a planning application. The local authority withheld the names of the Youth Councillors (who were minors) under s. 40(2) of FOIA (personal data). In a majority decision, the First-Tier Tribunal ordered that some of those names be disclosed, principally on the grounds that the names seemed to appear on the Youth Council’s (closed) Facebook page.

The local authority and the ICO challenged that decision. The Upper Tribunal (Judge Jacobs) has agreed with them. He found the dissenting opinion of the First-Tier Tribunal member to have been the more sophisticated (as opposed to the overly generalised analysis of the majority) and ultimately correct. The Youth Councillors’ names were correctly withheld.

In his analysis of the First Data Protection Principle, Judge Jacobs was not much bothered by whether fairness or condition 6(1) (the relevant Schedule 2 condition) should be considered first: “the latter is but a specific instance of the former”.

Judge Jacobs found that there was no sufficient interest in the disclosure of the names of the Youth Councillors. He also rejected the argument that, by putting their names on the relevant Facebook page, the data subjects had implicitly consented to public disclosure of their identities in response to such a FOIA request.

Judge Jacobs stopped short, however, of finding that the personal data of minors should never be disclosed under FOIA, i.e. that the (privacy) interests of children would always take precedence over transparency. Maturity and autonomy matter more than mere age in this context, and sometimes (as here) minors are afforded substantial scope to make their own decisions.

Morley is an important case on the intersection between children’s personal data and transparency, particularly in the social media context, but – as Judge Jacobs himself observed – “it is by no means the last word on the subject”.

There were 11KBW appearances by Joseph Barrett (for the local authority) and Heather Emmerson (for the ICO).

Robin Hopkins @hopkinsrobin

Facebook fan pages: data protection buck stops with Facebook, not page owners

October 22nd, 2013 by Robin Hopkins

In Re Facebook, VG, Nos. 8 A 37/12, 8 A 14/12, 8 A 218/11, 10/9/13, the Schleswig-Holstein Administrative Court has allowed Facebook’s appeals against rulings of the regional data protection authority (the ULD), headed by Thilo Weichert.

The case involved a number of companies’ use of Facebook fan pages. The ULD’s view was that Facebook breached German privacy law, including through its use of cookies, facial recognition and other data processing. Weichert considered that, by using Facebook fan pages, the companies were facilitating Facebook’s violations by processing users’ personal data on those pages. He ordered them to shut down the fan pages or face fines of up to €50,000.

The appellant companies argued that they could not be held responsible for data protection violations (if any) allegedly committed by Facebook, as they had no control over how that data on the pages was processed and used by the social networking site. The Administrative Court agreed.

The case raises interesting questions about where the buck stops in terms of data processing – both in terms of who controls the processing, and in terms of where they are based. Facebook is based in Ireland, without a substantive operational presence in Germany. Earlier this year, the Administrative Court found – again against the Schleswig-Holstein ULD’s ruling – that Facebook’s ‘real names’ policy (i.e. a ban on pseudonymised profiles) was a matter for Irish rather than German law.

The ULD is unlikely to be impressed by the latest judgment, given that Weichert is reported as having said in 2011 that:

“We see a much bigger privacy issue behind the Facebook case: the main business model of Google, Apple, Amazon and others is based on privacy law infringements. This is the reason why Facebook and all the other global internet players are so reluctant in complying with privacy law: they would lose their main profit resource.”

For more on this story, see links here and here.

Robin Hopkins

Privacy and data protection developments in 2013: Google, Facebook, Leveson and more

March 11th, 2013 by Robin Hopkins

Data protection law was designed to be a fundamental and concrete dimension of the individual’s right to privacy, the primary safeguard against misuse of personal information. Given those ambitions, it is surprisingly rarely litigated in the UK. It also attracts criticism as imposing burdensome bureaucracy but delivering little in the way of tangible protection in a digital age. Arguably then, data protection law has tended to punch below its weight. There are a number of reasons for this.

One is that Directive 95/46/EC, the bedrock of data protection laws in the European Union, is the product of a largely pre-digital world; its drafters can scarcely have imagined the ubiquity of Google, Twitter, Facebook and the like.

Another is that in the UK, the evolution of Article 8 ECHR and common law privacy and breach of confidence actions has tended to deprive the Data Protection Act 1998 of the oxygen of litigation – before the House of Lords in Campbell v MGN [2004] UKHL 22, for example, it was agreed that the DPA cause of action “added nothing” to the supermodel’s breach of confidence claim (para. 130).

A further factor is that the DPA 1998 has historically lacked teeth: a court’s discretion to enforce subject access rights under s. 7(9) is “general and untrammelled” (Durant v FSA [2003] EWCA Civ 1746 at para. 74); damages under s. 13 can only be awarded if financial loss has been incurred, and the Information Commissioner has, until recently, lacked robust enforcement powers.

This landscape is, however, undergoing very significant changes which (one hopes) will improve data protection’s fitness for purpose and amplify its contribution to privacy law. Here is an overview of some of the more notable developments so far in 2013.

The draft Data Protection Regulation

The most fundamental feature of this landscape is of course EU law. The draft DP Regulation, paired with a draft Directive tailored to the crime and security contexts, was leaked in December 2011 and published in January 2012 (see Panopticon’s analysis here). The draft Regulation, unlike its predecessor, would be directly effective and therefore not dependent on implementation through member states’ domestic legislation. Its overarching aim is harmonisation of data protection standards across the EU: it includes a mechanism for achieving consistency, and a ‘one-stop shop’ regulatory approach (i.e. multinationals are answerable only to their ‘home’ data protection authority). It also tweaks the law on international data transfers, proposes that most organisations have designated data protection officers, offers individuals a ‘right to be forgotten’ and proposes eye-watering monetary penalties for data protection breaches.

Negotiations on that draft Regulation are in full swing: the European Parliament and the Council of the European Union’s DAPIX (Data Protection and Information Exchange) subgroup are working on their recommendations separately before coming together to approve the final text (for more detail on the process, see the ICO’s outline here).

What changes, if any, should be made to the draft before it is finalised? That rather depends on who you ask.

In January 2013, the UK government set out its views on the draft Regulation. It did so in the form of its response to the recommendations of the Justice Select Committee following the latter’s examination of the draft Regulation. This is effectively the government’s current negotiation stance at the EU table. It opposes direct effect (i.e. it wants a directive rather than a regulation), thinks the ‘right to be forgotten’ as drafted is misconceived, favours charging for subject access requests and opposes the mandatory data protection officer requirement. The government considers that promoters of the draft have substantially overestimated the savings which the draft would deliver to business. The government also “believes that the supervisory authorities should have more discretion in the imposition of fines and that the proposed removal of discretion, combined with the higher levels of fines, could create an overly risk-averse environment for data controllers”. For more on its stance, see here.

The ICO also has significant concerns. It opposes the two-stream approach (a mainstream Regulation and a crime-focused Directive) and seeks clarity on pseudonymised data and non-obvious identifiers such as logs of IP addresses. It thinks the EU needs to be realistic about a ‘right to be forgotten’ and about its power over non-EU data controllers. It considers the current proposal to be “too prescriptive in terms of its administrative detail” and unduly burdensome for small and medium-sized enterprises in particular.

Interestingly, while the ICO favours consistency in terms of sanctions, it cautions against total harmonisation on all fronts: “Different Member States have different legal traditions. What is allowed by law is not spelled out in the UK in the way that it is in some other countries’ legal systems. The proposed legislation needs to reflect this, particularly in relation to the concept of ‘legitimate interests’.” For more on the ICO’s current thinking, see here.

Those then are the most influential UK perspectives. At an EU level, the European Parliament’s report on the draft Regulation is more wholeheartedly supportive. The European Parliament’s Industry Committee is somewhat more business-friendly in its focus, emphasising the importance of EU-wide consistency and a ‘one-stop shop’. Its message is clear: business needs certainty on data protection requirements. It also urges further exemptions from data protection duties for small and medium-sized enterprises “which are the backbone of Europe’s economy”. The Industry Committee’s views are available here.

Negotiations continue, the aim being to finalise the text by mid-2013. The European Parliament is likely to press for the final text to resemble the draft very closely. On the other hand, Ireland holds the Presidency of the Council of the European Union – and with it the chair of DAPIX – until mid-2013. Its perspective is probably closer to the UK ICO’s in tenor. There are good prospects that at least some of those views will be reflected in the final draft.

A number of the themes of the draft Regulation and the current negotiations are already surfacing in litigation, as explained below.

The Leveson Report

Data protection legislation in the UK will be affected not only by EU developments but by domestic ones too.

In recent weeks, debate about Leveson LJ’s report on the culture, practices and ethics of the press has tended to focus on the Defamation Bill which is currently scraping its way through Parliament. In particular, the debate concerns the merits of an apparently-Leveson inspired amendment tabled by Lord Puttnam which, some argue, threatens to derail this legislative overhaul of libel law in the UK (for one angle on this issue, see David Allen Green’s piece in the New Statesman here).

The Leveson Report also included a number of recommendations for changes to the DPA 1998 (see Panopticon’s posts here and here). These included overhauling and expanding the reach of the ICO and allowing courts to award damages even where no financial loss has been suffered (arguably a fitting change to a regime concerned at heart with personal privacy).

The thorniest of Leveson LJ’s DPA recommendations, however, concerned the wide-ranging ‘journalism exemption’ provided by s. 32. The ICO has begun work on a code of practice on the scope and meaning of this exemption. It has conducted a ‘framework consultation’, i.e. one seeking views on the questions to be addressed by the code, rather than the answers at this stage (further consultation will happen once a code has been drafted).

There is potential for this code to exert great influence: s. 32(3) says that in considering whether “the belief of a data controller that publication would be in the public interest was or is a reasonable one, regard may be had to his compliance with” any relevant code of practice – if it has been designated by order of the Secretary of State for this purpose. There is as yet no indication of an appetite for such designation, but it is hoped that, the wiser the code, the stronger the impetus to designate it.

The ICO’s framework consultation closes on 15 March. Watch out for (and respond to) the full consultation in due course.

Google – confidentiality, informed consent and data-sharing

Google (the closest current thing to a real ‘panopticon’?) has been the subject of a flurry of important recent developments.

First, certain EU data protection bodies intend to take “repressive action” against some of Google’s personal data practices. These bodies include the French authority, CNIL (the Commission nationale de l’informatique et des libertés) and the Article 29 Working Party (an advisory body made up of data protection representatives from member states). In October 2012, following an investigation led by CNIL, the Working Party raised what it saw as deficiencies in Google’s confidentiality rules. It recommended, for example, that Google provide users with clearer information on issues such as how personal data is shared across Google’s services, and on Google’s retention periods for personal data. Google was asked to respond within four months. CNIL has reported in recent weeks that Google did not respond. The next step is for the Working Party “to set up a working group, led by the CNIL, in order to coordinate their repressive action which should take place before summer”. It is not clear what type of “repressive action” is envisaged.

Google and the ‘right to be forgotten’

Second, Google is currently involved in litigation against the Spanish data protection authority in the Court of Justice of the EU. The case arises out of complaints made to that authority by a number of Spanish citizens whose names, when Googled, generated results linking them to false, inaccurate or out-of-date information (contrary to the data protection principles) – for example, an old story reporting that a surgeon had been charged with criminal negligence, without mentioning that he had been acquitted. The Spanish authority ordered Google to remove the offending entries. Google challenged this order, arguing that it was for the authors or publishers of those websites to remedy such matters. The case was referred to the CJEU by the Spanish courts. The questions referred are available here.

The CJEU considered the case at the end of February, with judgment expected in mid-2013. The case is obviously of enormous relevance to Google’s business model (at least as regards the EU). Also, while much has been made about the ‘right to be forgotten’ codified in the draft EU Regulation (see above), this Google case is effectively about whether that right exists under the current law. For a Google perspective on these issues, see this blog post.

Another development closer to home touches on similar issues. The Court of Appeal gave judgment last month in Tamiz v Google [2013] EWCA Civ 68. Mr Tamiz complained to Google about comments on the ‘London Muslim’ blog (hosted by Google) which he contended were defamatory in nature. He asked Google to remove that blog. He also sought permission to serve proceedings on Google in California for defamation occurring between his request to Google and the taking down of the offending blog. Agreeing with Google, the Court of Appeal declined jurisdiction and permission to serve on Google in California.

Mr Tamiz’s case failed on the facts: given the small number of people who would have viewed this blog post in the relevant period, the extra-territorial proceedings ‘would not be worth the candle’.

The important points for present purposes, however, are these: the Court of Appeal held that there was an arguable case that Google was the ‘publisher’ of those statements for defamation purposes, and that it would not have an unassailable defence under s. 1 of the Defamation Act 1996. Google provided the blogging platform subject to conditions and had the power to block or remove content published in breach of those conditions. Following Mr Tamiz’s complaint, Google knew or ought to have known that it was causing or contributing to the ongoing publication of the offending material.

A ‘publisher’ for defamation purposes is not co-extensive with a ‘data controller’ for DPA purposes. Nonetheless, these issues in Tamiz resonate with those in the Google Spain case, and not just because of their ‘right to be forgotten’ subtext. Both cases raise this question: is it right to hold Google to account for its role in making false, inaccurate or misleading personal information available to members of the public? If it is, another question might also arise in due course: to what extent would Leveson-inspired amendments to the s. 32 DPA 1998 exemption (on which the ICO is consulting) affect service providers like Google?

Facebook, Google and jurisdiction

The Google Spain case also involves an important jurisdictional argument. Google’s headquarters are in California. It argued before the CJEU that Google Spain only sells advertising to the parent company, and that these complaints should therefore be considered under US data protection legislation. In other words, it argues, this is not a matter for EU data protection law at all. The Spanish authority argues that Google Spain’s ‘centre of gravity’ is in Spain: it links to Spanish websites, has a Spanish domain name and processes personal data about Spanish citizens and residents.

Victory for Google on this point would significantly curtail the data protection rights of EU citizens in this context.

Also on jurisdictional matters, Facebook has won an important recent victory in Germany. Schleswig-Holstein’s Data Protection Commissioner had ruled that Facebook’s ‘real names policy’ (i.e. its ban on accounts held under pseudonymous names) was unfair and unlawful. The German administrative court granted Facebook’s application for the suspension of that order on the grounds that the issue should instead be considered by the Irish Data Protection Authority, since Facebook is Dublin-based.

Here then, is an example of ‘one-stop shop’ arguments surfacing under current EU law. The ‘one-stop shop’ principle is clearly very important to businesses. In the Facebook case, it would no doubt say that its ‘home’ regulator understands its business much better and is therefore best equipped to assess the lawfulness of its practices. The future of EU law, however, is as much about consistency across member states as about offering a ‘one-stop shop’. The tension between ‘home ground advantage’ and EU-wide consistency is one of the more interesting practical issues in the current data protection debate.

Enforcement and penalties issued by the ICO

One of the most striking developments in UK data protection law in recent years has been the ICO’s use of its enforcement and (relatively new) monetary penalty powers.

On the enforcement front, the Tribunal has upheld the ICO’s groundbreaking notice issued against Southampton City Council for imposing audio recording requirements in taxis (see Panopticon’s post here).

The issuing of monetary penalties has continued apace, with the ICO having issued in the region of 30 notices in the last two years. In 2013, two have been issued.

One (£150,000) was on the Nursing and Midwifery Council, for losing three unencrypted DVDs relating to a nurse’s misconduct hearing, which included evidence from two vulnerable children. The second (£250,000) was on a private sector firm, Sony Computer Entertainment Europe Limited, following the hacking of Sony’s PlayStation Network Platform in April 2011, which the ICO considered “compromis[ed] the personal information of millions of customers, including their names, addresses, email addresses, dates of birth and account passwords. Customers’ payment card details were also at risk.”

In the only decision of its kind to date, the First-Tier Tribunal upheld a monetary penalty notice issued against Central London Community Healthcare NHS Trust for faxing patient details to the wrong number (see Panopticon’s post here). The First-Tier Tribunal refused the Trust permission to appeal against that decision.

Other penalty notices are being appealed to the Tribunal – these include the Scottish Borders notice (which the Tribunal will consider next week) and the Tetrus Telecoms notice, the first to be issued under the Privacy and Electronic Communications Regulations 2003.

It is only a matter of time before the Upper Tribunal or a higher court considers a monetary penalty notice case. At present, however, there is no binding case law. To that extent, the monetary penalty system is a somewhat uncertain business.

The question of EU-wide consistency raises more fundamental uncertainty, especially when one considers the mandatory fining regime proposed in the draft EU Regulation, with fines of up to €1,000,000 or 2% of the data controller’s global annual turnover.

By way of contrast, 13 administrative sanctions for data protection breaches were issued in France in 2012, the highest fine being €20,000. Enforcement in Germany happens at a regional level, with Schleswig-Holstein regarded as on the stricter end; overall however, few fines are issued in Germany. How the ‘one-stop shop’ principle, the consistency mechanism and the proposed new fining regime will be reconciled is at present anyone’s guess.

From a UK perspective, however, the only point of certainty as regards monetary penalty notices is that there will be no slowing down in the ICO’s consideration of such cases in the short- to medium-term.

It is of course too early to say whether the developments outlined above will elevate data protection law from a supporting to a leading role in protecting privacy. It is clear, however, that – love them or hate them – data protection duties are increasingly relevant and demanding.

Robin Hopkins