GCHQ’s internet surveillance – privacy and free expression join forces

July 3rd, 2014 by Robin Hopkins

A year ago, I blogged about Privacy International’s legal challenge – alongside Liberty – against GCHQ, the Security Services and others concerning the Prism/Tempora programmes which came to public attention following Edward Snowden’s whistleblowing. That case is now before the Investigatory Powers Tribunal. It will be heard for 5 days, commencing on 14 July.

Privacy International has also brought a second claim against GCHQ: in May 2014, it issued proceedings concerning the use of ‘hacking’ tools and software by intelligence services.

It has been announced this week that Privacy International is party to a third challenge which has been filed with the Investigatory Powers Tribunal. This time, the claim is being brought alongside 7 internet service providers: GreenNet (UK), Chaos Computer Club (Germany), GreenHost (Netherlands), Jinbonet (Korea), Mango (Zimbabwe), May First/People Link (US) and Riseup (US).

The claim is interesting on a number of fronts. One is the interplay between global reach (see the diversity of the claimants’ homes) and this specific legal jurisdiction (the target is GCHQ and the jurisdiction is the UK – as opposed, for example, to bringing claims in the US). Another is that it sees private companies – and therefore Article 1 Protocol 1 ECHR issues about property, business goodwill and the like – surfacing in the UK’s internet surveillance debate.

Also, the privacy rights not only of ‘ordinary’ citizens (network users) but also specifically those of the claimants’ employees are being raised.

Finally, this claim sees the right to free expression under Article 10 ECHR – conspicuously absent, for example, in the Google Spain judgment – flexing its muscle in the surveillance context. Privacy and free expression rights are so often in tension, but here they make common cause.

The claims are as follows (quoting from the claimants’ press releases):

(1) By interfering with network assets and computers belonging to the network providers, GCHQ has contravened the UK Computer Misuse Act and Article 1 of the First Additional Protocol (A1AP) of the European Convention of Human Rights (ECHR), which guarantees the individual’s peaceful enjoyment of their possessions

(2) Conducting surveillance of the network providers’ employees is in contravention of Article 8 ECHR (the right to privacy) and Article 10 ECHR (freedom of expression)

(3) Surveillance of the network providers’ users that is made possible by exploitation of their internet infrastructure, is in contravention of Arts. 8 and 10 ECHR; and

(4) By diluting the network providers’ goodwill and relationship with their users, GCHQ has contravened A1AP ECHR.

Robin Hopkins @hopkinsrobin

Privacy, electronic communications and monetary penalties: new Upper Tribunal decision

June 12th, 2014 by Robin Hopkins

Panopticon reported late last year that the First-tier Tribunal overturned the first monetary penalty notice issued by the Information Commissioner for breaches of the Privacy and Electronic Communications Regulations 2003. This was the decision in Niebel v IC (EA/2012/0260).

The Information Commissioner appealed against that decision. The Upper Tribunal gave its decision on the appeal yesterday: see here, IC v Niebel (GIA/177/2014). It dismissed the Commissioner’s appeal and upheld the First-tier Tribunal’s cancellation of the £300,000 penalty imposed for the sending of marketing text messages.

I appeared in this case, as did James Cornwell (also of the Panopticon fold), so I will not be offering an analysis of the case just now. With any luck, one of my colleagues will be cajoled into doing so before too long.

It is worth pointing out simply that this is the first binding decision on the meaning of the various limbs of s. 55A of the DPA 1998, which contains the preconditions for the issuing of a monetary penalty notice.

Robin Hopkins @hopkinsrobin

Google Spain and the CJEU judgment it would probably like to forget

May 19th, 2014 by Akhlaq Choudhury

In the landmark judgment in Google Spain SL and Google Inc. v Agencia Española de Protección de Datos and Mario Costeja González (13 May 2014), the CJEU found that Google is a data controller and is engaged in processing personal data within the meaning of Directive 95/46 whenever an internet search about an individual results in the presentation of information about that individual with links to third party websites.  The judgment contains several findings which fundamentally affect the approach to data protection in the context of internet searches, and which may have far-reaching implications for search engine operators as well as other websites which collate and present data about individuals.

The case was brought by Mr Costeja González, who was unhappy that two newspaper reports of a 16-year-old repossession order against him for the recovery of social security debts would come up whenever a Google search was performed against his name. He requested that both the newspaper and Google Spain/Google Inc. remove or conceal the links to the reports, on the basis that the matter had long since been resolved and was now entirely irrelevant. The Spanish Data Protection Agency rejected his complaint against the newspaper on the basis that publication was legally justified. However, his complaint against Google was upheld. Google took the matter to court, which made a reference to the CJEU.

The first question for the CJEU was whether Google was a data controller for the purposes of Directive 95/46. Going against the opinion of the Advocate General (see earlier post), the Court held that the collation, retrieval, storage, organisation and disclosure of data undertaken by a search engine when a search is performed amounted to “processing” within the meaning of the Directive; and that as Google determined the purpose and means of that processing, it was indeed the controller. This is so regardless of the fact that such data is already published on the internet and is not altered by Google in any way.

The Court went on to find that the activity of search engines makes it easy for any internet user to obtain a structured overview of the information available about an individual, thereby enabling them to establish a detailed profile of that person involving a vast number of aspects of his private life.  This entails a significant interference with rights to privacy and to data protection, which could not be justified by the economic interests of the search engine operator.  In a further remark that will send shockwaves through many commercial operators providing search services, it was said that as a “general rule” the data subject’s rights in this regard will override “not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name” (at paras 81 and 97). Exceptions would exist, e.g. for those in public life, where “the interference with…fundamental rights is justified by the preponderant interest of the general public in having…access to the information in question”.

However, the Court did not stop there with a mere declaration about interference. Given the serious nature of the interference with privacy and data protection rights, the Court said that search engines like Google could be required by a data subject to remove links to websites containing information about that person, even without requiring simultaneous deletion from those websites.

Furthermore, the CJEU lent support to the “right to be forgotten” by holding that the operator of a search engine could be required to delete links to websites containing a person’s information. The reports about Mr Costeja González’s financial difficulties in 1998 were no longer relevant having regard to his right to private life and the time that had elapsed, and he had therefore established the right to require Google to remove links to the relevant reports from the list of search results against his name. In so doing, he did not even have to establish that the publication caused him any particular prejudice.

The decision clearly has huge implications, not just for search engine operators like Google, but also other operators providing web-based personal data search services. Expect further posts in coming days considering some of the issues arising from the judgment.

Akhlaq Choudhury

Interfering with the fundamental rights of practically the entire European population

April 10th, 2014 by Robin Hopkins

In the Digital Rights Ireland case, the Grand Chamber of the CJEU has this week declared invalid the 2006 Directive which provides for the mass retention – and disclosure to policing and security authorities – of individuals’ online traffic data. It found this regime to be a disproportionate interference with privacy rights. Depending on your perspective, this is a major step forward for digital privacy, or a major step backwards in countering terrorism and serious crime. It probably introduces even more uncertainty in terms of the wider project of data protection reform at the EU level. Here is my synopsis of this week’s Grand Chamber judgment.

Digital privacy vs national security: a brief history

There is an overlapping mesh of rights under European law which aims to protect citizens’ rights with respect to their personal data – an increasingly important strand of the broader right to privacy. The Data Protection Directive (95/46/EC) was passed in 1995, when the internet was in its infancy. It provides that personal data must be processed (obtained, held, used, disclosed) fairly and lawfully, securely, for legitimate purposes and so on.

Then, as the web began to mature into a fundamental aspect of everyday life, a supplementary Directive was passed in 2002 (2002/58/EC) on privacy and electronic communications. It is about privacy, confidentiality and the free movement of electronic personal data in particular.

In the first decade of the 21st century, however, security objectives became increasingly urgent. Following the London bombings of 2005 in particular, the monitoring of would-be criminals’ web activity was felt to be vital to effective counter-terrorism and law enforcement. The digital confidentiality agenda needed to make space for a measure of state surveillance.

This is how Directive 2006/24 came to be. In a nutshell, it provides for traffic and location data (rather than content-related information) about individuals’ online activity to be retained by communications providers and made available to policing and security bodies. This data was to be held for a minimum of six months and a maximum of 24 months.

That Directive – like all others – is however subject to the EU’s Charter of Fundamental Rights. Article 7 of that Charter enshrines the right to respect for one’s private and family life, home and communications. Article 8 is about the right to the protection and fair processing of one’s personal data.

Privacy and Digital Rights Ireland prevail

Digital Rights Ireland took the view that the 2006 Directive was not compatible with those fundamental rights. It asked the Irish Courts to refer this to the CJEU. Similar references were made in separate litigation before the Austrian Courts.

The CJEU gave its answer this week. In Digital Rights Ireland Ltd v Minister for Communications, Marine and Natural Resources and Others (C‑293/12) joined with Kärntner Landesregierung and Others (C‑594/12), the Grand Chamber held the 2006 Directive to be invalid on the grounds of its incompatibility with fundamental privacy rights.

The Grand Chamber accepted that, while privacy rights were interfered with, this was in pursuit of compelling social objectives (the combating of terrorism and serious crime). The question was one of proportionality. Given that fundamental rights were being interfered with, the Court would allow the European legislature little leeway: anxious scrutiny would be applied.

Here, in no particular order, are some of the reasons why the 2006 Directive failed its anxious scrutiny test (quotations are all from the Grand Chamber’s judgment). Unsurprisingly, this reads rather like a privacy impact assessment which data controllers are habitually called upon to conduct.

The seriousness of the privacy impact

First, consider the nature of the data which, under Articles 3 and 5 of the 2006 Directive, must be retained and made available. “Those data make it possible, in particular, to know the identity of the person with whom a subscriber or registered user has communicated and by what means, and to identify the time of the communication as well as the place from which that communication took place. They also make it possible to know the frequency of the communications of the subscriber or registered user with certain persons during a given period.”

This makes for a serious incursion into privacy: “Those data, taken as a whole, may allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained, such as the habits of everyday life, permanent or temporary places of residence, daily or other movements, the activities carried out, the social relationships of those persons and the social environments frequented by them.”

Second, consider the volume of data gathered and the number of people affected. Given the ubiquity of internet communications, the 2006 Directive “entails an interference with the fundamental rights of practically the entire European population”.

Admittedly, the 2006 regime does not undermine “the essence” of data protection rights (because it is confined to traffic data – the contents of communications are not retained), and is still subject to data security rules (see the seventh data protection principle under the UK’s DPA 1998).

Nonetheless, this is a serious interference with privacy rights. It has objective and subjective impact: “it is wide-ranging, and it must be considered to be particularly serious… the fact that data are retained and subsequently used without the subscriber or registered user being informed is likely to generate in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance.”

Such a law, said the Grand Chamber, can only be proportionate if it includes clear and precise laws governing the scope of the measures and providing minimum safeguards for individual rights. The 2006 Directive fell short of those tests.

Inadequate rules, boundaries and safeguards

The regime has no boundaries, in terms of affected individuals: it “applies even to persons for whom there is no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with serious crime”.

It also makes no exception for “persons whose communications are subject, according to rules of national law, to the obligation of professional secrecy”.

There are no sufficiently specific limits on the circumstances in which the retained data can be accessed by security bodies, on the purposes to which that data can be put by those bodies, or on the persons with whom those bodies may share it.

There are no adequate procedural safeguards: no court or administrative authority is required to sign off the transfers.

There are also no objective criteria for justifying the retention period of 6-24 months.

The Grand Chamber’s conclusion

In summary, the Grand Chamber found that “in the first place, Article 7 of Directive 2006/24 does not lay down rules which are specific and adapted to (i) the vast quantity of data whose retention is required by that directive, (ii) the sensitive nature of that data and (iii) the risk of unlawful access to that data, rules which would serve, in particular, to govern the protection and security of the data in question in a clear and strict manner in order to ensure their full integrity and confidentiality. Furthermore, a specific obligation on Member States to establish such rules has also not been laid down…”

There was also an international transfer aspect to its concern: “in the second place, it should be added that that directive does not require the data in question to be retained within the European Union…”

This last point is of course highly relevant to another of the stand-offs between digital privacy and national security which looms in UK litigation, namely the post-Snowden litigation against security bodies.

Robin Hopkins @hopkinsrobin

The Google/Safari users case: a potential revolution in DPA litigation?

January 16th, 2014 by Robin Hopkins

I posted earlier on Tugendhat J’s judgment this morning in Vidal-Hall and Others v Google Inc [2014] EWHC 13 (QB). The judgment is now available here – thanks as ever to Bailii.

This is what the case is about: a group of claimants say that, by tracking and collating information relating to their internet usage on the Apple Safari browser without their consent, Google (a) misused their private information (b) breached their confidences, and (c) breached its duties under the Data Protection Act 1998 – in particular, under the first, second, sixth and seventh data protection principles. They sought damages and injunctive relief.

As regards damages, “what they claim damages for is the damage they suffered by reason of the fact that the information collected from their devices was used to generate advertisements which were displayed on their screens. These were targeted to their apparent interests (as deduced from the information collected from the devices they used). The advertisements that they saw disclosed information about themselves. This was, or might have been, disclosed also to other persons who either had viewed, or might have viewed, these same advertisements on the screen of each Claimant’s device” (paragraph 24).

It is important to note that “what each of the Claimants claims in the present case is that they have suffered acute distress and anxiety. None of them claims any financial or special damage. And none of them claims that any third party, who may have had sight of the screen of a device used by them, in fact thereby discovered information about that Claimant which was detrimental” (paragraph 25).

The Claimants needed permission to serve proceedings on the US-based Google. They got permission and served their claim forms. Google then sought to have that service nullified, by seeking an order declaring that the English court has no jurisdiction to try these particular claims (i.e. it was not saying that it could never be sued in the English courts).

Tugendhat J disagreed – as things stand, the claims will now progress before the High Court (although Google says it intends to appeal).

Today’s judgment focused in part on construction of the CPR rules about service outside of this jurisdiction. I wanted to highlight some of the other points.

One of the issues was whether the breach of confidence and misuse of private information claims were “torts”. Tugendhat J said this of the approach: “Judges commonly adopt one or both of two approaches to resolving issues as to the meaning of a legal term, in this case the word “tort”. One approach is to look back to the history or evolution of the disputed term. The other is to look forward to the legislative purpose of the rule in which the disputed word appears”. Having looked to the history, he observed that “history does not determine identity. The fact that dogs evolved from wolves does not mean that dogs are wolves”.

The outcome (paragraphs 68-71): misuse of private information is a tort (and the oft-cited proposition that “the tort of invasion of privacy is unknown in English law” needs revisiting) but breach of confidence is not (given Kitetechnology BV v Unicor GmbH Plastmaschinen [1995] FSR 765).

Google also objected to the DPA claims being heard. This was partly because they were raised late; this objection was dismissed.

Google also said that, based on Johnson v MDU [2007] EWCA Civ 262; (2007) 96 BMLR 99, financial loss was required before damages under section 13 of the DPA could be awarded. Here, the Claimants alleged no financial loss. The Claimants argued against the Johnson proposition: they relied on Copland v UK 62617/00 [2007] ECHR 253, argued for a construction of the DPA that accords with Directive 95/46/EC as regards relief, and argued that – unlike in Johnson – this was a case in which their Article 8 ECHR rights were engaged. Tugendhat J has allowed this to proceed to trial, where it will be determined: “This is a controversial question of law in a developing area, and it is desirable that the facts should be found”.

If the Johnson approach is overturned – i.e. if the requirement for financial loss is dispensed with, at least for some types of DPA claim – then this could revolutionise data protection litigation in the UK. Claims under section 13 could be brought without claimants having suffered financially due to the alleged DPA breaches they have suffered.

Tugendhat J went on to find that there were sufficiently serious issues to be tried here so as to justify service out of the jurisdiction – it could not be said that they were “not worth the candle”.

Further, there was an arguable case that the underlying information was, contrary to Google’s case, “private” and that it constituted “personal data” for DPA purposes (Google say the ‘identification’ limb of that definition is not met here).

Tugendhat J was also satisfied that this jurisdiction was “clearly the appropriate one” (paragraph 134). He accepted the argument of Hugh Tomlinson QC (for the Claimants) that “in the world in which Google Inc operates, the location of documents is likely to be insignificant, since they are likely to be in electronic form, accessible from anywhere in the world”.

Subject to an appeal from Google, the claims will proceed in the UK. Allegations about Google’s conduct in other countries are unlikely to feature. Tugendhat J indicated a focus on what Google has done in the UK, to these individuals: “I think it very unlikely that a court would permit the Claimants in this case to adduce evidence of what Mr Tench refers to as alleged wrongdoing by Google Inc against other individuals, in particular given that it occurred in other parts of the world, governed by laws other than the law of England” (paragraph 47).

Robin Hopkins @hopkinsrobin

High Court to hear Safari users’ privacy claim against Google

January 16th, 2014 by Robin Hopkins

Panopticon has from time to time reported on Google’s jurisdictional argument when faced with privacy/data protection actions in European countries: it tends to argue that such claims should be dismissed and brought in California instead. This argument is not always successful.

The same jurisdictional argument was advanced before Mr Justice Tugendhat in response to a claim brought by a group calling itself ‘Safari Users Against Google’s Secret Tracking’ who, as their name suggests, complain that Google unlawfully gathers data from Safari browser usage.

This morning, Mr Justice Tugendhat dismissed that jurisdictional argument. The case can be heard in the UK. Matthew Sparkes reports in the Daily Telegraph that the judge said “I am satisfied that there is a serious issue to be tried in each of the claimant’s claims for misuse of private information” and that “the claimants have clearly established that this jurisdiction is the appropriate one in which to try each of the above claims”.

The same article says that Google will appeal. This follows Google’s announcement yesterday that it will appeal a substantial fine issued by the French data protection authority for unlawful processing (gathering and storing) of user data.

Panopticon will continue to gather data on these and other Google-related matters.

Robin Hopkins @hopkinsrobin

Facebook fan pages: data protection buck stops with Facebook, not page owners

October 22nd, 2013 by Robin Hopkins

In Re Facebook, VG, Nos. 8 A 37/12, 8 A 14/12, 8 A 218/11 (10/9/13), the Schleswig-Holstein Administrative Court has allowed Facebook’s appeals against rulings of the regional data protection authority (the ULD), headed by Thilo Weichert.

The case involved a number of companies’ use of Facebook fan pages. The ULD’s view was that Facebook breached German privacy law, including through its use of cookies, facial recognition and other data processing. He considered that, by using Facebook fan pages, the companies were facilitating Facebook’s violations by processing users’ personal data on those pages. He ordered them to shut down the fan pages or face fines of up to €50,000.

The appellant companies argued that they could not be held responsible for data protection violations (if any) allegedly committed by Facebook, as they had no control over how that data on the pages was processed and used by the social networking site. The Administrative Court agreed.

The case raises interesting questions about where the buck stops in terms of data processing – both in terms of who controls the processing, and in terms of where they are based. Facebook is based in Ireland, without a substantive operational presence in Germany. Earlier this year, the Administrative Court found – again against the Schleswig-Holstein ULD’s ruling – that Facebook’s ‘real names’ policy (i.e. a ban on pseudonymised profiles) was a matter for Irish rather than German law.

The ULD is unlikely to be impressed by the latest judgment, given that he is reported as having said in 2011 that:

“We see a much bigger privacy issue behind the Facebook case: the main business model of Google, Apple, Amazon and others is based on privacy law infringements. This is the reason why Facebook and all the other global internet players are so reluctant in complying with privacy law: they would lose their main profit resource.”

For more on this story, see links here and here.

Robin Hopkins

Fingerprints requirement for passport does not infringe data protection rights

October 22nd, 2013 by Robin Hopkins

Mr Schwarz applied to his regional authority, the city of Bochum, for a passport. He was required to submit a photograph and fingerprints. He did not like the fingerprint part. He considered it unduly invasive. He refused. So Bochum refused to give him a passport. He asked the court to order it to give him one. The court referred to the Court of Justice of the European Union questions about whether the requirement to submit fingerprints in addition to photographs complied with the Data Protection Directive 95/46/EC.

Last week, the Fourth Chamber of the CJEU gave its judgment: the requirement is data protection-compliant.

The requirement had a legal basis, namely Article 1(2) of Council Regulation 2252/2004, which set down minimum security standards for identity-confirmation purposes in passports.

This pursued a legitimate aim, namely preventing illegal entry into the EU.

Moreover, while the requirements entailed the processing of personal data and an interference with privacy rights, the ‘minimum security standards’ rules continued to “respect the essence” of the individual’s right to privacy.

The fingerprint requirement was proportionate because while the underlying technology is not 100% successful in fraud-detection terms, it works well enough. The only real alternative as an identity-verifier is an iris scan, which is no less intrusive and is technologically less robust. The taking of fingerprints is not very intrusive or intimate – it is comparable to having a photograph taken for official purposes, which people don’t tend to complain about when it comes to passports.

Importantly, the underlying Regulation provided that the fingerprints could only be used for identity-verification purposes and that there would be no central database of fingerprints (instead, each set is stored only in the passport).

This is all common-sense stuff in terms of data protection compliance. Data controllers take heart!

Robin Hopkins

Refusal to destroy part of a ‘life story’ justified under Article 8(2) ECHR

October 4th, 2013 by Robin Hopkins

The High Court of Justice (Northern Ireland) has today given judgment In the matter of JR60’s application for judicial review [2013] NIQB 93. The applicant sought to challenge the right of two Social Care Trusts to keep and use various records generated when she was a resident of children’s homes and a training school between 1978 and 1991.

In most cases of challenges to the retention of records, the applicant seeks to expunge information which suggests they have done wrong. This application is interesting because it focused (though not exclusively) on what the applicant had suffered, as opposed to what she had done. In short, she wished to erase from the record a part of her life story which was painful for her to recall. The application failed: there were weightier reasons for retaining those records, and in any event whatever her current wish to forget matters of such import, she might come to change her mind.

The applicant was described as having had a very difficult childhood, to which those records relate. It was not known who her father was. She had grown up to achieve impressive qualifications. Horner J described her as having “survived the most adverse conditions imaginable and triumphed through the force of her will. By any objective measurement she is a success”.

She wished to move on, and to have the records about her childhood expunged. The Trusts refused; their policy was to retain such information for a 75-year period. The applicant challenged this refusal on Article 8 ECHR grounds. Horner J readily agreed that the retention of such information interfered with her rights under Article 8, but dismissed her application on the grounds that the interference was justified.

The applicant had argued that (i) she did not intend to make any claim for ill-treatment or abuse while she was in care, (ii) she did not want to retrieve information about her life story, (iii) she did not want the records to be used to carry out checks on her, as persons who were not in care would not be burdened by such records in respect of their early lives, and (iv) she did not want others, including her own child, to be able to access these records.

In response to the applicant’s assertion that she did not want and did not envisage wanting access to her records, Horner J said this at paragraph 19:

“Even if the applicant does not want to know at present what is in her records, it does not follow that she may not want to find out in the future what they contain for all sorts of reasons. She may, following the birth of a grandchild, be interested in her personal history for that grandchild’s sake. She may want to find out about her genetic inheritance because she may discover, for example, that she, or her off-spring, is genetically predisposed to a certain illness whether mental or physical. She may want to know whether or not this has been passed down through her mother’s side or her father’s side. There may be other reasons about which it is unnecessary to speculate that will make her want to seek out her lost siblings. There are any number of reasons why she may change her mind in the future about accessing her care records. Of course, if the records are destroyed then the opportunity to consider them is lost forever.”

The Trusts argued that they needed to retain such records for the purposes of their own accountability, any background checks on the applicant or related individuals which may become necessary, for the purposes of (hypothetical) public interest issues such as inquiries, and for responding to subject access requests under the Data Protection Act 1998. Horner J observed that the “right for an individual to be able to establish details of his or her identity applies not just to the Looked After Child but also, inter alia, to that child’s offspring”.

In the circumstances, the application failed; the Trusts’ interference with the applicant’s Article 8 rights was justified.

Horner J added a short concluding observation about the DPA (paragraph 29):

“It is significant that no challenge has been made to the Trust’s storage of personal information of the applicant on the basis that such storage constitutes a breach of the Data Protection Act 1998. This act strengthens the safeguards under the 1984 Act which it replaced. The Act protects “personal data which is data relating to a living individual who can be identified from data whether taken alone or read with other information which is the possession (or is likely to come into possession) of the data controller: see 12-63 of Clayton and Tomlinson on The Law of Human Rights (2nd Edition). It will be noted that “personal” has been interpreted as almost meaning the same as “private”: see Durant v Financial Services Authority [2004] FSR 28 at paragraph [4].”

Robin Hopkins

One hundred years of solicitude

July 29th, 2013 by Robin Hopkins

In 2004, a man known as TD was arrested for an alleged sexual assault. He was interviewed twice. No further action was taken. The biometric data was in due course destroyed, as will be the case for others in such positions, thanks to provisions of the Protection of Freedoms Act 2012. But 40 pages of information about his arrest and the allegation are to be retained by the Metropolitan Police in the form of crime reports, and a record will be retained on the Police National Computer until 2104, when the claimant would be 128 years old. The Metropolitan Police’s policy (of August 2012) concerning Serious Specified Offences provides for retention of such information – without review – for a century. It contends that such long-term policing solicitude as regards these types of allegations is supported by research conducted by University College London in 2009.

TD sought judicial review of this retention decision (i.e. the refusal to delete this information). Last week, in R (TD) v Commissioner of Police for the Metropolis and Secretary of State for the Home Department [2013] EWHC 2231 (Admin), Moses LJ and Burnett J dismissed his application.

The Court surveyed the relevant line of domestic and Strasbourg authorities which have abounded in recent years: R(L), R (C) and (J), S v UK, Catt, MM v UK (the majority of which are covered in Panopticon’s archive).

The Police said that its policy would need to be reviewed, but that it was too early to say that the records about TD were of no use.

Moses LJ said this (paragraph 14):

“It is necessary to be cautious as to how far the considerations of the use to which the records may be put take the Commissioner.  Every record of an allegation of crime may be of use for the indefinite future, as the research to which the Commissioner refers demonstrates.  This was the very argument on which the United Kingdom Government relied in Strasbourg in S, relying on the “inestimable value” of the data [91].  But S shows that the fact that material is of potential use, and, certainly, of greater use than in Catt, is not dispositive.  Weighed against that there remains the discomfort or worse that any citizen must feel when the state retains personal information about him, particularly when it relates to an allegation, however unfounded, of a sexual nature.  In S, it was recognised that the mere storage and retention of the data amounted to an interference within the meaning of Article 8 (para 67).”

He concluded, however (and Burnett J agreed) that (paragraph 16):

“In my view, now that only nine years have elapsed and in the knowledge that access to the information is restricted to those who seek to investigate a crime it seems to me, like Richards LJ in J, that the Commissioner has demonstrated that the use to which the records of the allegation may be put justifies their retention, at least for the time being.”

The important qualifier was that the Police’s policy should provide for a review of the retention decision, but again, it was considered too early to order any such review in this case.

This will not be the last in this line of cases. The jurisprudential debate about balancing policing utility with the privacy rights of suspects – particularly concerning the question ‘how long is too long?’ – continues.

Robin Hopkins (@hopkinsrobin)