


Privacy and Data Protection at the time of Facial Recognition: towards a new right to Digital Identity? [1]

Shara Monteleone [2]

Cite as: Monteleone, S, 'Privacy and Data Protection at the time of Facial Recognition: towards a new right to Digital Identity?' European Journal of Law and Technology, Vol. 3 No. 3

Abstract

Globalisation and the development of information technologies promise to increase the quality of human communications and to foster general economic growth. What, however, is the impact of this digital evolution on fundamental rights, such as privacy or data protection (recently codified in the Lisbon Treaty) and digital identity (a possible new right in the future)?

The aim of this paper is to identify the main challenges and risks that these rights face within a global digital environment and, at the same time, to highlight the opportunities that human rights (HRs) protection could derive from digital technologies, if they are properly deployed. It will consider as case studies some significant 'digital events' of the Web 2.0 era, namely the recent developments in the functionality of social networking sites (SNS), e.g. Facebook's default privacy settings and the launch of its new facial recognition system: these examples will be used as starting points for a wider discussion on the evolution of digital rights and in particular of data protection rights.

Moreover, the SNS example will prompt reflection on users' online attitudes concerning data protection and identity management, and on the increasing use of networked technologies in the digital environment, often relying on human physical/behavioural attributes, which strain some of the main data protection requirements (e.g. consent). These reflections will lead to canvassing the possibility of theorising a right to digital identity, as a comprehensive right for the citizen/user of the digital era, and of implementing it by means of a suitable legal-technical approach.

1. Introduction

As indicators of the state of 'health' of users' rights in the digital world, Facebook's new default privacy settings and its facial recognition functionality will be discussed in the first part of this paper. The Facebook case also constitutes an example of the use of special categories of personal data (biometrics) by a social networking site, with legal implications in terms of data protection, privacy and the user's management of digital identity. Based in particular on the findings of the Eurobarometer (EB) on users' attitudes as regards personal data and identity protection, on recent Opinions of the Article 29 Working Party (Art. 29 WP) and on the recent Report of the UN Special Rapporteur on freedom of expression, Frank La Rue (2011), the analysis contained in the second part of the paper will highlight some of the shortcomings of the current data protection legal framework, which the recent Proposal of the European Commission for a Regulation on Data Protection seems only partially apt to solve. [3] [4] [5]

It will be argued that legal measures of hard and soft law, at the international or regional level, should maintain the degree of rigidity necessary to ensure the protection of rights against the rights-infringing tendencies at work in the digital world. At the same time, though, legal instruments can be helpful and effective only if they are able to adapt to new technologies and to people's changing attitudes towards them. This is likely to be one of the challenges that the announced new European Data Protection framework will have to face.

The analysis will lead to the claim that not only the current legal framework, but also the current legal approach, needs to be reviewed in order to adapt to the impending socio-technological changes and to counter the new threats that target the individual in his 'digital identity' as a whole. After briefly discussing the concept of digital identity and its legal implications, the current idea of digital identity will be widened so as to cover a larger scope of management and control over the user's activities online. It will be suggested that the codification of a 'right to digital identity' is desirable in order to help users enjoy fully the advantages of information technologies while keeping control of their digital representation and, at the same time, to facilitate the exercise of other fundamental rights, such as freedom of expression.

2. Data Protection and Digital Identity at the time of Social Networking

This part of the article looks at the challenges and opportunities arising from the use of digital technologies, considering in particular the new functionalities of social networking sites (SNS), for the 'new rights' of data protection (recently codified by the Lisbon Treaty) and digital identity (a possible 'new' right in the near future). Thanks to new mobile applications and Web 2.0, the social networking of a large number of individuals has been made easier, digitalised and monetised. As emerges from Eurobarometer 359 (2011), more than a third of European citizens access SNS and more than half of those also use websites to share pictures, videos, movies, etc. As the main use of SNS is to enable online socialising, this necessarily implies the disclosure of social (personal) information, and, although users consider 'social' information to be personal, they are less cautious about sharing it. After a brief discussion of the world's most popular social networking site, Facebook, and of its new functionality ('tagging' suggestions based on facial recognition), the focus will be on some of the main threats raised by its current management of personal data and digital identity.

Facebook's facial recognition functionality

Facebook (FB) has attracted much criticism in the last few years for its continuous, arbitrary changing of the default privacy settings of its members' accounts. [6] Some users have become accustomed - if not indifferent - to these periodic changes, while others have reacted strongly. There are, however, different kinds of changes. Mark Zuckerberg's company has, in fact, faced strong opposition from privacy advocates to the launch of its facial recognition feature, first in the US (December 2010) and later in Europe (June 2011). [7]

This feature consists of 'tag suggestion', i.e. the automatic suggestion of people's names to tag in pictures without their permission. The novelty lies in particular in the fact that the feature uses facial recognition software to enable this automatic tagging: when a user posts a new photo on his FB page (for instance of a group of friends), the system automatically matches the faces in the photo with the names of his friends, directly suggesting people's names on the basis of pictures in which they have already been tagged (Palmer M, 2011a). [8] The new function seems to pose many risks for users, as denounced by the Article 29 Working Party (Palmer M, 2011b), the group of EU data protection 'watchdogs'. The main point is that FB, by arbitrarily changing the default privacy settings, has left users with only the possibility of opting out of this automatic recognition (and only if and when they realise the change has occurred).
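The general mechanism described above - a face found in a newly uploaded photo is compared against templates built from pictures in which friends were previously tagged, and the closest match above a threshold becomes the suggested name - can be illustrated with a minimal sketch. The names, threshold and embedding representation below are hypothetical; this is not Facebook's actual implementation.

```python
# Illustrative sketch of tag suggestion via face-template matching.
# Names, threshold and embeddings are hypothetical, for illustration only.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_tag(new_face_embedding, friend_templates, threshold=0.8):
    """Return the friend whose stored template best matches the face in the
    uploaded photo, if the match exceeds the threshold; otherwise None."""
    best_name, best_score = None, threshold
    for name, template in friend_templates.items():
        score = cosine_similarity(new_face_embedding, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Templates derived from photos in which friends were already tagged.
friends = {"Alice": np.random.rand(128), "Bob": np.random.rand(128)}
print(suggest_tag(np.random.rand(128), friends))
```

The legally salient point is visible even in this toy version: the matching runs on stored biometric reference templates, and nothing in the mechanism itself asks the tagged person for consent.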

Several issues arise from the introduction of this FB function, related in particular to the automatic suggestions/tagging (vs the user's autonomy to choose) and to the hidden changes in default settings (vs transparency and consent). As noted by the Art. 29 WP in its 'Opinion on facial recognition' (2012):

'Registered users [of a SNS] must also have been given a further option as to whether or not they consent to their reference template to be enrolled into the identification database. Non-registered and registered users who have not consented to the processing will therefore not have their name automatically suggested for a tag […]' [9]

Main threats for users' personal data, privacy and identity

The facial recognition system just described raises a number of legal concerns due to its intrinsic characteristics: because it is an automatic tagging system, it does not take into account the consent of users; the feature is active by default on users' accounts; and it is based on biometric data. The case lends itself to analysis under the connected but partially different perspectives of a) data protection, b) privacy and c) digital identity.

a)    The data protection perspective: the unlimited collection of biometric data

The main issues, from the data protection (DP) perspective, arise essentially as a consequence of the opt-out system and of the lack of information given to users. FB started rolling out the feature without sufficiently notifying users, drawing attention back to two basic (but essential) concepts of the DP legal framework: the automatic collection of data and the user's consent.

The first problem, the opt-out system, is not new to DP and privacy advocates; it means that the user has not been asked to give her prior consent to data processing but can decide to withdraw consent afterwards. In theory this is not always in contradiction with DP principles, but, as recently stated by the Art. 29 WP (2011), [10] the opt-out approach adopted specifically in SNS for default privacy settings must be disapproved. This procedure of expecting to obtain consent based on the inaction or silence of individuals is intrinsically ambiguous, as it cannot reveal the real will of the data subject. In its 'Opinion on facial recognition in online and mobile services' (2012), the Art. 29 WP stresses that, in the context of facial recognition technologies, consent for enrolment cannot be derived from the user's general acceptance of the overall terms and conditions of the underlying service:

'Users should be explicitly provided with the opportunity to provide their consent for this feature either during registration or at a later date'.  

Not clicking (i.e. remaining passive) cannot be considered as unambiguous consent to data processing, at least not according to the current European legislation, namely article 7 of the DP Directive. [11]

The issue of the data subject's consent makes a meaningful comeback: according to the interpretation offered by the Art. 29 WP, FB and other SNS need to ask for prior consent and to inform users about changes in the default settings: an opt-out approach cannot be accepted. [12] This entails at least two considerations: 1) the principle of privacy by default - which should be the rule, according to the European Commission (2010), the EDPS (2011) and the Art. 29 WP (2009) - is not respected in the SNS's privacy default settings; instead the opposite situation occurs, 'Facebook's decision-by-default'; 2) another issue, common to many SNS, is that it is not easy for the user to see the changes introduced in the default settings, as they are often not made clearly visible, and also because it is not necessary to access the default settings to use a social network (with the consequence that users may remain unaware of changes for a long time). This relates to the information obligation and the transparency requirements imposed on the data controller, which are often not respected by the many online companies that have increasing power over users' data and digital spaces and that tend to hide, or at least keep unclear, the way in which they use them.
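The contrast between privacy by default (opt-in) and the 'decision-by-default' (opt-out) criticised above can be made concrete with a small, hypothetical sketch of an account-settings model: under privacy by default a new feature stays inactive until the user explicitly consents, whereas under an opt-out roll-out the feature is switched on silently and the user's inaction is treated as acceptance.

```python
# Hypothetical sketch contrasting privacy by default (opt-in) with an
# opt-out roll-out in which inaction is treated as consent.
class AccountSettings:
    def __init__(self):
        self.features = {}          # feature name -> enabled?
        self.explicit_consent = {}  # feature name -> user said yes?

    def roll_out_opt_in(self, feature):
        """Privacy by default: the feature stays off until consent is given."""
        self.features[feature] = False

    def roll_out_opt_out(self, feature):
        """'Decision by default': the feature is enabled without asking."""
        self.features[feature] = True

    def give_consent(self, feature):
        self.explicit_consent[feature] = True
        self.features[feature] = True

settings = AccountSettings()
settings.roll_out_opt_out("facial_recognition_tag_suggestions")
# The feature is active although the user never expressed consent:
assert settings.features["facial_recognition_tag_suggestions"] is True
assert "facial_recognition_tag_suggestions" not in settings.explicit_consent
```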

Already in May 2010 the Art. 29 WP had rebuked Facebook for continuously changing the default settings of its social-networking platform to the detriment of its users (Out-law.com, 2010). Though users were asked to confirm the changes (which would have made much more personal data available to more members than before, if not to the Internet as a whole), many users simply confirmed the pre-selected options tout court: this does not contradict, but rather supports, the idea of default settings as important tools in personal data protection. The Art. 29 WP addressed its concerns not only to FB but to 20 social networking companies. In particular, the WP addressed the issue of third-party applications, stating that SNS providers 'should grant users a maximum control about which profile data can be accessed by a third party application on [to be noticed] a case-by-case basis', i.e. not as a default setting.

However, the user's consent is not a panacea for every sort of data processing and identity management: obtaining consent, as stressed by the Art. 29 WP (2011), does not exempt the controller/ISP from its obligations (ex article 6 of the DP Directive: fairness, necessity, proportionality, data quality); in particular, except in limited cases, consent does not legitimise activities that would otherwise be prohibited (for instance, concerning sensitive data in some countries), nor does it permit the use of data for further purposes (unless based on legal grounds, as stated in Directive 95/46/EC articles 7 and 8(2) and in the new Proposal for a General Data Protection Regulation, article 6 on 'Lawfulness of processing' and article 7 on 'Conditions of consent').

With the introduction of this facial recognition system, it is not only the arbitrary change of default settings and the opt-out system itself which has triggered the European scrutiny of Facebook, but rather its underlying purposes and the collection of biometric data connected with it. Many European data protection regulators are, in fact, concerned that this automatic system is aimed at collecting biometric data, as declared by Lommel, a member of the Art. 29 WP, in an interview with Businessweek (Bodoni, S, 2011). For reasons of space, the topic of biometric data cannot be covered fully in this paper, but some aspects, given their relevance to the theme addressed here, should be mentioned: the conditions for processing biometrics in compliance with the fundamental right to privacy (for instance, ex Art. 8 of the European Convention on Human Rights and Art. 7 of the EU Charter of Fundamental Rights, as discussed under section 'b' below) and the protection of the 'right to be forgotten', as a way to make consent more effective and thus more related to informational privacy, and so to data protection. [13]

In its online dimension, the right to be forgotten is understood as the right of individuals 'to have their data no longer processed and deleted when they are no longer needed for legitimate purposes'. [14] In Europe it receives legal acknowledgement: so far as derived from the general data protection rules and, recently, as embodied in an explicit provision of the Proposal for a General Data Protection Regulation (Art. 17). [15] In contrast, in the US there is no automatic right to demand that a company erase personal information. According to the current European DP Directive, the data subject can withdraw consent to the storage of his personal data by companies, unless there is a legitimate reason to store the data. This should be the rule, a fortiori, for biometric data, because their innate characteristics (universal, unique, personal) give rise to major concerns for the protection of users' rights (De Vries, K, De Hert, P and Bellanova, R, 2010).

At the moment this is not the case in the U.S. It can be noted that Europe has specific rules regarding the applicable law also in the case of data processed by a third-country service provider with an establishment and activities in the EU (Art. 29 WP, 2010a), and that a specific provision has been introduced in the recent Proposal for a General DP Regulation (article 3), stating that EU law applies to the processing of data in the context of the activities of an establishment of a controller in the Union, regardless of whether the processing itself takes place within the Union or not. However, concerns remain in Europe too, since the main decisions on, and the activation of, new invasive techniques (like facial recognition) are taken at the headquarters of a third-country company.

Therefore, with regard to Facebook's facial recognition feature, two observations can be made about consent: 1) biometrics started to be used without users' unambiguous consent; in fact, the user who uploaded a picture (or whose picture was uploaded by others) and did not opt out of the facial recognition function may not even have known of, or wanted, his picture being viewed or automatically tagged by the system; 2) even if the data is used according to the user's will, his right to be forgotten is not ensured in the event that he changes his mind, at least in the U.S., where companies are not (yet) required to respect it. [16] The economic relevance of users' data for companies like Facebook is evident not only in the recent Facebook IPO (early 2012), but also in the reactions of many Internet companies (in the U.S. as well as in other countries) to the European Commission's announcement of the future codification of the right to be forgotten through the adoption of the proposed General DP Regulation (Batchelor, C, 2012).

Positions supporting the adoption of a right to be forgotten in the U.S. as well are not lacking; Falkenrath (2012), for instance, pointed out the urgent need for a right to be forgotten, especially after the new privacy policy released by Google at the beginning of 2012. In his view, this policy indicates Google's intention to maintain digital files on its users indefinitely, to identify its users across all its services and to integrate this data across all of Google's services - web search, Gmail, Picasa, YouTube, Earth etc. - regardless of the users' consent:

'Individual users should have a legal right to be forgotten that supersedes whatever authorizations they granted when they created their accounts […] Importantly, the power, as well as the privacy risk, of big data comes principally from the metadata - the information on where, when and by whom the data were created. Google, like other companies, declares that it protects consumers' content, but 'treats metadata as business records it can retain and manipulate indefinitely'. Therefore, '[t]he distinction between user content and service provider's metadata is complex but central to whether a right to be forgotten will have teeth.' [17]

Facebook under judicial control

Despite the lack of general, organic data protection and privacy legislation in the U.S., an active role in respect of users' digital rights is played by civil society associations and privacy groups (e.g., the Electronic Privacy Information Center (EPIC), Privacy International, etc.). In June 2011 EPIC, in particular, filed a complaint with the FTC over this new facial recognition system, urging the Federal Trade Commission (the US consumer regulator) to investigate FB, to determine whether the company has in fact engaged in unfair trade practices, and to require FB to give users meaningful control over personal information. [18]

In Europe many DP regulators strongly contested FB's initiative, especially the facial recognition feature. The firmest condemnation came from Johannes Caspar of the Hamburg state data protection authority, who suspected that the real purpose of the company, behind the declared advantage for users of having their 'online experience improved', was the unlimited retention of biometric data, which is unlawful under European data protection principles. [19] The Hamburg commissioner's inquiry was thus based on the claim that 'the social networking giant is illegally compiling a huge database of members' photos without their consent'. [20] [21]

However, there are other legal issues to be considered. Serious risks of continuous tracking and surveillance of users arise in particular when data breaches or hacking attacks occur. Though access to this data could be restricted to a limited number of people, it is not difficult to imagine that, if the data were taken out of the system, the risks for users' privacy (and related rights, including free expression on the Internet) would be high (Rouvroy, A, 2010). Regardless of whether the facial biometrics are accessed lawfully or not, the diffusion of this new feature will tremendously increase the traceability of users by their unique biometrics, on the Internet and anywhere else, whether for commercial purposes (e.g. smart digital billboards for targeted advertisement) or for law enforcement aims (Singer N, 2011). The issue is complicated further by the fact that, by using facial recognition software to make photo-tagging easier, the feature could also turn out to involve the processing of sensitive data, enabling the detection of data related to the ethnic, religious or health status (real or alleged) of the individual behind the FB profile: European regulation is quite strict on the processing of this category of data (the current DP Directive provides for special rules concerning sensitive data at article 8), but the sensitive nature of the data that could be processed through this function is not so easy to recognise and thus to report.

Another, connected aspect of data protection related to the FB facial recognition function is the problem of the disclosure of biometric data to third parties (and, though the issues are different in terms of liability, the risk of unauthorised access by third parties): i.e., partners of FB, marketing companies, but also law enforcement agencies, for purposes different from those for which a FB member uploaded a picture. The criticism of FB is, in fact, also due to the company's announced intention to make more user data (users' preferences, Likes and other activities) visible to its business partners. FB's argument is that the changes are a mere simplification of data-sharing - 'social plug-ins to improve people's online experience' - but, as denounced by digital rights groups such as the Electronic Frontier Foundation and EPIC, in this way users are pushed to share even more data than before, reducing the amount of control over their personal data and over their identities, and contradicting FB's own representations.

The impression is that, instead of moving towards a user-centric system of data control and identity management, this kind of social networking site is oriented towards an FB-centric system, and that the changes in many cases represent unfair business practices. [22] It is probably for this reason that the Art. 29 WP has emphasised the need for the default settings of a social network to limit access to a user's profile data to self-selected contacts, with the consequence that further access - like that of search engines - should be an explicit choice of the user (intended as an opt-in choice). Otherwise the default privacy settings, instead of protecting users' private information (as the words would suggest), would have a boomerang effect, exposing the user without limit and leaving digital profiles and identities at the mercy of Internet companies (or intruders, such as identity thieves). The situation seems even more worrying given the recent acquisition (June 2012) by Facebook of Face.com, an Israeli facial recognition group, which will provide the SNS with sophisticated technology to boost its existing lead in photo-sharing and in automatically identifying people from pictures uploaded to the FB site (Bradshaw, T, 2012).

The Like button

A powerful mechanism used by SNS like Facebook or Twitter to track users during their online activities (e.g. website visits, blog posts, etc.) is the Like button, a technique that, through a direct plug-in to the FB platform, offers readers the opportunity to share articles and other kinds of content with their FB 'friends'. The technique has started to appear on many websites in Europe as well, receiving much criticism from privacy advocates for the continuous tracking of users, often aimed at creating detailed profiles for several purposes. [23] In particular, problems seem to derive from the use of cookies as part of this increasingly ubiquitous 'Like' button, which allows the tracking even of those users who are not members of the social network. It should be noted that this feature (adding user-tracking cookies to browsers) has become a central part of FB's appeal to companies, enabling them even to send messages directly to users who have clicked a Like button on their pages. [24]
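How an embedded button of this kind can follow a visitor across unrelated third-party sites - including visitors who have no account on the platform - can be sketched roughly as follows. The cookie name, identifiers and storage model below are hypothetical and deliberately simplified: the point is only that every page embedding the widget causes the browser to contact the platform, sending back a tracking cookie together with the referring page.

```python
# Rough sketch of cross-site tracking through an embedded social widget.
# Cookie names, identifiers and the storage model are hypothetical.
import uuid
from collections import defaultdict

browsing_history = defaultdict(list)  # tracking_id -> pages where the widget loaded

def serve_like_button(request_cookies, referring_page):
    """Called whenever a page embedding the widget is loaded.
    Works even if the visitor has no account on the platform."""
    tracking_id = request_cookies.get("tracker")
    if tracking_id is None:
        tracking_id = str(uuid.uuid4())  # first visit: a new cookie is set
    browsing_history[tracking_id].append(referring_page)
    return {"tracker": tracking_id}      # cookie sent back to the browser

# The same (non-member) visitor browsing two unrelated sites:
cookies = serve_like_button({}, "https://news.example/article-1")
serve_like_button(cookies, "https://shop.example/product-42")
print(dict(browsing_history))  # one profile now links both visits
```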

In this respect Germany can boast another uncompromising privacy watchdog, the Data Protection Authority of Schleswig-Holstein, which banned the Like functionality (August 2011) from the institutional websites operating in the territory of the district. [25] The Authority received responses from some users and bloggers, who saw in the measure an attempt to 'censor' freedom of information and freedom of expression. This should not be a surprise if one considers that, through the Like function, users share with friends content they want to discuss; it seems that most information is found by users on social networks and blogs rather than by clicking on giant search engines. On closer inspection, however, the restrictive decision of the German authority seems instead to try to preserve the freedom of expression of SNS users as well as their autonomy of choice over the Internet: the decisional autonomy that is necessary for users/citizens to exercise their civil and political rights in a democratic society, and which clashes with being continuously tracked by companies [26] and with being the object of behavioural constraints and influence [27] (e.g., being prompted to buy specific products or being induced to express, or not express, one's own opinions).

At the end of the day, the aim of the restrictive decision adopted by the Schleswig-Holstein authority would be to protect nothing less than the free building and online manifestation(s) of one's own personality, of one's own preferences, as well as of one's possible diverging opinions or non-mainstream thoughts: manifold attributes of one's personality that an individual may wish to manifest differently, according to the context and the interlocutors, without being tracked, observed and influenced. The aim would be, in other words, to protect one's own digital identity, which, as discussed below, needs to be recognised and ensured as a right: [28] protecting digital identity in order to enhance the user's empowerment and to protect other rights, freedom of expression in primis.

Data proliferation: disclosing data on Social Networking Sites

Another aspect that should be mentioned with regard to the extensive use of SNS is that of data proliferation, linked to the increasing trend to share information and the more or less voluntary disclosure of personal data (Eurobarometer, 2011). Faced with the success of SNS, sharing sites and shopping sites, and the voluntary disclosure of personal information, the principle of 'data minimisation', which has for years been one of the backbones of the European approach to DP, shows its limitations if not adequately integrated with, or substituted by, mechanisms that enhance the user's control over her data and digital identity. [29] Users are contributing to data proliferation and to a data economy in which personal information is a valuable currency (Acquisti A, Gritzalis S, Lambrinoudakis C, De Capitani di Vimercati S, 2008). In this context there is still a need for proper obligations to be imposed on companies, [30] which collect data for their own purposes, to limit their use to what is strictly necessary. As Leenes (2008) notes, a distinction should be made between those situations in which users voluntarily disclose personal data in order to buy an item or pay a bill and those in which users are not at all aware of the data (web pages visited, etc.) they leave behind regarding their online behaviour, which is collected by companies through identifiers like cookies.

As emerged in the U.S. on the occasion of the proposed 'Do Not Track Kids Act', the market itself could benefit if companies differentiated themselves by offering users different choices, for example about the automatic deletion of their data: some users would want this every month, whilst others may want to keep the data available forever (Vega, T, 2011). In this regard, Bernal (2011) argues for a change in business models in order to reduce the amount of data that companies hold. A higher level of user control should be sought as well, for instance through well-defined privacy options offered by the companies/service providers themselves. However, given that data-sharing and the use of SNS are growing, particularly among younger generations, the focus of legal and technical measures should be on ensuring more transparency and better user access to automatic mechanisms of control over their data and to an eraser button, which is still exceptional on popular websites. [31] It seems that even the recent overhaul of FB's privacy options, which will enable users to modify their privacy levels more easily, has been introduced more for economic/competition reasons (to bring FB into line with features of Google's new rival service, Google Plus) than for legal/compliance ones (Bradshaw T, 2011).

It could also be argued that this example supports the theory according to which the best solution to enhance privacy and data protection would be to leave market forces free to determine the rules, an idea which is especially prominent in the utilitarian approach to data protection of North American jurisprudence. [32] Several reasons, however, prevent us from arriving at this conclusion. It is one thing to recognise the importance for the market and economic growth of differentiated offers regarding privacy preferences, as well as self-regulation initiatives, or to acknowledge the advantages for users in terms of useful suggestions on products/services and an enriched online experience. [33] It is another to leave the user completely at the mercy of big online companies and their power to determine what value this or that piece of data currently has and what decisions to take accordingly. [34]

The European model of public guardianship, intervention and sanctions (at the national or supranational level) to protect privacy and data protection (a strongly regulated market) - from which to draw the concept of the data subject's consent as authorisation/permission rather than as a contract - is still to be considered, as a general model, the preferable one available, and its fundamental principles still valid (European Commission, 2010), though, as argued later, the European model needs the updates urged by the developments of our digital age. The new approach recently adopted by the FTC and mentioned above also seems to move in the direction of the European model. So it could even be maintained that it is not so much that rights would benefit from a market-oriented approach to privacy rights (lato sensu), but vice versa: the market receives positive feedback and input from rights-protecting corporate practices (European Parliament, 2011).

However, European legislation also shows its limits, for instance with regard to personal data that is collected unobtrusively (on the basis of the tracks left by users during their online activities): often users are encouraged by a company to download software products that are then used by the company to track Internet users and create profiles of them for targeted advertising purposes. More importantly, it shows its limits also as regards information which is not 'personal' according to the traditional definition of the DP Directive (for instance, because it is anonymised). [35] Even the processing of data that does not permit the specific identification of an individual (i.e. that does not allow, even indirectly, the real name or address to be obtained) could turn out to be even more invasive of individual privacy and autonomy (Leenes R E, 2008), and in particular of an individual's digital and real identity (as discussed below). What public and private entities care about more is not who we are, but how we behave: they monitor and observe partial aspects of our behaviour (preferences, attitudes, habits) in order to create profiles and fit us into categories of subjects (as consumers, electors, etc.) that we cannot control or are not even aware of. As indicated by Hildebrandt, 'the threats of profiling to our autonomy derive from not knowing how our data will form patterns that could disclose future behaviours, inclinations, health or other risks […] Profiling enables the construction of unexpected knowledge'. [36]

b) The Human Rights perspective:  from the right to privacy to the right to a digital identity

Considering the use of biometrics in FB's new functionality from the HR perspective of privacy and data protection rights, mention should be made of Art. 8 of the ECHR and Arts. 7 and 8 of the European Charter of Fundamental Rights. As stressed elsewhere (Monteleone, S. 2011), one of the main consequences of the entry into force of the Lisbon Treaty for data protection is the binding strength, as primary law, of the Charter (and of its Arts. 7 and 8). The horizontal effect of the Charter regarding all EU law and the new role of the European Parliament should contribute to ensuring that the 'fundamental right' to data protection is respected and implemented in the different activities to which EU law applies, including those in which the deployment of biometrics is increasing (e.g., public security) and in which relevant exceptions to the general rules on data protection are granted.

Relevant statements regarding the collection and use of biometrics have been made by the European Court of Human Rights (ECtHR) in the Marper case (S. and Marper v UK [2008] ECHR 1581). Though the context and purpose of the data processing in the Marper case (public security, police investigation) were different from those of the SNS analysed here, the principles affirmed on that occasion by the ECtHR could also apply to FB's facial recognition. The rationale that grounded the Court's decision was to avoid the risks for privacy and data protection triggered by the unlimited storage of data (data of a special nature capable of uniquely identifying a person). It can be argued that the same rationale is still valid, a fortiori, in the context of the biometrics and facial recognition functionality deployed by an SNS. The unlimited and unchecked storage of this special category of data raises even greater concerns for privacy rights if one considers the higher potential, in terms of precise personal identification and data dissemination, offered by the social networking system and its possible links with search engines. As argued by the Art. 29 WP (2012b):

'Biometric data, by their very nature, are directly linked to an individual. That is not always an asset but implies several drawbacks. For instance equipping video surveillance systems and smartphones with facial recognition systems based on social network databases could put an end to anonymity and untraced movement of individuals.' [37]

Still from the privacy rights perspective, the Report of the UN Special Rapporteur, Frank La Rue (2011), on the promotion and protection of the right to freedom of opinion and expression deserves a closer look, in particular for its statements on the relationship between privacy and data protection rights, on the one hand, and other rights, like freedom of expression, on the other.

The positive interplay between privacy and other rights

An interesting point of the UN Special Rapporteur's Report is its consideration of the instrumental nature of the right to privacy/data protection in relation to other rights: in particular, the positive interplay between privacy and freedom of expression. Although in several contexts these rights can clearly manifest themselves as opposing interests, the two are not necessarily antinomies, conflicting rights between which a balance must be struck (Solove, D, 2006). The inadequate protection of data protection and privacy is listed in La Rue's Report as one of the obstacles to freedom of expression online. In particular, (1) the lack of anonymity and (2) the ability of States or companies to access users' information are identified as factors that undermine not only privacy rights but also people's confidence in the Internet, impeding the free flow of information and ideas online. [38] The underlying claim is that unchecked access by governments or companies to what users read and share risks instilling the fear of criticism, the fear of reading what is unpopular or what the power (public or private) dislikes (Electronic Frontier Foundation, 2011): in other words, it would have the effect of chilling and restraining the right to seek and receive information.

The socio-political context and reasons to which the Special Rapporteur seems to refer clearly have little to do with situations such as 'gossiping, shaming and rumor-mongering', which require a thorough distinction between 'good speech and bad speech' and case-by-case balancing by judges between privacy and freedom of expression (Solove, D, 2007). However, as stressed by Solove (2007), even in these cases 'Protecting privacy can actually advance the reasons why we protect free speech in the first place […]'.

As a general rule, restrictions on the right to privacy should always comply with the HR framework, i.e. restrictive measures should be taken by States on the basis of an ad hoc authority's decision and in compliance with the proportionality and necessity principles. [39] [40] More importantly, according to La Rue, it is possible to derive from Article 17 ICCPR positive obligations on States to take legal measures in order to ensure privacy and data protection. Those obligations include laws that clearly guarantee the users' right to ascertain, in an intelligible way, whether data is stored in automatic data files and which public authorities/private bodies control those files. [41]

In particular, the Special Rapporteur, on the basis of the HR framework, calls upon States to ensure for citizens a 'right-to-express-themselves-anonymously' (hyphens added): a meaningful wording that includes the essential elements of both data protection/privacy rights and freedom of expression. Though the relevance of adequate privacy protection for the full enjoyment of freedom of expression on the Internet is quite well acknowledged (Rouvroy A, 2008; UN Special Rapporteur, Martin Scheinin, 2009), the idea that online anonymity should be ensured as a tool to protect freedom of expression and privacy/identity rights at the same time is not commonly shared. Relevant divergences among courts and legislation exist on this point, and this constitutes a good reason to recognise the importance of the Special Rapporteur's statements when he calls upon States to ensure this 'right-to-express-themselves-anonymously'. The right should be acknowledged for all citizens/users independently of their activities. [42]

This 'right-to-express-themselves-anonymously' has, in fact, a direct bearing on current issues, such as those related to the use of SNS. As emphasised by the Rapporteur, this right implies that the registration systems of these social platforms should not ask for real names. The facial recognition feature, with its automatic tagging, clearly goes in the opposite direction.

Though the dangers that can result in some contexts from anonymity, and the need in those cases to restrict the privacy right in favour of other prevailing rights (public security, right to justice, etc.), cannot be ignored, it must also be noted that the 'right-to-express-themselves-anonymously' is not automatically in conflict with the necessity to avoid these dangers, especially if proper legal and technical measures are adopted (Zarsky T Z, 2004). In this regard, mention should be made, for instance, of the principle of 'reversible anonymity' suggested by Poullet and Dinant (2007): accordingly, 'pseudo identities' can be allocated to individuals by specialist service providers who may be required to reveal a user's real identity, but only in circumstances and following procedures clearly laid down in law.

Solove's suggestion is not so different:

'One way to strike a balance between anonymity and accountability is to enforce 'traceable anonymity': in this way we preserve the right for people to speak anonymously, but in the event that one causes harm to another, we've preserved a way to trace who the culprit is. A harmed individual can get a court order to obtain the identity of an anonymous speaker only after demonstrating genuine harm.' [43]
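Both 'reversible anonymity' and Solove's 'traceable anonymity' can be pictured as a pseudonym service in which the mapping between pseudonym and real identity is held only by a trusted provider and disclosed solely upon a legally defined trigger such as a court order. The sketch below is a hypothetical illustration of that idea (the boolean court-order check stands in for whatever procedure the law would actually prescribe), not a description of any existing system.

```python
# Hypothetical sketch of reversible / traceable anonymity: users act under
# pseudonyms; re-identification is possible only under a legally defined
# condition, modelled here as a simple court-order flag.
import secrets

class PseudonymService:
    def __init__(self):
        self._mapping = {}  # pseudonym -> real identity (held only by the provider)

    def issue_pseudonym(self, real_identity):
        pseudonym = "user-" + secrets.token_hex(8)
        self._mapping[pseudonym] = real_identity
        return pseudonym

    def reveal(self, pseudonym, court_order_presented):
        """Re-identification only under the conditions laid down in law."""
        if not court_order_presented:
            raise PermissionError("re-identification requires a court order")
        return self._mapping[pseudonym]

service = PseudonymService()
alias = service.issue_pseudonym("Jane Doe")
# Ordinary readers and service providers see only the pseudonym; a harmed
# party must first obtain a court order to learn who actually spoke.
print(service.reveal(alias, court_order_presented=True))
```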

In the SNS context a typical paradox can be noticed: the desire to share moments of one's private life and the voluntary disclosure of personal data coexist with the claim to privacy/personal data protection; however, there is no paradox in someone wanting to have digital social relationships while expecting his choices (regarding the use of information on some aspects of his online private sphere) to be respected. In other words, even for those users who want to keep and share their data online (also for the long term), and even for those who wish to be tagged, a fair use and due representation of the partial aspects of their own personality should be ensured. [44]

As Leenes stresses, it is not 'an all or nothing issue' (Leenes R E, 2008): we decide to share personal information and pictures, but not with anybody, anywhere, anytime. In other words, even for those who do want to share information and keep data or pictures on their SNS pages for a long time, the protection of their data and of their (partial) identities should be ensured. Interpreting this 'right-to-express-oneself-anonymously' widely, it is also possible to see some basis for a new right with regard to digital identity management: the latter implies the extensive use of credentials (user names, passwords) for a user's identification and/or authentication in order to access online services; these should, however, be suitably processed so as to ensure respect for one's data protection and one's digital identity as defined above, and, through that, the user's freedom of expression. [45]

Considerations of digital identity are obviously not contemplated in the aforementioned UN Report. Although the statements contained in it are relevant, they are limited, considering only 'personal data' and the cases in which people are specifically identified (Special Rapporteur La Rue, 2011, p. 53). The Report's coverage of identity management inevitably coincides with that provided by the DP and privacy legislation currently available, i.e. it is restricted when other issues related to digital identity are considered: for instance, the risks deriving from the processing of non-'personal data', or from non-distributive group profiling, or from the use of unique identifiers (as in the case of biometrics and facial recognition) across multiple service providers, or, finally, the freedom of expression-related issues regarding anonymity (de Hert, 2007).

One can maintain that, though the reaction to the FB initiative from civil society and European regulators testifies to the general usefulness of privacy/DP principles, with regard to the online flow of data (and the affirmation of digital identities) these rules seem to work mainly as a posteriori measures, at the stage of punishing unlawful processing of data, and less as tools of user control. The two aspects mentioned above (privacy by default not respected; lack of transparency) remain legal issues to be solved with regard to digital technologies and their applications: that is, ensuring suitable transparency mechanisms (Hildebrandt M, 2010) and guaranteeing to users effective tools of control over their identities.

Moreover, as stressed by Olsen and Mahler (2007), DP and privacy legislation was not conceived to deal with the specific and more complex issues related to identity management, which is necessary and inevitable in accessing and using today's online world. The risk of reducing users' privacy and data protection could, in fact, potentially derive from identity management (IdM) itself. In brief, IdM systems seek to simplify and make more efficient the access to and use of online services offered by different Internet organisations, in particular by reducing the need to use different usernames and passwords for each authentication and identification required to obtain a service, and by making 'digital identities' transferable across service providers and identity providers.
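The basic idea behind such IdM systems - one identity provider vouching for the user towards many service providers, so that separate usernames and passwords are no longer needed - can be sketched as a signed assertion that each relying service verifies. The shared-secret signing below is a deliberate simplification offered only for illustration; real federation protocols (e.g. SAML or OpenID Connect) use richer formats and public-key cryptography.

```python
# Simplified sketch of federated identity management: an identity provider
# issues a signed assertion that several service providers accept, so the
# user does not need a separate username/password for each of them.
import hashlib
import hmac
import json

IDP_SECRET = b"demo-secret"  # hypothetical key shared with relying services

def issue_assertion(user_id, attributes):
    payload = json.dumps({"sub": user_id, "attrs": attributes}, sort_keys=True)
    signature = hmac.new(IDP_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def service_provider_accepts(assertion):
    expected = hmac.new(IDP_SECRET, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])

token = issue_assertion("user-42", {"nickname": "jd", "country": "IT"})
# The same token is portable across co-operating service providers:
print(service_provider_accepts(token))  # True
```

The privacy concern discussed in the text follows directly from this architecture: the same assertion, and the attributes attached to it, travel with the user across every relying service.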

While the portability of digital identities is good in terms of economic efficiency, as it improves online transactions, it also raises concerns in terms of privacy (Tene O, 2011) and of identity. [46] In order to realise suitable online identity portability across services and websites, a standardisation of privacy/identity policies should be ensured (if necessary, by law): this would not only increase the user's autonomy over her identity data (thanks to data portability), but also ensure a more harmonised level of security among the different websites to which a user may wish to migrate. As noted by de Hert (2007), the current legal instruments do not cover all the aspects of personal identity, understood, as Hildebrandt (2007) suggests, in dynamic terms, necessitating a mix of negative and positive freedoms to reconstruct one's identity in the course of time. [47] De Hert stresses that a new right to identity should be clearly distinguished from the classical (shielding) human rights, such as privacy: it is not only an issue of shielding persons against intrusions (negative aspect) but also one of making identity formation possible. [48]

c) The digital identity right perspective

The limitations observed in the current legal instruments of privacy and data protection raise the question of whether it is time to recognise and codify an identity right (de Hert, 2007) or, better, a digital identity right (within or beyond the HR framework). A prominent recent study on the concept of the digital identity right and on its difference from the right to privacy is that of Sullivan (2011), who considers, for the first time, the concept of 'transactional identity': the subset of prescribed information (names, date of birth, photograph and biometrics) which is required of an individual for a transaction (with a government agency or a private entity) under an identity scheme (such as a national identity scheme where individual information is matched with a national register). In this view, transactional identity has functions that go beyond the mere identification of the individual and that give it a 'distinctive legal character'. The discovery of this concept of identity prompted, in Sullivan's view, questions regarding the existence of a right to identity. She considers the right to identity as relating to individual autonomy, but in a sense different from privacy: as the right to be recognised and to transact as a unique individual. She stresses the emergence of a new legal concept of digital identity that 'changes the way in which an individual is recognised and how transactions are conducted'.

The discussion in Sullivan's work, however, focuses on digital identity in the context of national identity schemes - e.g., the UK and Australian national identity cards. This paper, focusing on the right to digital identity on the Internet and in particular on SNS, and on the partially opposing view that the user needs to act not necessarily as a unique individual but also as several personae, according to the online contexts and people, attempts to add a new perspective to the (few) existing studies on a right to digital identity. Often digital identity protection is associated almost exclusively with identity theft (i.e., obtaining information to commit fraud, usually linked to financial matters), but digital identity is something more and something different. It does not necessarily coincide with privacy and data protection (i.e. identity protection may not be ensured even though privacy and data protection rules are respected) [49] and, as maintained in this paper, it should be conceived more as the right to the free and untracked building and representation of one's own personality - as a 'digital persona'.

As noted above, the current framework does not seem to take into account the dangerous effects deriving from (non-distributive) group profiling applied by SNS, or those effects that are not based on 'personal data' (in the sense of the Directive). The (mis-)use of anonymised data can also have negative effects on individual identity construction. Identifying people by their precise names or addresses is becoming unnecessary, since, in order to create a profile and provide customised services, it may be sufficient to know, in some cases, only the identification number of their computer device and, in others, simply the categories to which a person is likely to belong (Wright D, 2009).

The current legal regime in particular does not consider indirect profiling techniques and non-distributive group profiling. [50] The latter is a technique that claims to assign specific characteristics or behavioural patterns to individuals, inferred not from the individual's specific personal data but from the probability that the individual belongs to a given profiling group (thus on the probability that that individual shares the attributes of the group, even though this may not be the case). [51] These considerations appear particularly meaningful when we return to analysing FB's facial recognition functionality from the perspective of a user's digital identity and the various ways in which this service could be linked with current or future profiling techniques. [52] Without doubt, in fact, the case of FB's facial recognition concerns users' identity, as the objective of the technology is explicitly that of identifying people in photos uploaded to the FB website. Digital identity is at stake also because the construction of one's FB profile and the way in which users express themselves contribute to defining and developing one's digital identity. [53] Moreover, technically speaking, FB's functionality is concerned with 'digital identity management' (Tene O, 2011).
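The mechanism of non-distributive group profiling referred to above can be illustrated with a toy example (all figures and attribute names are invented for illustration): an individual is assigned the attributes of a group merely because the estimated probability of membership crosses a threshold, whether or not those attributes actually hold for that particular person.

```python
# Toy illustration of non-distributive group profiling: a group's attributes
# are imputed to an individual purely on the estimated probability of group
# membership. All figures and labels are invented for illustration.
GROUP_PROFILE = {
    "segment": "high-credit-risk",
    "assumed_attributes": {"default_probability": 0.35,
                           "targeted_ads": "payday loans"},
}

def membership_probability(observed_behaviour):
    """Hypothetical score derived from observed browsing behaviour,
    not from the person's own personal data."""
    signals = {"visits_loan_sites", "late_night_browsing"}
    return len(signals & observed_behaviour) / len(signals)

def apply_group_profile(observed_behaviour, threshold=0.5):
    p = membership_probability(observed_behaviour)
    if p >= threshold:
        # The group's attributes are imputed even if untrue of this individual.
        return dict(GROUP_PROFILE["assumed_attributes"], membership_probability=p)
    return {}

print(apply_group_profile({"visits_loan_sites"}))
```

Note that no name, address or other 'personal data' in the traditional sense is needed at any point, which is precisely why this kind of profiling tends to escape the current legal regime.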

In a broad sense, the risks for identity can be identified, firstly, in the possible (correct or incorrect) connection of the facial recognition functionality with online companies' use of digital profiles - derived from the observation of users' online activities and of the several representations of users' identity in disparate contexts (marketing, targeted products, public services, etc.). [54] As emphasised by Leenes (2008), the problem with profiling is not the connected provision of personalised services in itself, but the gradual loss of control over one's digital identity, constructed and imposed by the ISP. Profiling, and especially non-distributive group profiling, may easily lead to the misrepresentation of an individual, on the basis of which decisions regarding the treatment of the user are taken. For instance, behavioural targeting - the emerging and successful business model for online companies - would in this case be enriched by the (correct or incorrect) association with a digital face. [55] The decisions that can have an impact on the user, however, could relate to more significant issues, such as allowing or denying the user access to a service/product, displaying this or that kind of information, being the object of discrimination, or being included in the database of a law enforcement authority. [56]

A major risk of the (mis-)use of users' online identity is the breach of the right to non-discrimination: discrimination that, in the case of facial recognition, would be based not only on the user's web preferences and online behaviour, but also on his biometric characteristics. [57] To offer an example of a risky context, it suffices to consider that today more and more SNS are already consulted by employers in order to obtain information regarding a possible candidate, or by the police within a criminal investigation. [58] In other words, as our daily activities move increasingly online, decisions regarding users based on online profiling could range from (apparently) inoffensive price or service discrimination to a more serious appraisal (e.g. in the employment or criminal investigation contexts).

Individual autonomy in making choices is also at stake in this context: one can think of the adverts displayed on the web pages visited by a user, which are most often based on her 'formal persona', i.e., on the digital identity imposed by Internet providers for their own purposes/interests. As observed by Leenes (2008): 'Users are unaware of what information caused the service provider to make certain decisions regarding the content that they get to see. The user, in other words, is unaware of what persona imposed upon him/her looks like'. Furthermore, the content of what we are looking at on a computer can be influenced by the user profile constructed by the ISP, which is often a commercial enterprise: search engines, for instance, provide free information services to users while counting on advertisements; they therefore have a commercial interest in knowing as much as possible about their users (including their biometrics), in order to improve the targeting of advertised products, and this can have an impact on information retrieval and information provision (Evans, D, 2009). In addition, such a facial recognition feature seems to threaten a relevant aspect of an individual user's existence, i.e. her digital identity as the 'individual capability to maintain different (public) images, essential in our social life', according to the idea of audience segregation applied to identity in the online world (Leenes, RE, 2008).

There is another aspect to consider, partly related to this concept of identity: the chilling effect that awareness of this facial recognition feature could exert on the freedom of the individual to express him/herself online, i.e. the restraining effect on freedom of expression. Building upon UN Rapporteur La Rue's conclusions, but trying to go a bit further, it can be argued that not only is data protection needed to ensure freedom of expression, but so, above all, is identity protection.

The introduction of an 'identity by default' principle could be asserted, covering situations that were not even imagined at the time of the Directive's adoption, as well as cases such as 'non-personal' data processing or the wide voluntary disclosure of data, i.e. that of the majority of SNS users' data: the data minimisation principle does not fit in very well with their attitudes. [59]

Regarding non-personal data, it should be noted that Internet activities make increasing use of 'digital personae' that are not necessarily linked to 'real personae' identifiable by a precise name and physical address: an ISP is more interested in users' preferences regarding products/services, their activities and trends than in their real identity (so that it does not need to identify the real person specifically by her family name or physical address). The problem is that the link between information on users' online behaviour (tracking of website visits) and the imposed digital construction of users' identity (e.g. non-distributive group profiling), though it may not, legally speaking, entail personal data processing, could be used in a way that, for the reasons mentioned above, has serious effects on the digital identity of an individual. The problem with profiling (especially the automatic application of profiles by machines) concerns the subliminal influences on the process of 'identity building' (Hildebrandt 2012).

A right to identity by default would therefore mean that a right to respect for one's own identity (or identities) - in the sense of a digital persona created and chosen by the user rather than unobtrusively imposed by the ISP - should be guaranteed: a right that the user could claim in the case of (mis-)representation of her identity in manners and contexts that do not correspond to those she intended. It is interesting, then, to note what FB declared in the aftermath of the German regulator's decision about the convenience of the facial recognition/tag suggestion feature, which would 'make it easier and safer for users to manage their online identities'. Looking at the new FB feature as an eID management system, can it really be seen as a 'safer' eID management system for the individual? [60]

3. Digital Identity: towards a 'new' right?

In the following section some of the main issues for privacy and data protection related to the management of electronic (or digital) identity (e-ID) will be briefly recalled. Although the existing legal regime provides some recourse for users, its limited impact on new digital technologies (as discussed above) and on their use by powerful online actors makes the parallel development of a more comprehensive approach a matter of the utmost urgency, leading to the acknowledgment of a right to 'digital identity' appropriate to the digital age: a legal instrument able to calibrate both the normativity of technology and the normativity of law while preserving the rule of law (Hildebrandt M, 2010).

Providing identity data when accessing an Internet service has become a common activity for people, whatever their purpose (work, social activities, health or leisure): people's identity data is converted into credentials for accessing online services (Stevens, T, Elliot, J, Hoikkanen, A, Maghiros, I and Lusoli W, 2010).

The interoperability and portability of e-ID credentials (the use of the same e-ID across several online operations and service providers) are considered essential for the development of future e-ID systems enabling the digital economy (Stevens, T, et al. 2010): in particular, the development of eID seems set to be a key driver for the European economy. [61] For this to become a reality, trusted, secure, interoperable eID technologies and authentication services are needed for daily online activities (transactions or access to public/private services). [62]

In other words, European users need to trust and be confident in e-ID management systems, and to be reassured as to the associated risks in terms of personal data protection and digital identity protection (European Commission, 2010a). [63] This could occur if a sound, clear framework is put in place covering not only the processing of personal data but a more comprehensive user e-identity management system. This should embrace 'strong' e-IDs (identity-related data issued through sophisticated technical procedures, such as those employed in transactional services) but also 'soft' e-IDs, such as the basic credentials (e.g. a password) used in SNS, blogs or other user-generated content sites, as well as all the related partial attributes (data about the user's preferences, location, online behaviour and, as seen in the FB case, facial characteristics) that contribute to creating a user's profile (Zarsky T, 2008).

It must be noted that the current data protection regulation encounters major problems especially with regard to multi-organisation identity management (Olsen T and Mahler T, 2007). [64] An appropriate interface should include, for instance, a suitable legal notice containing information on how the data will be used, and should allow the user to provide informed and unambiguous consent, [65] which seems to be missing in the FB case. However, the obligation to inform the user, deriving from this interaction between the user and the collaborators in the identity management, could be difficult to fulfil in multi-organisational settings: e.g., should the information be provided by each of the collaborating organisations, or would notice from one suffice? [66]

On this aspect, the design of what we can call 'identity settings' will be of paramount relevance, and should be explicitly acknowledged by the legal framework (though the technical details could be determined at the implementation stage). A set of 'conformed designed' technologies (i.e. technologies designed from the outset to incorporate due, modern legal norms) is thus desirable, realising the combination of legal-technical solutions that some scholars have advocated for years (Y Poullet, 2005; A Murray, 2006; 2010), while taking into account recent technological developments and the wider perspective of identity protection. [67]

What is proposed here is a combination (better: an integration) of legal and technical solutions, such as 'Do not track' mechanisms, a compulsory 'access-to-metadata button' or an 'erasure button', which would enhance user empowerment with regard to her digital identity as a whole. [68] [69]
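Purely as an illustration, the following minimal sketch (in Python, with hypothetical names and a toy in-memory store rather than any existing system) suggests what the service-side counterpart of an 'access-to-metadata button' and an 'erasure button' could look like: the user can inspect the records held about each item of identity-related data and trigger its deletion herself.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List


@dataclass
class MetadataRecord:
    """Metadata about a stored item: when, by whom and in what context it was created."""
    created_at: datetime
    created_by: str
    source_context: str  # e.g. "photo tag", "like button", "search query"


@dataclass
class IdentityDataStore:
    """Toy store: user id -> item id -> metadata for that item."""
    items: Dict[str, Dict[str, MetadataRecord]] = field(default_factory=dict)

    def access_metadata(self, user_id: str) -> List[MetadataRecord]:
        # 'Access-to-metadata button': the user sees every record about the data
        # held on her, not only the data themselves.
        return list(self.items.get(user_id, {}).values())

    def erase(self, user_id: str, item_id: str) -> bool:
        # 'Erasure button': deletion happens at the user's choice, not at the
        # provider's discretion.
        return self.items.get(user_id, {}).pop(item_id, None) is not None


store = IdentityDataStore()
store.items["alice"] = {"photo-42": MetadataRecord(datetime(2012, 6, 1), "bob", "photo tag")}
print(len(store.access_metadata("alice")))  # 1
print(store.erase("alice", "photo-42"))     # True
```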

Taking into account the FB case and users' manifest need for more transparency, the focus should be less on data minimisation and more on adequate transparency as well as on user-centric identity mechanisms. The relevance of minimum disclosure of data should not be excluded as such, but it should work more as a general principle with which new technical tools for the protection of users' rights should comply, unless and until users clearly manifest a contrary will. Although a right to transparency can already be inferred from the current DP and privacy framework, it is often frustrated by the difficulty of putting it into practice. As in the case of FB's recurrent changes, users may be unaware of the kind of data used. As stressed by Wong (2010),

'[u]sers should be aware of the ongoing perils associated with using social networking sites so that social networking remains an enjoyable and useful activity while unauthorized impersonations of life become a welcome thing of the past.' [70]

More attention to users' identity as a whole, and not only to their personal data, should lead to the recognition of a new right to digital identity, supported by well-pondered accountability rules and due obligations for those actors that, in the digital environment, have greater economic and decisional power regarding IdM (i.e., service providers and identity providers), [71] a power that creates a situation of electronic identity (eID) asymmetry. [72] This is probably a major challenge for the Revision Proposal on Data Protection, which introduces specific accountability norms but still seems trapped in a system of safeguards that risks being far removed from reality: for instance, the definition of and distinction between controller and processor does not seem to reflect the changing and blurring roles of the different actors online, including that of the user who creates content and manages others' personal data. [73]

A right to digital identity should be acknowledged; for the user it would signify:

  1. the right to access online services through modern e-ID (electronic identity) management systems and, at the same time, to have one's own identity neither misrepresented nor fragmented (and recomposed) at the mercy of dominant online companies;
  2. the right to create one's digital representation as an individual autonomously, i.e. to construct one's own digital persona (Leenes, 2008), rather than having it imposed by others (ISPs), and over which the user retains control;
  3. the right to have access to (and control over) one's own multiple e-IDs, guaranteed through the user-friendly processes of an identity management system;
  4. moreover, the right to use interoperable e-IDs, i.e. portable identity credentials, across the different activities one may want to perform on the Internet - as well as in ubiquitous computing [74] - with the mandatory guarantee that the different contextual dimensions of the digital persona are respected and the highest level of security applied;
  5. the right to erase identity-related data or personal attributes (the so-called transactional data) at the choice of the individual (Mayer-Schonberger, V, 2009; Whittington, J and Hoofnagle, C J, 2012);
  6. the right to have one's own digital identity protected across European boundaries in a uniform way and not to a varying degree; [75]
  7. the right to access and control metadata (i.e., data on when, where and by whom the data were created), which many Internet companies commonly consider their own records to retain and manipulate (Falkenrath, R, 2012);
  8. finally, and as a consequence, the right to express one's own opinions and ideas without constraints and without fear of the consequences of exposing oneself or of a misrepresentation of one's own identity/digital persona (the chilling effect mentioned above).

4. Accountability principle to be strengthened and new legal-technical normativity to be developed.

As mentioned above, against the background of a 'free Internet', the increasingly dominant power of companies that operate online has been emerging over the last few years, a power that manifests itself in several ways (e.g. commercial, political or technological strategies). [76] One of the symptoms of this power is the way in which companies take decisions on users' data and on users' digital identity. [77] The attitude taken on the occasion of the launch of FB's facial recognition functionality, and its purposes, can be attributed to this phenomenon of increasing power asymmetry. [78] Innovations in digital technologies and emerging new business models are making people's data more (economically) valuable and usable by others, without them realising it, and, at the same time, making them more vulnerable. The Dutch DP authority (Mr Kohnstamm) justifies on this basis the steep new fines provided by the recent Dutch cookies law (implementing the amendments to the European e-Privacy Directive introduced by Directive 2009/136/EC) in order to make sure that privacy will be respected. [79] In this context, it appears that the strengthening and implementation of the accountability principle are of paramount importance for the protection of users' rights. [80]

There is, generally speaking, a need to make the different actors involved in such data processing aware of the risks to individual rights and of the possibility of protecting them: the protection of rights should be 'embedded' in the minds of the different actors, and they should be aware of their role under DP law in order to be able to administer their responsibility in an identity management network. Raab and Koops (2009) note that the design of data protection is fundamentally a political process in which many interests and conflicts converge. In the landscape of privacy actors they describe, there is a wide range of diverse actors with their own responsibilities and many potential interconnections; as a consequence, privacy is subject to a wide array of activities and policy-making that ends up being too fragmented and varied to function well. [81] The range of actors risks, in fact, representing a dilution rather than a pluralism of regulatory activity if there is no director to guide the actors. Therefore, in Raab and Koops' view, a strategy that aggregates the individual activities is needed. If privacy is to be adequately protected, it is thus vital to make - as Raab and Koops (2009) say - some shifts among the 'company of players', which could do better 'with better direction and a better script, able to go beyond the individual characters'. [82]

It is submitted that these considerations are also applicable to digital identity issues. Among the range of actors, the government is, ideally, the one best placed to undertake more responsibility in sustaining privacy, as well as a possible new identity right: to strengthen the presence of privacy/identity in policies, to provide more funds, and to sharpen and coordinate the implementation of legal measures. However, governments as they currently stand are not fully able to perform these tasks, and at the international level, too, their action is still too weak and 'intermittent'. [83] Multi-level Governance (MlG) in the field of privacy protection, which involves public and private actors (such as the Global Network Initiative mentioned in the first case study), [84] could be an effective solution, but it is not possible, for the moment, to expect too much from it; in any event, more research is still needed to understand the relationships between the various parts of the MlG and their contribution to the production of regulatory results. According to Raab and Koops (2009), the main risk with MlG seems in particular to be that of 'losing sight of the contribution and the pre-eminence of the central-states, especially if one were to adopt the unsustainable position that the Internet is ungovernable, not in the least by state activity' (Raab and Koops, 2009). [85]

The other needed modification relates to the accountability of a different category of actors, that of technology developers: for too long the majority of them considered privacy something they did not have to care about. Most technology that is marketed facilitates privacy infringement more than privacy protection, as the trend of technology is to ease data collection and data matching (Koops B-J and Leenes R E, 2005). Following Raab and Koops' reasoning, the solution cannot always consist in finding ex post countervailing technologies that would only benefit these technology developers. Privacy Enhancing Technologies and, more broadly, identity-friendly technologies should become a stronghold in the legal framework, but they can operate successfully only if technology developers become 'rights-soaked', or at least more aware of the need to pay attention to privacy and identity issues in the development process (European Commission, 2010).

As asserted by Whitley and Hosein (2010), policies related to ICT for a modern society need to take into account the nature of the technology involved, which poses specific challenges but can also be a key driver of innovative practices: an identity card, used as a token for access to a service with an age-related restriction, should not need to disclose or even to contain unnecessary information regarding the subject. In other words, 'The verification of age only needs to be dependent on a simple Yes/No assertion linked to the identity of the person about whom the assertion is being made'. [86] That means that not even the date of birth needs to be disclosed. Interestingly, Whitley and Hosein (2010) argue that despite globalisation and the call for standardisation, identity policies require individualised and culturally sensitive solutions. Simplified, user-friendly technical mechanisms (Art. 29 WP, 2011), which may increase transparency and users' control over the disclosure of data, should thus be supported by research and policies, so that multi-organisation identity management (which is becoming the rule in the online world) can also be set up in a compliant way (Olsen, T and Mahler, T, 2007; Reed, C, 2010). [87] Preference should be given to easily understandable, machine-readable privacy policies, which still remain few in number.
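To make the Yes/No assertion mentioned by Whitley and Hosein concrete, the following minimal sketch (in Python; the function name and parameters are purely illustrative assumptions, not an existing scheme) computes the bare age assertion on the identity provider's side, so that only a boolean, and never the date of birth, is disclosed to the relying service.

```python
from datetime import date
from typing import Optional


def assert_over(threshold_years: int, date_of_birth: date,
                today: Optional[date] = None) -> bool:
    """Compute the bare Yes/No assertion: 'is this person over N years old?'

    Only the boolean leaves the identity provider's system; the date of birth
    itself never needs to be disclosed to the relying service.
    """
    today = today or date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= threshold_years


# The relying service receives only the assertion, never the underlying attribute.
print(assert_over(18, date(1990, 5, 17), today=date(2012, 9, 1)))  # True
```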

Data controllers should be held more accountable for breaches of individuals' rights, and the interests of citizens/users should be put first in relation to other competing interests: only in this way can a 'future in which identities are not overshadowed by identifications' be ensured (Lyon, D, 2009). [88] These considerations are valid also, and even more so, with regard to the technologies developed to implement new functionalities (such as facial recognition) on SNS.

It is submitted here that the fundamental rights and values involved in the use of sophisticated new functionalities cannot be left solely to market forces and the self-interested decisions of online companies. They need to be guaranteed by a sound legal-technical framework that, by establishing users' digital-identity rights and obligations for the different actors involved, is able to 'support' a higher level of user control.

Facing the risks arising from online companies' increasing construction of 'comprehensive user profiles', and building on a renewed European legal model, a 'comprehensive response' for the protection of a user's identity should be provided. Possible responses already include privacy by design and privacy by default but, in order to overcome the possible limits of these policies and of the related current legislation (as discussed above), a more comprehensive identity-by-default approach should be considered in the future. [89]

In this new context, a more user-controlled identity management might be developed (Hansen M, 2008). It is important, however, to underline that the burden of personal data and identity protection should not be placed on the user, and that users should not be considered the 'only' subjects responsible and liable for the protection of their personal data and identity. On this point, we share the view of Frank Dumortier (2008), according to which legal discourse must focus not merely on remedies and penalties, which would only excessively and uselessly burden young and old users alike, but 'on shaping an architecture to govern the multi-contextual data flows on the site', i.e., 'a SNS architecture able to prevent any unnecessary interference with privacy and data protection rights'. [90]

SNS users and, in particular, young people seem to use personal data online and to restrict the related access in a multifaceted, differentiated way (according to the contexts and the people involved), showing a desire to express and organise themselves freely but also exhibiting practices to protect their 'private' online places (Andrade, N and Monteleone, S, forthcoming 2013). [91] The users' choices regarding the distribution and appropriateness of these spaces (Nissenbaum, H, 2004) should be respected by SNS operators, starting with the design of their sites. [92] Hull, Lipford and Latulipe (2011) argue that 'contextual gaps' (the use of information in inappropriate or unexpected contexts) are endemic to Facebook and other SNS, but also that they are mainly design issues, ameliorable by an interface design that could increase transparency and control of information flow.

In particular, technical standards of identity protection should be embedded in the technologies, given the impact of their design on their use: 'reversible anonymity' mechanisms, transparency tools and erasure buttons (Bernal, P A, 2011), to mention just a few, should be put in place to allow users to act with awareness of the digital environment in which they operate and of the consequences of their choice to be 'exposed' or not (i.e. of the consequences of the data processing concerning them). [93] For these technical tools to be effective, they should provide users with clear and relevant information (through, for instance, images, signals, etc.), as well as immediate control buttons (tick boxes), instead of inundating users with lengthy text that annoys and slows them down.
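As a purely illustrative sketch (in Python, using an invented schema rather than any existing standard for machine-readable policies), a structured notice of this kind could be rendered as a handful of short signals and a single opt-in tick box, instead of a long legal text.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ProcessingNotice:
    """A machine-readable notice that a client can render as short signals."""
    purpose: str                # e.g. "tag suggestions"
    data_categories: List[str]  # e.g. ["facial template", "photo metadata"]
    retention_days: int
    third_party_sharing: bool
    opt_in_required: bool       # whether a single tick box must be presented


def render_signals(notice: ProcessingNotice) -> List[str]:
    """Turn the structured notice into signals a user can scan at a glance."""
    signals = [
        f"Purpose: {notice.purpose}",
        f"Data used: {', '.join(notice.data_categories)}",
        f"Kept for {notice.retention_days} days",
        "Shared with third parties" if notice.third_party_sharing
        else "Not shared with third parties",
    ]
    if notice.opt_in_required:
        signals.append("[ ] I agree (opt-in)")
    return signals


notice = ProcessingNotice(purpose="tag suggestions", data_categories=["facial template"],
                          retention_days=90, third_party_sharing=False, opt_in_required=True)
for line in render_signals(notice):
    print(line)
```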

It remains to be seen how the law will change as technology moves rapidly, but the law should still regulate new technologies, because they have an impact on the individual and social dimensions of our private life (M Hildebrandt, 2008).

A new, refined legal framework for refined technologies is expected. It might imply the adoption of a more comprehensive approach that addresses not only data protection and privacy rights (stricto sensu) but the digital-identity right as a whole, capable of encompassing all the issues that the current framework does not consider (such as the processing of non-personal data, indirect profiling, the blurring of roles between controllers and data subjects, obstacles to access to metadata, etc.) or does not cover (commercial data processed for security purposes). Moreover, it could help in dealing with an apparent 'paradox' of the Information Society, consisting in data proliferation and voluntary data disclosure on the one hand and the need for users' control on the other (Lusoli W et al, 2012): a comprehensive new approach should, in other words, address the emerging issues of electronic identity.

5. Conclusion

In the previous sections, some of the main challenges posed by the development of digital technologies to fundamental rights were considered.

Starting from the analysis of the Facebook case study - its arbitrary change of privacy default settings and the facial recognition functionality - the current situation of digital rights such as online privacy and data protection was examined.

Given the limitations of the current data protection regime in coping with the new challenges encountered in the digital environment, there is a need for a new approach, able to provide individuals with legal-technical instruments to access and control not necessarily and not only their personal data, but their digital representation as a whole: the different digital profiles created online on the basis of the partial attributes of their personality.

The analysis leads to the claim that not only the current legal framework but also the current legal approach needs to be reviewed in order to adapt to rapid socio-technological changes and to counter the new threats that address the individual in his 'digital identity' as a whole.

After briefly discussing the concept of digital identity and its legal implications, it has been suggested that the codification of a 'right to digital identity' is desirable in order to help users enjoy fully the advantages of information technologies while keeping control of their digital representation. This 'right to digital identity' should receive formal acknowledgement and implementation according to a new legal-technical approach, enabling, on the one hand, technology to develop in conformity with legal requirements and, on the other hand, the law to be flexible and far-sighted in order to adapt to socio-technological changes, yet strict when required to safeguard individuals' rights. If concretised, this right may provide Internet users with some tangible measures and solutions to the problems encountered in the scenarios described. If broadly conceived, this 'digital identity-oriented' approach could give Internet users back control over the content they (or others) place online. The indirect effect of stronger protection of users' digital identity would also be to strengthen and facilitate other rights (e.g. freedom of expression) in relevant circumstances.

REFERENCES

Acquisti A, Gritzalis S, Lambrinoudakis C and di Vicemercati S (2008), Digital Privacy: Theory, Technologies and Practices, Auerbach Publications

Andrade N (2011) 'Data Protection, Privacy and Identity: Distinguishing Concepts and Articulating Rights', in Fischer-Hubner, S, Duquenoy, P, Leenes R, & Zang, G (eds) Privacy and Identity Management for Life: PrimeLife International Summer School, Helsingborg, Sweden, August 2010

Andrade N and Monteleone S (forthcoming 2013), 'Digital Natives and the metamorphosis of Information Society', in Gutwirth S. (ed), European Data Protection: coming of age (London: Springer)

Ariely D (2011), 'How online companies trick you into sharing more', Wired, August 2011, http://www.wired.co.uk/magazine/archive/2011/08/features/you-are-being-gamed?page=all

Art. 29 DP Working Party, Working Party on Police and Justice, 'The future of privacy, Joint Contribution to the consultation of the European Commission on the legal framework for the fundamental right to data protection', 1 December 2009, http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2009/wp168_en.pdf

Art. 29 Working Party (2003), 'Opinion on biometrics', 1 August 2003, http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2003/wp80_en.pdf

Art. 29 Working Party (2010a), 'Opinion on the applicable law' 8/2010, 16 December 2010, http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2010/wp179_en.pdf ;

Art. 29 Working Party (2010b), 'Opinion on behavioural advertising' 2/2010, 22 June 2010, http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2010/wp171_en.pdf )

Article 29 Working Party (2002), 'Opinion of on the use of unique identifiers in telecommunications terminal equipment', 30 May 2002, http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2002/wp58_en.pdf

Article 29 Working Party (2011), 'Opinion on the definition of consent', 15/2011, 13 July 2011, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2011/wp187_en.pdf

Article 29 Working Party (2012a), 'Opinion on facial recognition in online and mobile services', 02/2012, 22 March 2012, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2012/wp192_en.pdf

Article 29 Working Party (2012b), 'Opinion on developments in biometric technologies', 03/2012, 27April 2012

Ausloos, J (2012), 'The 'right to be forgotten'- worth remembering?', Computer Law and Security Review, 28

Author of a blog v. Times [EWHC], 2009, 1358 (QB)

Batchelor, C (2012), 'Privacy: Transatlantic tensions cast a pall over data sharing', Financial Times, 31 May 2012, http://www.ft.com/cms/s/0/b362e84c-a414-11e1-84b1-00144feabdc0.html#axzz1ydcaBbjX

Bernal, P A (2011), 'A Right to Delete?' European Journal of Law and Technology, 2

Bodoni, S, 'Facebook to be probed in EU for facial recognition in photos', 8 June 2011, http://www.businessweek.com/news/2011-06-08/facebook-to-be-probed-in-eu-for-facial-recognition-in-photos.html

Bradshaw, T (2011), 'Facebook addresses privacy issues in overhaul', Financial Times, 24 August 2011

Bradshaw, T (2012), 'Facebook buys Facial recognition group', Financial Times, 18 June 2012, http://www.ft.com/intl/cms/s/0/a27d93dc-a9b0-11e1-9772-00144feabdc0.html#axzz20EjczlUg

Cafaggi, F (2010), 'Private law-making and European integration: where do they meet, when do they conflict?' in Oliver D, Prosser T, Rawlings R, The Regulatory State (Oxford: Oxford University Press)

Camenisch, J, Leenes, R and Sommer, D (2011), Digital Privacy, PRIME - Privacy and Identity Management for Europe (Berlin Heidelberg: Springer)

Daly A (2012) 'The AOL Huffington Post merger and bloggers' rights', European Journal of Law and Technology, Vol. 3, No. 3, 2012

De Hert, P (2007) A right to identity to face the Internet of Things, Unesco lecture of September 2007, http://portal.unesco.org/pv_obj_cache/pv_obj_id_51124C7402EB15FE1F11DF4AF9B56C1477A50400/filename/de+Hert-Paul.pdf .

de Hert, P and Papakonstantinou, V (2012), 'The proposed data protection Regulation replacing Directive 95/46/EC: a sound system for the protection of individuals', Computer Law and Security Review 28

De Vries, K, De Hert P, and Bellanova R (2010) Proportionality overrides unlimited surveillance, CEPS, Liberty and Security in Europe

Deva, S (2007) 'Corporate complicity in Internet censorship in China: who cares for the Global Compact or the Global Online Freedom Act?' George Washington International Law Review, 39

Dumortier, F (2008), 'Facebook and Risks of 'De-contextualization 'of Information', in Gutwirth S, Poullet Y, De Hert P (ed), Data Protection in a profiled world (London: Springer)

EDPS, Opinion on the EU Commission, Communication A comprehensive approach on personal data protection in the European Union (2011/C, 181/01)

Electronic Frontier Foundation (2011), 'Freedom of expression, privacy and anonymity on the Internet', https://www.eff.org/sites/default/files/filenode//UNSpecialRapporteurFOE2011-final_3.pdf

European Commission (2010a), 'Communication on a comprehensive approach to data protection', cit. and EDPS, 'Opinion on the EU Commission, Communication, A comprehensive approach on personal data protection in the European Union' (2011/C 181/01)

European Commission (2010b), Communication from the Commission - A Digital Agenda for Europe. (COM(2010)245), http://ec.europa.eu/information_society/digital-agenda/links/index_en.htm

European Commission (2011), Eurobarometer 359/2011, 'The state of the e-Identity and Data Protection in Europe', http://ec.europa.eu/public_opinion/archives/ebs/ebs_359_en.pdf

European Commission (2011), 'The open internet and net neutrality in Europe',  http://ec.europa.eu/information_society/policy/ecomm/doc/library/communications_reports/netneutrality/comm-19042011.pdf

European Commission (2012), 'Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the movement of such data (General Data Protection Regulation)', 25/01/2012, http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2012:0011:FIN:EN:PD ;

European Commission and the High Representative of the Union for foreign affairs and security policy (2011), Joint Communication 'A partnership for Democracy and shared Prosperity with southern Mediterranean', COM(2011)200, http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2011:0200:FIN:EN:PDF

European Parliament, Resolution on a comprehensive approach on personal data protection in the European Union, 6 July 2011, http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P7-TA-2011-0323+0+DOC+XML+V0//EN ;

European Parliament, Study on 'Consumer behaviour in a digital environment', http://www.europarl.europa.eu/document/activities/cont/201108/20110825ATT25258/20110825ATT25258EN.pdf

European Telecoms Ministers, Granada Ministerial Declaration on the European Digital Agenda. Granada, 19 April 2010: Informal Meeting of Telecommunications and IS Ministers, 2010, http://www.eu2010.es/export/sites/presidencia/comun/descargas/Ministerios/en_declaracion_granada.pdf

Evans, D (2009), 'Online Advertising Industry: Economics, Evolution and Privacy', Journal of Economic Perspectives

Falkenrath, R (2012), 'Google must remember our right to be forgotten', Financial Times, 17 February 2012, http://www.ft.com/intl/cms/s/0/476b9a08-572a-11e1-869b-00144feabdc0.html#axzz1ydcaBbjX

Financial Times and others v. UK, 821/03 [ECHR] 2010

Fundamental Rights Agency's Report, 'Data Protection in the European Union: the role of National Data Protection Authorities (Strengthening the fundamental rights architecture in the EU II)', 7 May 2010, http://fra.europa.eu/fraWebsite/research/publications/publications_per_year/2010/pub_data_protection_en.htm

Hansen M (2008), 'User-controlled identity management: the key to the future of privacy', Int. J. Intellectual Property Management, 4

Hildebrandt M (2007), Profiling and the Identity of the European citizen in Hildebrandt M, Gutwirth S (eds), Profiling the European Citizen. Cross-disciplinary perspectives, (London: Springer)

Hildebrandt M (2008), 'Defining profiling. A new type of Knowledge' in Hildebrandt M and Gutwirth S (eds) Profiling the European citizen (London: Springer)

Goodwin v. UK 17488/90 ECHR [1996]

Hildebrandt, M (2010), 'Law at a Crossroads: Losing the Thread or Regaining Control? The Collapse of Distance in Real Time Computing', International Conference on Tilting Perspectives on Regulating Technologies, December 2008, http://works.bepress.com/mireille_hildebrandt/9

Hildebrandt, M (2012), 'The dawn of a critical transparency right for the profiling era', in Bus J, Crompton, M, Hildebrandt, M and Metakides, G (eds), Digital Enlightenment Yearbook 2012 (Amsterdam: IOS Press)

Hildebrandt, M and Koops, B-J (2010) 'The Challenges of Ambient Law and Legal Protection in the Profiling Era' The Modern Law Review, 73

Hull, G, Lipford, HR and Latulipe C (2011), 'Contextual Gaps: Privacy issues on Facebook', Ethics and Information Technology, 4, http://ssrn.com/abstract=1427546;

Human Rights Act 1998

Jaquet-Chiffelle, D O (2008), Direct and indirect profiling in the light of virtual persons, in Hildebrandt M and Gutwirth S (eds) Profiling the European citizen (London: Springer)

King N and Vegener Jessen P (2010), 'Profiling the mobile customer- Privacy concerns when behavioural advertisers target mobile phones', Computer Law and Security Review, 26

Koops, B-J and Leenes, R E (2005), 'Code and the Slow erosion of privacy', Michigan telecommunications & technology law review 12

Le Métayer, D and Monteleone, S (2009), 'Automated consent through privacy agents: legal requirements and technical architecture', Computer Law and Security Review, 25

Leenes, R E (2008) 'User-centric identity management as an indispensable tool for privacy protection', Int. J. Intellectual property management, 4

Lusoli, W and Compano, R (2010) 'From security vs privacy to identity: an emerging concept for policy design?', Info 6, Emerald G. Publ.Lim

Lusoli, W, Bacigalupo M, Lupianez F, Andrade N, Monteleone S, and Maghiros I (2012), 'Pan-European Survey of Practices and Attitudes and Policy Preferences as regards Personal Identity Data Management', EC-JRC Scientific and Policy Report EUR 25295 EN, http://is.jrc.ec.europa.eu/pages/TFS/eidsurvey.html

Lyon, D (2009), Identifying citizens. ID cards as surveillance, Polity Press

Mayer-Schonberger, V (2009) Delete: the virtue of forgetting in the digital age (Princeton, N J: Princeton University Press);

Moerel, L (2012), 'Transnational corporate self-regulation of data protection', De Brauw Blackstone Westbroek

Monteleone, S (2011) 'Ambient Intelligence and the right to privacy: the challenge of detection technologies', EUI Working Papers

Murray, A (2006), The Regulation of Cyberspace. Control on the online environment, (Oxford: Routledge)

Murray, A (2010), Information Technology Law. The Law and Society, (Oxford: Oxford University Press)

Mustafaraj, E, Metaxas P, Finn S, Monroy-Hernandez A, 'Hiding in Plain Sight: A Tale of Trust and Mistrust inside a Community of Citizens Reporters', 6th International Conference on Weblogs and Social Media (ICWSM), June 2012, http://cs.wellesley.edu/~pmetaxas/mustafaraj_icwsm2012.pdf

Nissenbaum, H (2004), Privacy as contextual integrity, Washington Law Review, 79

Olsen, T and Mahler, T (2007), 'Identity management and data protection law: Risk, responsibility and compliance in Circles of Trust', Computer Law and Security Report, 23(4&5), available at http://dx.doi.org/10.1016/j.clsr.2007.05.009

Out-law.com, 'EU privacy watchdogs say Facebook changes unacceptable', The Register 14 May 2010, http://www.theregister.co.uk/2010/05/14/facebook_privacy_rebuke/print.html

Palmer, M (2011a), 'Regulators probe Facebook's facial recognition', Financial Times 9 June 2011, http://www.ft.com/intl/cms/s/2/ffe3edb4-92c8-11e0-bd88-00144feab49a.html

Palmer, M (2011b),'Hamburg rejects Facebook facial recognition', Financial Times, 2 August 2011, http://www.ft.com/intl/cms/s/0/14007238-bd29-11e0-9d5d-00144feabdc0.html#axzz1gapIEPsv

Poullet, Y (2005), Comment 'Réguler la protection des données? Réflexions sur l' internornormativité, in Liber Amicorum Paul Delnoy (Brussels: Larcier)

Poullet, Y and Dinant, JM (2007), 'Towards new data protection principles in a new ICT environment', IDP, Revista de Internet Derecho y Politica 5/2007, http://www.uoc.edu/idp/5/dt/eng/poullet_dinant.pdf

Prensky, M (2001), 'Digital Natives, Digital Immigrants', On The Horizon. 6 MCB University Press

Raab, C and Koops, B-J (2009), Privacy actors, performances and the future of privacy protections, in Gutwirth S (ed), Reinventing data protection? (London: Springer)

Reed, C (2010), 'Information 'Ownership' in the Cloud' Queen Mary School of Law Legal Studies Research Paper No. 45/2010, http://ssrn.com/abstract=1562461

Reidenberg, J R (2000) 'Resolving Conflicting International Data Privacy Rules in Cyberspace', Stanford Law Review 52

Report of the Special Rapporteur, Frank La Rue (2011), on the promotion and protection of the right to freedom of opinion and expression, A/HRC/17/27, http://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/Annual.aspx

Report of the UN Special Rapporteur, Martin Scheinin (2009), on the promotion and protection of HR and fundamental freedoms while countering terrorism A/HRC/13/3;

Rodotà, S (2006), La vita e le regole. Tra diritto e non diritto (Milano: Feltrinelli)

Rouvroy, A (2008) 'Privacy, Data Protection, and the Unprecedented Challenges of Ambient Intelligence', Studies in Ethics, Law, and Technology, 108

Rouvroy, A (2008), 'Réinventer l'art d'oublier et de se faire oublier dans la société de l'information?', La sécurité de l'individu numérisé. Réflexions prospectives et internationales. Ed Stéphanie Lacour. Paris: L'Harmattan, http://works.bepress.com/antoinette_rouvroy/5

Rouvroy, A (2010), 'Governamentality in an Age of Autonomic Computing: Technology, Virtuality and Utopia', in M Hildebrandt, A Rouvroy (eds.), Autonomic Computing and the Transformation of Human Agency. Philosophers of Law meet Philosophers of Technology, (Oxford: Routledge)

S and Marper v United Kingdom 30562/04 [2008] ECHR 1581 (4 December 2008)

Sartor, G and Viola de Azevedo Cunha, M, (2010). 'The Italian Google-Case: Privacy, Freedom of Speech and Responsibility for User-Generated Contents', European University Institute, Working Papers

Schwartz, P M (2004), 'Property, privacy, and personal data', Harvard Law Review, 117

Sengupta, S, 'Facebook likes become ads', The New York Times, 1 June 2012, http://finance.yahoo.com/news/facebook-likes-become-ads-101815773.html

Singer, N (2011) 'Just give me the right to be forgotten', The New York Times, 2011, 20 August 2011, http://www.nytimes.com/2011/08/21/business/in-personal-data-a-fight-for-the-right-to-be-forgotten.html )

Singer, N (2011), 'Face recognition makes the leap from sci-fi', The New York Times, 2011, 12 November 2011, http://www.nytimes.com/2011/11/13/business/face-recognition-moves-from-sci-fi-to-social-media.html

Solove D (2007), 'The future of reputation. Gossip, rumor and privacy on the Internet' (New Haven: Yale University Press)

Solove, D (2006), 'A Tale of Two Bloggers: Free Speech and Privacy in the Blogosphere'. Washington University Law Review, 84

Stevens, T, Elliot, J, Hoikkanen, A, Maghiros, I and Lusoli, W (2010), 'The State of the Electronic Identity Market', JRC-IPTS Scientific and Technical Reports, http://ssrn.com/abstract=1708884

Sullivan, C (2011) 'Digital Identity, The role and legal nature of digital identity in commercial transactions', University of Adelaide Press, http://ssrn.com/abstract=1803920

Sunstein, C (2001), Republic.com (Princeton: Princeton University Press)

Tene, O (2011), 'Privacy: The New Generations', International Data Privacy Law 1

Tene, O (2012), 'Me, Myself and I: Aggregated and Disaggregated Identities on Social Networking Services'. Journal of International Commercial Law and Technology, http://ssrn.com/abstract=1959792

Vega, T (2011), 'Do not track' privacy bill appears in Congress', The New York Times, 6 May 2011, http://mediadecoder.blogs.nytimes.com/2011/05/06/do-not-track-privacy-bill-appears-in-congress/

White, A (2012), 'German regulator suspends Facebook face-recognition probe', Businessweek, 8 June 2012, http://www.businessweek.com/news/2012-06-08/german-regulator-suspends-facebook-facial-recognition-probe

Whitley, E A and Hosein, G (2010), Global Challenges for Identity Policies (Basingstoke: Palgrave Macmillan)

Whittington, J and Hoofnagle, C J (2012), 'Unpacking Privacy's Price', North Carolina Law Review, 90

Wong, R (2010), 'The challenges facing social network', International Law and Management Review, vol. 6, http://works.bepress.com/rebecca_wong/10

Wright, D (2009). Privacy, trust and policy-making: challenges and responses. Computer Law and Security Review, 25

Zarsky, T (2004), 'Thinking Outside the Box: Considering Transparency, Anonymity, and Pseudonymity as Overall Solutions to the Problems of Information Privacy in the Internet Society', U. Miami Law Review 58

Zarsky, T (2008), 'Law and Online Social Networks: Mapping the Challenges and Promises of User-generated Information Flows', Fordham Intellectual Property, Media & Entertainment Law Journal, 741



[1] A version of this paper was presented at the Human Rights in the Digital Era conference at the University of Leeds on 16 September 2011, along with Angela Daly.

[2] Shara Monteleone is Scientific officer at JRC-Institute for Prospective Technological Studies, European Commission, Seville, Spain. PhD (University of Florence); LLM (European University Institute and University of Florence). Disclaimer: the views expressed are purely those of the writer and may not in any circumstances be regarded as stating an official position of the European Commission. Acknowledgments: the author is grateful to Angela Daly for her friendship and patience and to the anonymous reviewers for their helpful comments and suggestions.

[3] Eurobarometer 359 (2011). See the further analysis and interpretation of this largest survey ever conducted in Europe in Lusoli W et al (2012).

[4] See Article 29 Working Party, 'Opinion on the definition of consent' (2011); 'Opinion on facial recognition in online and mobile services' (2012); 'Opinion on developments in biometric technologies' (2012).

[5] See the European Commission Proposal for a 'General Data Protection Regulation' (2012), article 23.

[6] See, for Europe, Art 29 WP (2009), 'Opinion on social networking', and, for the U.S., EPIC and other privacy groups, which have made complaints against FB already in 2009 and 2010 to the FTC.

[7] http://epic.org/privacy/facebook/facebook_and_facial_recognitio.html ; http://www.guardian.co.uk/technology/2011/aug/03/facebook-facial-recognition-privacy-germany .

[8] For a definition of facial recognition see Art 29 WP, 'Opinion on facial recognition in online and mobile service' (2012): 'Facial recognition is the automatic processing of digital images which contain the faces of individuals for the purpose of identification, authentication/verification or categorisation of those individuals' […] 'Facial recognition constitutes an automated form of processing of personal data, including biometric data'.

[9] Art. 29 WP 'Opinion on facial recognition' (2012), p. 6

[10] Art 29 WP 'Opinion on the definition of consent'(2011), p. 24

[11] After the debate at European level on the need to modify the current confusing term 'unambiguous consent', and urged by the Art 29 WP Opinion on the definition of consent, the notion of 'explicit consent' has been introduced in the recent Proposal for a General Data Protection Regulation (article5, Recital 22), with the aim to reduce national divergences.

[12] The Art 29 WP's 'Opinion on the definition of consent' represents an interesting exercise of policy-making that adopts an integrated legal-technical approach: the recommendations are supported by practical (often technical) examples based on national best practice.

[13] See, inter alia, Rouvroy A (2008); Ausloos J (2012). Ausloos notes that the right needs better definition to avoid any negative consequences and proposes a right to be forgotten (as a way to make the consent more effective) limited to data processing situations where the individual has given her consent, combined with a public interest exception in order to allow individuals more effective control over their data.

[14] On the evolution in the interpretation of this right see Bernal P A (2011), who develops the concept of the 'right to delete': this would work extending data access rights (as a boost for the implementation of 'privacy by design'), forcing those holding data to justify why they are holding it and encouraging the development of business models that do not rely on holding of so much personal data (i.e., the reduction of data as the result of a change in business models).

[15] A clarification and the explicit codification of the right to be forgotten have been urged in Europe for years, and this right now receives formal legal acknowledgment in the Proposal for a new Regulation, encompassing a right to obtain the erasure of one's data. A meaningful role in this process has been played by the European Parliament: see EP Resolution on a comprehensive approach on personal data protection in the European Union, of 6 July 2011, http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P7-TA-2011-0323+0+DOC+XML+V0//EN .

[16] An important step towards a regime that empowers U.S. consumers seems to have been taken recently (May 2011) by the FTC, which endorsed the principle that companies should collect only the data they need about people and keep it no longer than necessary (www.thenewyorktimes.com/2011/08/21/business/in-personal-data-a fight-for the right to/). It appears that the FTC's initiative to introduce more limits on data use came in parallel with the introduction in the U.S. of the so-called 'Do Not Track Kids Act' in May 2011, which would include an 'eraser button' provision for children and parents and would require companies, when feasible, to allow users to delete publicly available personal data about minors from a website; see Singer N (2011).

[17] Falkenrath, R (2012), 'Google must remember our right to be forgotten', Financial Times, 17 February 2012, http://www.ft.com/intl/cms/s/0/476b9a08-572a-11e1-869b-00144feabdc0.html#axzz1ydcaBbjX .

[18] The complaint is available at http://epic.org/privacy/facebook/EPIC_FB_FR_FTC_Complaint_06_10_11.pdf

[19] See the Hamburg authority's formal letter of 2 August 2011, with which it requested that FB disable its facial recognition software and delete any previously stored data; press release available at http://www.pcmag.com/article2/0,2817,2390440,00.asp.

[20] O'Brien K J, Germans Reopen Investigation on Facebook Privacy. New York Times, 15 August 2012.

[21] A relevant change that occurred during the publication process of this paper with regard to the facial recognition functionality must be reported. As a consequence of the release of the Art 29 WP's Opinion on biometrics, of the Hamburg commissioner's investigation and especially of the recommendations made in the Irish Audit of FB, the facial recognition feature has been turned off in the European Union for new users, and templates for existing users will likely be deleted by 15 October 2012, http://dataprotection.ie/viewdoc.asp?Docid=1233&Catid=66&StartDate=1+January+2012&m=n . The commitment of FB to best practice in data protection compliance is a positive signal from the social network site; however, it leaves unsolved many of the issues reported in the above text, for instance as regards non-European users or European users travelling outside Europe, or simply those having a non-European 'networking'.

[22] See the complaint lodged by EPIC before the FTC of U.S. against Facebook, the 5th of May 2010, http://epic.org/privacy/facebook/EPIC_FTC_FB_Complaint.pdf.

[23] The main purpose seems to be advertising. The 'Like' function not only allows Facebook (or rather its algorithms) to know what its members 'like', but also converts a user's empathy for something into an advertisement displayed to the user's friends: this is the basis of the so-called 'sponsored stories', a lucrative tool for the SNS, for which companies such as Amazon have paid in order to generate these automated ads every time a user 'likes' their products. See Sengupta, S (2012).

[24] On this issue, it is worth mentioning the innovative interface adopted by a German technology news site (Heise), which changed its FB 'Like' button to a two-click format capable of disabling the automatic tracking of page visits by third-party social network sites: instead of the usual FB button, a greyed-out button appears, so that, if users want to share or 'like' content, they must click on this additional button to enable the original FB one. Given that the SNS can usually track and know the web pages visited by its members, whether or not they click its 'Like' button, with this 'double door' technique the behavioural tracking does not work, or is at least limited. This example is available at http://yro.slashdot.org/story/11/09/03/0115241/Heises-Two-Clicks-For-More-Privacy-vs-Facebook .

[25] The main reason lies in the attempt to stop the use of the FB plug-in to track users, at least for two years; the suggestion/preference shown through the 'Like' button would entail a map of FB members' 'social activities', viewable by the same websites and by FB. There are, in the German authority's view, enough objectionable factors in this FB plug-in that conflict with European and national data protection legislation and that warrant such a restrictive measure. More information is available at: http://www.ibtimes.com/articles/201039/20110820/germany-imposes-ban-facebook-like-institutions-state-schleswig-holstein-shut-down-plug-ins-protectio.htm .

[26] On the different kinds of tracking technologies (among which cookies are most commonly used by online companies, for instance, for behavioural advertising) see the Art 29 WP 'Opinion on behavioural advertising' (2010b): SNS use targeting technology for online behavioural advertising, allowing SNS users to be tagged through their interests.

[27] See Rouvroy A (2008). Accordingly, the right to privacy, with its components of self-determination and decisional autonomy, became an instrument to ensure the individual's capability to exercise other fundamental rights and values of democratic society: 'Capacity for both reflexive autonomy allowing to resist social pressures to conform with dominant drifts, and for deliberative abilities allowing participation in deliberative processes'.

[28] On users' identity protection attitudes, see Lusoli, W, Bacigalupo M, Lupianez F, Andrade N, Monteleone S, and Maghiros I (2012) 113, from which it emerges that 'protection behaviour rests on passive use of existing tools rather than on active strategies of information control. This may also imply that where these tools are not available, or are cumbersome to use for the average user, people are unlikely to take proper care of their personal data online'.

[29] As emerges from Eurobarometer 359/2011, users of sharing sites or SNS and users of shopping sites do not necessarily coincide but are, on the contrary, often quite different in terms of age, occupation and geographical provenance. FB now belongs to both categories, though for the moment it works more as a market window for product purchases.

[30] What the nature of these obligations should be (state, co-regulation or self-regulation) is another matter; on the topic see Cafaggi F (2010).

[31] On the emerging attitudes and behaviour with regard to the protection of personal data of the so-called Digital Natives see the Eurobarometer (2011, p. 4); see also the European projects PRACTIS (http://www.practis.org/) and EU Kids Online (http://www2.lse.ac.uk/media@lse/research/EUKidsOnline/Home.aspx). On the definition of Digital Natives, see Prensky M (2001). For a discussion on the need to rethink the European legal system according to the needs and perceptions of Digital Natives, see Andrade N and Monteleone S (forthcoming 2013).

[32] Schwartz P M (2004); for a critical analysis see Le Métayer D and Monteleone S (2009).

[33] A meaningful example of self-regulation in the context of SNS is represented by the Safer Social Networking Principles, which have been developed by SNS providers in consultation with the European Commission, available at ec.europa.eu/information_society/activities/social_networking/docs/sn_principles.pdf.

[34] See also the 'e-reputation' issues as interpreted by the French DP Authority, CNIL, at:  http://www.cnil.fr/dossiers/internet-telecoms/fiches-pratiques/article/les-entreprises-de-reputation-en-questions/.

[35] The European Commission Proposal for the new DP Regulation encompasses a modification of the definitions of personal data and data subject (see art 2 of the Proposal), but it will probably remain unenforceable as regards the aforementioned processing of non-personal data, which often constitutes the basis of dangerous profiling practices.

[36] Hildebrandt, M (2012), 'The Dawn of a Critical Transparency Right for the Profiling Era', in Bus J, Crompton M, Hildebrandt M, Metakides G (eds), Digital Enlightenment Yearbook 2012 (Amsterdam: IOS Press), 51-52. She notes that the current data protection framework lacks adequate protection with regard to the application of group profiling: the fact that such profiles are generally protected by means of trade secret or intellectual property turns the legal right of access to the logic of processing into an empty shell. The draft Regulation introduces relevant changes in this regard, obliging controllers to inform users about the envisaged effects of such processing, including automated decision-making; however, it will provide effective remedies only if a proper technological and organisational infrastructure, enabling compliance with the enhanced transparency obligations, is put in place by industry.

[37]

[38] Report of the Special Rapporteur La Rue (2011 p. 22): 'The Special Rapporteur is concerned that, while users can enjoy relative anonymity on the Internet, States and private actors have access to technology to monitor and collect information about individuals' communications and activities on the Internet. Such practices can constitute a violation of Internet users' right to privacy, and undermine people's confidence and security on the Internet, thus impeding the free flow of information and ideas online'.

[39] See article 17 of the ICCPR: 'No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence'.

[40] See also the Report of the UN Special Rapporteur, Martin Scheinin (2009):'The Special Rapporteur recommends that any interference with the right to privacy, family, home or correspondence by an intelligence agency should be authorized by provisions of law that are particularly precise, proportionate to the security threat, and offer effective guarantees against abuse'.

[41] Mention should be made of the current debate on the so-called cookies laws, the national implementations of European Directive 2009/136/EC that entered into force in May 2012 and introduced more restrictive measures on the use of cookies by online service providers without users' consent; the number of European countries that have already implemented the directive is still limited (UK, Netherlands, France, Italy).

[42] For instance, there are several cases in which the European Court of Human Rights (ECtHR) has held that there was a breach of the right to freedom of expression, and at the same time of privacy, where disclosure orders concerning anonymous sources were addressed to a journalist in the absence of an overriding interest in knowing the identity of the source of information, and given that such orders would have a detrimental impact not only on the identity of the source in question, but also on the journalist and the related newspaper (Goodwin v. UK [ECHR] 1996; Financial Times and others v. UK, 821/03, [ECHR] 2010). However, these cases mainly relate to a specific public activity, that of journalism. In the UK NightJack case (Author of a blog v. Times [EWHC], 2009), by contrast, though the claimant was not a journalist, his activity as an anonymous blogger was considered a public activity and for this reason his claim for anonymity protection was rejected.

[43] Solove, D (2007), The future of reputation. Gossip, rumor and privacy on the Internet (New Haven: Yale University Press) 146.

[44] The recent Proposal of the European Commission for a General Regulation on Data Protection should be applicable to all companies that do business in Europe (including SNS). Although many of the new provisions appear to be innovative (such as the codification of the right to be forgotten, the right to data portability, the general obligation to notify data breaches and to provide a DP Impact Assessment), doubts about the enforceability of several norms still persist (e.g., of the right to be forgotten and to erasure as regards the cases of 'hidden' collection of data). However, a legal instrument that is contemplated in the forthcoming Regulation and which probably requires closer attention in the near future seems to be the Binding Corporate Rules (BCR) for data protection, an example of co-regulation that would help in the harmonisation and in the implementation of the legal provisions.

[45] For a recent study on anonymity in social network communities and its relevance for 'citizens-reporters' in particular contexts see Mustafaraj E, Metaxas P, Finn S and Monroy-Hernandez A (2012). For the first time the analysis of the practices of a community of Twitter citizens reporters in a life-threatening environment is presented: 'In a time when social networking platforms such as Facebook and Google+ are pushing to force users to assume their real-life identities in the Web, we think that it is important to provide examples of communities of citizens for which maintaining their anonymity inside such networks is essential'. The paper is interesting also for the identity perspective. Given that being anonymous leaves the other members of community in the dilemma of who to trust, the paper reveals how, inside a community, anonymous individuals can establish recognisable identities that they can sustain over time: 'Such anonymous individuals can become trustworthy if their efforts to serve the interests of the community remain constant over time'.

[46] There have been many criticisms of Google's recent decision to identify its users across all of its services and to integrate their data across those services (YouTube, Gmail, Picasa, etc.): see Falkenrath R (2012).

[47] In particular, in Hildebrandt's (2008) view, personal identity is a mix of both ipse and idem identity: ipse (self) identity is the irreducible sense of self of a human person, the reflexive consciousness of oneself; idem (sameness) identity is the objectification of the self that stems from comparison with others (e.g. social, cultural, legal identity). Both illustrate the relationship between the self and others, through which personal identity develops: we constantly compare ourselves with others, resembling the other (through identification) and differentiating ourselves (in a perspective of individualisation). Personal identity cannot flourish when ipse identity-building is deprived of the idem identity input: people will refrain from acting for fear of being negatively profiled.

[48] De Hert, P, A right to identity to face the Internet of Things, UNESCO lecture, September 2007, 21, http://portal.unesco.org/pv_obj_cache/pv_obj_id_51124C7402EB15FE1F11DF4AF9B56C1477A50400/filename/de+Hert-Paul.pdf .

[49] See, inter alia, Leenes R E (2008); Rodotà S (2006); Andrade N (2011).

[50] In non-distributive group profiling, not all members share all the attributes of the group profile, but a person who is ascribed to one of these profiles is said, statistically, to be a person who will display certain behaviour (whether we are dealing with marketing preferences or criminal attitudes): in this sense they are called predictive profiles. See D O Jaquet-Chiffelle (2008). Accordingly, indirect profiling (e.g. where the profile applied to a user is based on other users, as with Amazon's personalised offers) is less reliable than direct profiling, since it is based on generalisation, with the consequence that it may be applied to subjects who do not share the same attributes.

[51] Particular threats seem to derive from the data-mining techniques used to create user profiles. See Hildebrandt M (2008), who defines data mining as 'the procedure by which large databases are mined by means of algorithms for patterns of correlations between data […]. What they provide is a kind of prediction, based on past behaviour; in this sense profiling is an inductive way to generate knowledge'. Given the possible types of knowledge that profiles generate and the possible uses that can be made of this knowledge, there is increasing interest in data mining, as 'knowledge is power'. Automated profiles (such as group profiles, used to select people as members of a group with some features in common) could lead to the discriminatory classification of people (and the exclusion of others), against which anti-discrimination law (e.g. Art 14 ECHR) may not suffice, especially in cases of indirect discrimination (i.e. derived from a place of residence, meal preferences, etc.), which is not difficult in SNS contexts. More generally, the application of a non-distributive group profile to a member of a group could bring about the wrongful attribution to him/her of product preferences (as well as of suspect attitudes).
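A minimal, purely illustrative sketch of this misattribution mechanism (not drawn from the works cited above; the data and attribute names are invented) might look as follows in Python: the 'profile' is simply the attribute value held by the statistical majority of observed group members, and it is then applied wholesale to a newly ascribed individual who does not in fact share it.

    # Hypothetical illustration of non-distributive group profiling.
    from collections import Counter

    # Observed members of a group (attribute names invented for illustration).
    group_members = [
        {"age_band": "18-25", "buys_fast_food": True},
        {"age_band": "18-25", "buys_fast_food": True},
        {"age_band": "18-25", "buys_fast_food": False},
    ]

    def infer_group_profile(members, attribute):
        """Attach to the group the attribute value held by the statistical majority."""
        counts = Counter(m[attribute] for m in members)
        return counts.most_common(1)[0][0]

    # The group profile predicts: members of this group buy fast food.
    profile_value = infer_group_profile(group_members, "buys_fast_food")

    # A new user is ascribed to the group only by age band...
    new_user = {"age_band": "18-25", "buys_fast_food": False}

    # ...and the group attribute is (wrongly) attributed to her.
    print("profile prediction:", profile_value, "| actual:", new_user["buys_fast_food"])

The point of the sketch is only that the prediction is derived statistically from other members of the group, so it can be wrong for any given individual to whom it is applied, which is precisely the risk of misattribution noted above.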

[52] Though a reform of the European DP Directive is already on-going (the Proposal for a DP Regulation), aimed at updating several norms in accordance with technological developments (and including a specific norm on profiling), its vision does not seem able to cover many of the aforementioned issues of non-distributive group profiles based on non-personal data, which are relevant for the user's digital identity.

[53] See Ariely D (2011): '[FB's] Walls are basically a storefront window of the self'.

[54] On profiling issues see also Hildebrandt M and Koops B-J (2010).

[55] See inter alia King N, Vegener Jessen P (2010); Art 29 WP (2010).

[56] On the increasing merging of private and public purposes in the processing of data by different actors (commercial companies and law enforcement authorities), and on the increased risks for the privacy and identity of individuals, see the example of Passenger Name Records (PNR), processed for security purposes though collected by airline companies, in de Hert, P and Papakonstantinou, V (2012) and Monteleone, S (2011).

[57] See the definition of biometric data given by the Art 29 WP (2012b, p. 3): 'biological properties, behavioural aspects, physiological characteristics, living traits or repeatable actions where those features and/or actions are both unique to that individual and measurable, even if the patterns used in practice to technically measure them involve a certain degree of probability. Biometric data changes irrevocably the relation between body and identity, because they make the characteristics of the human body 'machine-readable' and subject to further use'.

[58] During the summer of 2011, the launch in the U.S. of an iPhone-based facial recognition system available for police identification attracted much criticism: it is not difficult to imagine the invasive nature of this measure if linked with other similar technical functionalities available online, including the automatic tagging system introduced by Facebook ( http://blogs.wsj.com/digits/2011/07/13/how-a-new-police-tool-for-face-recognition-works/ ; http://www.repubblica.it/tecnologia/2011/07/13/news/foto_fedina_penale-19084136/ ).

[59] In the author's view, the concept is connected to but different from that of identity by design (Lusoli W and Compano R, 2010). Like privacy by design (as distinct from privacy by default), identity by design relates more to the development of technical standards, while identity by default addresses the issue of protecting and respecting the digital persona chosen and controlled by the individual: it is envisaged here as a matter of user empowerment in the decision-making process regarding the representation of her digital identity (supported and accomplished by identity-by-design mechanisms).

[60] See Olsen T and Mahler T (2007, p. 7), who note that new developments in identity management systems are driven by actors with different interests in mind: at one end of the spectrum are the business-driven schemes (which often neglect the interest of individual users) and at the other end of the spectrum is user-centric identity management, which seeks to raise the degree of control the user has over his digital identity and to enable the user to decide autonomously about the disclosure of his personal data.

[61] See the Granada Declaration and the Digital Agenda for Europe, in which eID is seen as central for the development of the Single Market: EU Telecoms Ministers, Granada Declaration (2010); European Commission (2010b).

[62] This is of prime importance for the European Single Market and its sustainability: see Stevens, T, Elliot, J, Hoikkanen, A, Maghiros, I and Lusoli, W (2010). As emerged from the Eurobarometer (2011), uncertainty about consumer rights caused 44% of consumers to abstain from participating in e-commerce, and in particular in cross-border e-commerce. Among consumers' concerns is the lack of harmonised data protection legislation (90% find it important that data protection and privacy rights are the same in all EU countries). Based on these figures, a recent study commissioned by the European Parliament (2011) suggests 1) improving consumers' awareness of current consumer protection and 2) increasing the harmonisation of the legal framework in order to strengthen trust in online transactions.

[63] According to Lusoli W, Bacigalupo M, Lupianez F, Andrade N, Monteleone S and Maghiros I (2012, p. 67), for instance, EU users' perception of the risks connected to SNS disclosure is highest with regard to the use of their information without their knowledge, followed by fraud and by the risk of information being shared by third parties.

[64] According to Olsen and Mahler this would occur at no fewer than three stages. First, the use of information technologies in the design of identity management systems and the adoption of identifiers (especially unique identifiers) can reduce users' control over personal data (Art 29 WP, 2002); second, collaboration among different organisations makes the already existing compliance issues with data protection principles (e.g. purpose limitation, responsibility in case of breach, etc.) even more complex; third, interaction with the end-user becomes an essential element of identity management, as the user's trust in the service itself may depend on this aspect.

[65] This is what the Art 29 WP stressed in its recent Opinion on the definition of consent (2011). The new Proposal for Reform introduces a slightly new wording ('explicit consent'), though it is doubtful that this will be effective. The instrument of consent itself has, in fact, already shown its shortcomings, especially in view of new digital technologies, ubiquitous computing and their use by a new generation of users, in relation to whom it risks working as a burdensome, rather than effective, instrument of protection.

[66] The adoption of multinational binding corporate rules ( http://ec.europa.eu/justice/data-protection/document/international-transfers/binding-corporate-rules/index_en.htm ) regarding shared technical solutions could also have benefits in terms of effectiveness, though proper oversight of these rules (by each state or by a supranational authority) should be ensured.

[67] For instance, with regard to information obligations, interoperable technical tools would offer better and automated information to users, as well as better interaction with the user, increasing the user's awareness of and control over her personal data, her meta-data (as stressed by Olsen and Mahler, 2007) and, ultimately, her digital identities. This would be particularly relevant for future cross-border and interoperable eID systems, the achievement of which is one of the objectives of the new Digital Agenda for Europe. EU-funded projects (like STORK, www.eid-stork.eu/) aim at designing interoperable systems across EU borders.

[68] See note 59 above on the distinction between identity by default and identity by design (Lusoli W and Compano R, 2010).

[69] See the reflections on the relevance of metadata for Internet companies' empowerment over users' data in R Falkenrath, 'Google must remember our right to be forgotten', Financial Times, 17 February 2012, http://www.ft.com/intl/cms/s/0/476b9a08-572a-11e1-869b-00144feabdc0.html#axzz1ydcaBbjX .

[70] Wong, R (2010), 'The challenges facing social network', International Law and Management Review, vol. 6, 152, http://works.bepress.com/rebecca_wong/10.

[71] See Dumortier, F (2008, p.119), who explored the threats of 'de-contextualisation' of data perpetrated by Facebook and of the loss of control 'on a partial projection of someone's identity, which is extremely contextual and relation'.

[72] On the different but related issue of eID market asymmetry, generated by the divergent utility functions of users and service providers in relation to eID data (the latter being able to extract economic value from users' data), see Lusoli W and Compano R (2010). As a consequence, users would also have a passive role in the market for the value generated by their own identity data.

[73] See on this aspect Tene O (2011). The lack of clarity on the precise role and responsibility of these actors may lead to a rigid application of the data protection rules, as seems to have occurred in some national courts, as noted by Sartor, G and Viola de Azevedo Cunha, M (2010).

[74] See, among others, E Aarts & S Marzano (2003), The New Everyday: Views on Ambient Intelligence, Rotterdam: 010 Publishers; D Wright, S Gutwirth et al. (2008), Safeguards in a World of Ambient Intelligence, London: Springer; D Le Metayer & S Monteleone (2009), 'Automated consent through privacy agents: legal requirements and technical architecture', Computer Law and Security Review, 25 (2), 136-144.

[75] See Lusoli W and Compano R (2010, p. 87), who note that the current DP legislation dates from well before the mass use of the Internet and contains many rules aimed at a 'static' Internet: while the existing legislation does not deal with identity per se, its application has meant the regulation of specific aspects of one's identity (e.g. health), with the consequence that citizens in different countries have their identity rights protected to varying degrees.

[76] On the ethical and legal issues related to the use and commodification of user-generated content by for-profit online intermediaries, see Daly, A (2012).

[77] The famous image of the two dogs in front of a computer, interpreted differently by the New Yorker cartoon (one dog says to the other: 'On the Internet, no one knows you are a dog') and by Toles' cartoon (in which the computer's screen belies the dog's belief, displaying biographical details, food preferences and websites visited by the dog, though not its name), well represents the changes that have occurred on the Internet due to modern profiling practices and the lack of awareness that they exist (Leenes R E, 2010; Camenisch J et al, 2011).

[78] As recently reported by many newspapers (Financial Times, 27 July 2011), FB and telecoms companies such as Vodafone have started to team up, planning to create a special smartphone that would enhance FB functionalities, inevitably targeting younger customers ('F-commerce'). It will be an FB phone, billed as the first FB-centric device for pre-pay customers. A shift can be noted from the perspective of 'user-centric devices', as urged for years by many scholars and by European law-makers (European Commission, 2010a), to the current perspective of 'company-centric devices'.

[79] The new Dutch law, for example, will provide the possibility of penalising big companies such as Google or FB with fines of several million euro (info available at http://www.rnw.nl/english/article/new-dutch-law-deter-privacy-breaches). The introduction of more severe and dissuasive sanctions, including criminal sanctions, appears to be one of the recurring themes of the new comprehensive approach to DP, as also testified by the EP's resolution of 6 July 2011: 'the Parliament considers of utmost importance that data subjects' rights are enforceable and…calls on the Commission to provide in its legislative proposal for severe and dissuasive sanctions, including criminal sanctions'.

[80] See European Commission (2010), which considers the implementation of the accountability principle as one of the tools for strengthening individual rights.

[81] Raab and Koops (2009) enumerate as actors governments, international organisations, legislators, academics, activists, the media, and citizens and consumers themselves. What they say about current national and supranational laws is interesting: many laws are drafted that are privacy-unfriendly, not only because legislators seem to pay less attention to privacy and DP as compared to other interests (e.g. national security), but also because privacy and DP legislation is of limited effect in regulating technological changes that have privacy (and identity) implications, especially on the Internet and in global information flows. See also Koops B-J and Leenes R E (2005).

[82] This consideration leads to the question of the extent to which DP Authorities have effective and 'landscape' control over the IT businesses and whether it would not be necessary to empower them (or a future single European Authority) in their supervisory tasks. See the Fundamental Rights Agency's Report (2010).

[83] In some countries, privacy and freedom of expression are threatened by authoritarian governments (China, Iran), of which some IT businesses have been deemed accomplices: see Deva S (2007).

[84] The Global Network Initiative (available at http://www.globalnetworkinitiative.org/faq/index.php) has already been mentioned in the first part of this paper. Civil society, NGOs, industry, academia and private companies such as Google, Yahoo! and Microsoft participate in this initiative. A sector in which resort to transnational regulation (of a public/private nature) appears particularly meaningful, though slow-growing, is precisely that of Information and Communication Technologies (ICT) and media, due to the transnational nature of the data flows and services that national legal orders need to face. See Coen, D (2008). One of the main limits identified is the 'voluntary basis' of initiatives similar to the GNI, but this aspect would not necessarily entail a lack of binding force: see Cafaggi F (2010), who points out that better coordination mechanisms between private and public law-making across multiple levels are needed, together with a new legal framework that increases the legitimacy of private law-making and avoids distortions in the distribution of law-making power. On the role of transnational corporate self-regulation in the field of data protection, see Moerel L (2012).

[85] See moreover Reidenberg, J R (2000), who stresses that the normative governance role of information privacy rules will remain structurally divergent in the international landscape (which does not include European-style political integration objectives, i.e. the social-protective perspective); these divergences make international cooperation imperative for effective data protection, especially in cyberspace. Instability and uncertainty in data protection would be dangerous to international data flows and the development of the online community.

[86]

[87] According to the results of the PRACTIS project, the main threats to privacy induced by Internet companies relate to the lack of transparency towards users; we share the policy implications of these findings, with regard to the claim for a role for States in regulating privacy by design for businesses, for instance by 'imposing minimal standards on services and products'. See PRACTIS Deliverable D3.4 final, 6 ( http://www.practis.org/).

[88] Strengthening the role of organisations and controllers according to a clearer accountability principle is envisaged by the EDPS as a necessary condition of the future DP legal framework (data protection measures in business processes): see EDPS (2011). For instance, an identity-data breach notification could be envisaged for SNS, similar to that currently provided only for the telecommunications sector.

[89] Since, as emerges from the Eurobarometer (2011), private credentials are increasingly used more than public (government-issued) credentials, a practical application of this comprehensive identity approach could be, for instance, the choice of a disclosure system based on third-party credentials, rather than on the direct disclosure of credit-related information, and other ways to connect the virtual identity with the real one; see on this point Lusoli, W et al (2012).

[90] Dumortier, F, 2008, p. 136.

[91] As emerged from the PRACTIS project Deliverable D3.4 final, 6 (http://www.practis.org/) and from the Eurobarometer (2011, p. 95), many Internet and SNS applications are designed in a way that does not respect the personal choices made by users for privacy reasons. Moreover, where possibilities to control privacy are available, young people use them.

[92] See Nissenbaum H (2004), who talks about contextual norms of appropriateness and of distribution of information that guide people's behaviour in different contexts.

[93] On the concepts of 'Transparency Enhancing Technologies' (allowing citizens to anticipate how they will be profiled and the consequences of this) and of 'Legal Transparency by Design', see M. Hildebrandt (2012).

