Friday, April 10, 2020

Comments on the draft Personal Data Protection Bill, 2019: Part II

by Rishab Bailey, Vrinda Bhandari, Smriti Parsheera and Faiza Rahman.

In our previous post, we discussed some of the concerns arising from the draft Personal Data Protection Bill, 2019 (the "Bill"), focusing on how the Bill deals with the State-citizen relationship. We examined the provisions granting wide-ranging exemptions to the State for surveillance and law enforcement purposes, as well as the problems in the design and functioning of the proposed Data Protection Authority of India (the "DPA"). In this post, we extend our analysis to certain other issues with the Bill, including the provisions on data localisation, the processing of children's data, the implementation of privacy by design and the regulatory sandbox, the inclusion of non-personal data, the employment exception, and the research exemption. We argue that these provisions need to be amended in order to provide more effective safeguards for the privacy of individuals.

Cross Border Data Transfer (Data Localisation)

One of the most contentious questions in the drafting of India's privacy law has been data localisation, or, in other words, the nature and scope of restrictions that should apply to cross-border data transfers.

Section 33 of the Bill permits the transfer of personal data outside India while imposing restrictions on two sub-categories of personal data. The first sub-category consists of sensitive personal data, such as financial data, health data, data on sexual orientation, biometric data, etc., which has to be mirrored in the country, i.e. a copy of such data must be kept in India. The second sub-category consists of critical personal data, which is barred from being transferred outside India altogether. This sub-category has not been defined in the Bill; its constituents are left to be notified by the Government at a subsequent stage. While imposing these restrictions, the Bill also specifies (in Section 34) a list of conditions under which a cross-border data transfer can take place. These include a determination by the Government of the adequacy of another country's laws, or requirements for data processing entities to put in place intra-group schemes or contracts that ensure appropriate standards of protection for Indian data sent outside the country.

These provisions are significantly more liberal than those proposed in the 2018 version of the draft Data Protection Bill released by the Justice Srikrishna Committee ("PDP Bill, 2018"). The PDP Bill, 2018, required both personal and sensitive personal data to be mirrored in the country, subject to different conditions and exemptions. These provisions attracted significant criticism from dissenting members of the Srikrishna Committee, technology companies (particularly multinationals), and sections of civil society (Basu et al., 2019). We had also argued in our submissions on the PDP Bill, 2018, that these restrictions were overly broad and that the costs of strict localisation measures may outweigh any possible gains.

The move to liberalise these provisions will undoubtedly be welcomed by many stakeholders. The less stringent provisions of the Bill imply that costs to business may be limited, and that users will have greater flexibility in choosing where to store their data. Prima facie, the Bill appears to reflect a more proportionate approach to the issue, thereby bringing it within the framework of the Puttaswamy tests of proportionality and necessity (Bhandari et al., 2017). This is achieved by implementing a sliding scale of obligations, ostensibly based on the sensitivity or vulnerability of the data: "critical personal data", being the most vulnerable category, is required to be localised completely, while "personal data", being the broadest category, can be freely taken out of the country. The obligations with respect to "sensitive personal data" lie in between these two.

However, we believe that even the revised provisions of the Bill may not withstand the test of proportionality.

As we have previously explained on this blog, there are broadly three sets of arguments advanced in favour of imposing stringent data localisation norms (Bailey and Parsheera, 2018):

  1. Sovereignty and Government functions: The first claim refers to the use of data as a resource to further India's strategic and national interests, to enable the enforcement of Indian laws, and to support the discharge of other state functions.
  2. Economic benefits: The second claim is that economic benefits will accrue to local industry through the creation of local infrastructure and employment, and by aiding the development of the artificial intelligence ecosystem.
  3. Civil liberties: The third claim is that local hosting of data will enhance its privacy and security by ensuring that Indian law applies to the data and that users can access local remedies. It will also protect (Indian) data from foreign surveillance.

If the Bill were localising data for the first two purposes, it would have required local copies to be retained of all categories of personal data, as was the case with the previous draft of the law. Instead, privacy protection now appears to be the main consideration, given the changes from the PDP Bill, 2018, and the fact that the vulnerability or sensitivity of the data is the differentiating factor for the obligations imposed. If so, we believe that the aims of this provision can be achieved through less intrusive but equally effective measures, such as contractual conditions and adequacy tests for the jurisdiction of transfer, as already provided for in Section 34 of the Bill. This is also in line with the position under the European General Data Protection Regulation ("GDPR"). Further, the extra-territorial application of the Bill ensures that the data protection obligations under the law continue to apply even if the data is transferred outside the country.

If data localisation is instead meant to serve goals other than privacy, sectoral obligations can be used to meet those specific objectives based on a perceived and specific need. This is already the case for digital payments data, certain types of telecom data, and Government data. Any such move would, of course, have to be preceded by an open and transparent process that sets out the problem sought to be addressed and assesses the different alternatives before arriving at localisation as a solution.

Given the infirmities in the Bill, particularly concerning the powers of the State, individuals and businesses may well believe that their data would be more secure if stored and processed in jurisdictions with strong data protection laws and a more advanced technical ecosystem. Therefore, assuming that privacy is the primary motivating factor behind the design of this provision, it would make sense to allow individuals to store their data in any location of their choice, provided that the specified conditions are met.

Accordingly, we believe that Section 33 ought to be deleted from the Bill. As an alternative, general restrictions on cross-border transfers may be imposed only for "critical personal data". In this context, it is also important that the Bill should provide a definition of "critical personal data" or at least clarify the grounds on which personal data may be declared as such. This would help limit the otherwise extremely broad powers of the State in this respect.

Children's Data

Section 16 of the Bill contains an enhanced set of obligations for fiduciaries dealing with the personal data and sensitive personal data of children. It requires fiduciaries to act in the best interests of the "child", defined as a person below 18 years of age. The provision mandates age verification and parental consent for the processing of such data, which, while well-intentioned, gives rise to some concerns.

For instance, a large part of India's internet-using population comprises young people, including children. Requirements for age verification and parental consent may not be practical for a vast number of children who may not have access to the relevant documents, may not receive parental support, or whose parents may not be in a position to engage with the technology and verification systems. Such a requirement is also likely to have a disproportionate impact on already vulnerable and marginalised communities, including adolescent girls. Section 16 also leads to a loss of agency for many young internet users, who are often creators as well as consumers of online content for educational, recreational, entertainment and other purposes.

The procedure for mandatory age verification is also beset with ambiguity, since any requirement to verify the age of children will effectively amount to verifying the age of all users, in order to distinguish children from adults. This would clearly be a disproportionate invasion of privacy.
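
To see why, consider what a service must do to apply child-specific rules in the first place. The sketch below is a minimal, hypothetical illustration in Python (the function and field names are our own, not drawn from the Bill): the consent check can only be evaluated if a verified date of birth has been collected from every user, adult or child alike.

    from datetime import date

    # Hypothetical age gate: to decide whether child-specific rules apply,
    # the service must first learn every user's age, i.e. collect and
    # verify a date of birth from adults and children alike.
    def requires_parental_consent(date_of_birth: date, today: date) -> bool:
        age = today.year - date_of_birth.year - (
            (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
        )
        return age < 18  # the Bill defines a "child" as a person below 18

    # The check cannot be skipped for anyone: an unverified user might be a child.
    print(requires_parental_consent(date(2005, 6, 1), date(2020, 4, 10)))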

Finally, the Bill does not draw any distinction in the level of protection based on the age of the child, in effect treating a 5-year-old and a 17-year-old in the same manner. This, in essence, goes against the UN Convention on the Rights of the Child, to which India is a party. The Convention, inter alia, recognises that: (a) children should be guided in a manner "consistent with the evolving capacities of the child" and have a right to engage in play and recreational activities "appropriate to the age of the child" (Articles 5, 14 and 31); (b) children have a right to the protection of the law against invasions of privacy and a right to peaceful assembly (Articles 16 and 15); and (c) access to mass media, particularly from "a diversity of national and international sources", is important for a child's development (Article 17).

In order to allay these concerns, we recommend that the provisions pertaining to parental consent and age verification (Sections 16(2) and 16(3) of the Bill) should be deleted. In the event these provisions are retained, they should be amended to prevent the complete loss of agency for many young internet users; to enable a level of protection that is consistent with the age group of the child; and to ensure that the rights of all individuals to expression and access, including children, are not unduly restricted. Accordingly, Section 16 should lay down that the principle of best interests of the child and the requirement of consent from parents and guardians have to be interpreted "in a manner consistent with the evolving capacities of the child". Further, any requirement of age verification should be limited to guardian data fiduciaries to be classified by the DPA. Finally, the factors to be considered under Section 16(3) while deciding upon the manner of verification, should also include the impact of the verification mechanism on the privacy of other data principals.

Privacy by Design and Sandbox

Section 22(1) of the Bill requires every data fiduciary to prepare a privacy by design ("PBD") policy containing details of the processing practices followed by the fiduciary and the risk-mitigation measures put in place. According to Sections 22(2) and 22(3), the data fiduciary may submit the PBD policy to the proposed DPA for certification, which shall be granted upon satisfaction of the conditions mentioned in Section 22(1). The fiduciary and DPA shall then publish the certified PBD policy on their websites.

Section 22, as currently drafted, only requires data fiduciaries to prepare a PBD policy; it does not require them to implement it. Without an implementation requirement, the PBD policy would remain a mere paper exercise and serve no real privacy-enhancing purpose. In contrast, Section 29 of the PDP Bill, 2018, required every data fiduciary to "implement policies and measures to ensure [privacy by design]". Similarly, Article 25 of the GDPR requires data controllers to "implement appropriate technical and organisational measures" in order to meet the requirements of the regulation.
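
To illustrate the difference between stating and implementing such a policy, the following is a minimal sketch, in Python, of two measures commonly associated with privacy by design: pseudonymisation and data minimisation. The field names and the choice of measures are hypothetical and ours, not requirements drawn from the Bill.

    import hashlib
    import os

    # Per-deployment secret salt; assumed to be generated once and stored securely.
    SALT = os.urandom(16)

    def pseudonymise(identifier: str) -> str:
        # Replace a direct identifier with a salted one-way hash.
        return hashlib.sha256(SALT + identifier.encode()).hexdigest()

    def minimise(record: dict, allowed: set) -> dict:
        # Retain only the fields needed for the stated purpose.
        return {k: v for k, v in record.items() if k in allowed}

    # Hypothetical example: a fiduciary stores a support ticket without
    # retaining the raw phone number.
    raw = {"phone": "+91-9800000000", "issue": "billing", "city": "Delhi"}
    stored = minimise(raw, {"issue", "city"})
    stored["user_ref"] = pseudonymise(raw["phone"])
    print(stored)

A policy document alone mandates none of this; only an obligation to implement would require a fiduciary's systems to actually behave in this manner.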

Further, given the range and scope of duties conferred on the DPA, requiring it to verify and certify every data fiduciary's PBD policy (as an ex-ante measure) could cast an unreasonable burden on the regulator. It must be noted that the scrutiny of a PBD policy will have to take into account each entity's specific business model, and the specific risk mitigation measures proposed to be implemented. This is clearly not an insignificant task. We therefore believe it would be prudent to permit independent data auditors to certify PBD policies, with further review of the certified policies by the DPA in cases where it is assessing the fiduciary's eligibility to participate in the sandbox under Section 40. This would reduce the burden on the DPA while enabling quicker turn-around times for business entities. The DPA could in turn regulate the process of certification by independent auditors through appropriate regulations.

We turn now to the regulatory "sandbox". This is a new concept in the data protection discourse in India, although other sectors, such as finance, have already seen such developments. For instance, the Reserve Bank of India announced an enabling framework for a regulatory sandbox in 2019. There are also international examples of such measures in the data protection context, such as the UK Information Commissioner's sandbox initiative.

Section 40 of the Bill permits the DPA to restrict the application of specific provisions of the Bill to entities that are engaged in developing innovative and emerging technologies in areas such as artificial intelligence and machine learning. Presumably, the purpose is to enable companies to experiment with new business models without the fear of falling foul of the law, in a controlled setting where exposure to harm can be limited, while still enabling supervision by the authorities. According to Section 40, the DPA can modify, for eligible entities, the application of the provisions of the Bill relating to clear and specific purposes for data processing, collection only for a specific purpose, and limited periods of data retention. In order to be eligible for the sandbox, an entity must have in place a PBD policy certified by the DPA (Section 22).

The current draft vests significant discretion in the DPA in deciding which entities will be included in or excluded from the sandbox. Despite this, Section 40 provides no clear criteria by which the DPA is to judge an entity's entry into the sandbox. We believe that criteria based on the expected level of innovation, public interest, and viability should be specified in Section 40 itself, to improve transparency and accountability. The specification of such criteria should be accompanied by a requirement for a written, reasoned decision by the DPA, so as to reduce arbitrariness. Apart from this, the DPA should also be empowered to lay down conditions and safeguards that data fiduciaries must follow (with respect to personal data processed while in the sandbox) once they have exited the sandbox. Finally, the proposed revisions to the certification process for the PBD policy (discussed above) will require consequential changes to Section 40.

Non-consensual Processing for Employment Purposes

Section 13 of the Bill gives significant leeway to employers for carrying out non-consensual processing of personal data, other than sensitive personal data, that is necessary in the context of employment. Given the inherent inequality in an employer-employee relationship, we believe that the Bill should have greater safeguards to prevent coercive collection or misuse of employees' personal data by employers.

For instance, the present draft of the provision permits non-consensual processing of an employee's personal data if considered necessary for "any other activity relating to the assessment of the performance" of the employee. This phrase is very wide in scope and can easily be misused by the employer, for instance through continuous monitoring and analysis of all of the employee's activities, including time spent in front of the screen, private calls and messages, etc. Given the increasing relevance of remote working arrangements, this sort of monitoring could even extend outside the office premises.

We have already referred to the significant imbalance of power in the relationship between employee and employer. There are many ways in which technology can further tilt this balance in favour of the employer. For instance, there has been considerable reporting on "productivity firings" by Amazon, which is said to use "deeply automated tracking and termination processes" to gauge whether employees are meeting the (very stringent) productivity demands placed on them (Lecher, 2019). Similar stories of management or termination based on algorithmic decision-making are increasingly being heard from other sectors of the economy. When one considers the advances being made in tracking and privatised surveillance systems, the ability of employers to collect and analyse their employees' data without consent can become extremely problematic.

Accordingly, we believe this broad exemption for employers should be done away with by deleting the provision. If the provision is to be retained, however, we recommend two amendments. First, the provision should only permit such non-consensual processing as is "reasonably expected" by the data principal. Second, any processing under this provision should be proportionate to the interests sought to be achieved.

Exemption for Research, Archiving, or Statistical Purposes

Section 38 permits the DPA to exclude the application of all parts of the law to the processing of personal data that is necessary for research, archiving or statistical purposes, subject to certain prescribed criteria. As highlighted in our earlier submissions, the framing of the provision is very broad, as it extends the exemption to research and archiving conducted for a wide variety of purposes, including situations where this may not be appropriate, such as research that is predominantly commercial in nature. Market research companies carrying out consumer surveys, focus group discussions, etc., often use intrusive means of data collection and are repositories of large quantities of personal data. We believe that such purposes should not be exempted from the purview of data protection requirements, as doing so would significantly weaken the privacy protections offered to individuals without any significant public benefit being achieved.

Accordingly, we recommend narrowing the scope of the provision to the processing of personal data where the purpose is not solely commercial in nature and the activity is being conducted in the public interest. Notably, the GDPR also limits this exemption to "archiving purposes in the public interest, scientific or historical research purposes or statistical purposes" (Article 89). Further, a somewhat similar approach has been adopted in the Copyright Act, 1957, which in Section 32 provides for the issuance of licences to produce translations of works, inter alia, for research purposes. Section 32 specifically excludes "industrial research" and "research by bodies corporate" (not being government-controlled bodies) "for commercial purposes" from the scope of the provision; thus, the exemptions from copyright protection under the law do not apply to the use of copyrighted material for such categories of research.

In addition, it is unclear why provisions pertaining to transparency, fair and reasonable processing, the deployment of security safeguards, etc., are not made applicable to entities that avail of the exemption under Section 38, as was the case in the earlier draft, the PDP Bill, 2018. As mentioned above, commercial research companies collect, process and store large quantities of personal data, making them susceptible to significant breaches of privacy (through data breaches, unauthorised disclosures, etc.). We therefore suggest that Section 38 be revised so that the provisions of the law are disapplied only to the extent that they would significantly impair or prevent the achievement of the relevant purposes. Notably, the UK Data Protection Act, 2018, follows a similar approach in Schedule 2 (Part 6, paragraphs 27 and 28).

Non-personal Data

Section 91(2) is a new provision introduced in the latest version of the Bill. Under this section, the Central Government may, in consultation with the DPA, direct any data fiduciary or data processor to provide any non-personal data, or personal data in anonymised form. The Government is required to lay down regulations governing this process. Such data is to be used for "better targeting of delivery of services or formulation of evidence-based policies" by the Government.

We find that this provision is misplaced in the Bill and disproportionate in nature, for the following reasons. First, the regulation of non-personal data flows is outside the scope of the present law. Notably, the White Paper and the Report of the Justice Srikrishna Committee exclusively consider the regulation of personal data, as do the Statement of Objects and Reasons and the Recitals to the Bill.

Second, the Government has already constituted a Committee of Experts to examine regulatory issues arising in the context of non-personal data. The inclusion of this provision pre-empts the findings and recommendations of this Committee of Experts.

Third, the provision does not adequately consider and balance all relevant interests, as it grants the State an omnibus power to call for any non-personal data. This could affect the property rights of data fiduciaries, competition in the digital ecosystem (especially where the State is a market participant), and individual privacy, particularly where unrelated data sets available with the Government could be processed to reveal personally identifiable data. There is a significant literature on the possibility of anonymised data sets being re-identified through advanced computing, or by being combined with other information to reveal personal data.
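
To make the re-identification risk concrete, the following is a minimal, hypothetical sketch in Python of a classic linkage attack: a data set stripped of names is joined to a publicly available data set on shared quasi-identifiers (here, pin code, birth year and gender), attaching an identity to each "anonymous" record. The data and column names are invented for illustration.

    import pandas as pd

    # An "anonymised" data set: direct identifiers removed, quasi-identifiers retained.
    anonymised_health = pd.DataFrame({
        "pin_code":   ["110001", "110001", "560034"],
        "birth_year": [1985, 1992, 1985],
        "gender":     ["F", "M", "F"],
        "diagnosis":  ["diabetes", "asthma", "hypertension"],
    })

    # A hypothetical public data set (e.g. a voter roll) with names attached.
    public_roll = pd.DataFrame({
        "name":       ["A. Sharma", "B. Khan", "C. Rao"],
        "pin_code":   ["110001", "110001", "560034"],
        "birth_year": [1985, 1992, 1985],
        "gender":     ["F", "M", "F"],
    })

    # Joining on the shared quasi-identifiers attaches a name to every record.
    reidentified = anonymised_health.merge(
        public_roll, on=["pin_code", "birth_year", "gender"])
    print(reidentified[["name", "diagnosis"]])

Where the combination of quasi-identifiers is unique in both data sets, as in this example, the join is exact; this is the mechanism behind several well-known re-identification studies.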

Fourth, calling for data on the ground that it may be used for "evidence-based policy making" is vague, ambiguous and susceptible to arbitrary use. Existing provisions of law already allow sectoral regulators and Government agencies to collect relevant data (personal or non-personal) where required for regulatory or policy interventions. The provision would therefore fail the Puttaswamy requirements of proportionality and appropriate procedural safeguards.

In the circumstances, we believe the provision must be dropped from the Bill.

Conclusion

In this post, we have highlighted how the Bill offers limited privacy protections for individuals in various contexts, such as the employee-employer relationship or the processing of personal data by entities engaged in commercial research and statistical work. At the same time, certain provisions, while seemingly well-intentioned, require significant fine-tuning so as not to unduly limit individual rights, such as the requirement for verification of users' age.

We show that by failing to require data fiduciaries to implement a PBD policy, the Bill merely envisages a paper requirement, while at the same time casting a significant burden on the DPA to certify such policies. Similarly, the provision on regulatory sandboxes, while perhaps not a bad idea in theory, requires much more discussion and work. To begin with, we propose modifications to limit the discretionary power available to the DPA, particularly in the selection of entities to take part in the sandbox. Finally, we also explain why the provisions pertaining to data localisation and non-personal data are poorly conceptualised and disproportionate in nature.

Based on the discussions here and in our previous post on the Bill, we conclude that there are a number of areas where the Bill needs further work before it can be said to provide an appropriate standard of data protection. Further, the introduction of various completely "new" provisions at this stage, such as those pertaining to non-personal data, sandboxes, social media intermediaries, and consent managers, is less than ideal given the significant public discussion carried out on the draft law over a two-year period. In this context, the fact that the Joint Parliamentary Committee currently examining the Bill has called for, and is considering, public comments is a positive step.

References

Bailey and Parsheera, 2018: Rishab Bailey and Smriti Parsheera, Data Localisation in India: Questioning the Means and Ends, NIPFP Working Paper No. 242, October 2018.

Basu et al., 2019: Arindrajit Basu, Elonnai Hickok and Aditya Singh Chawla, The Localisation Gambit: Unpacking Policy Measures for Sovereign Control of Data in India, The Centre for Internet and Society, March 19, 2019.

Bhandari et al., 2017: Vrinda Bhandari, Amba Kak, Smriti Parsheera and Faiza Rahman, An analysis of Puttaswamy: the Supreme Court's privacy verdict, LEAP Blog, September 20, 2017.

Justice K.S. Puttaswamy v. Union of India (Right to privacy case), 2017 (10) SCC 1.

Lecher, 2019: Colin Lecher, How Amazon automatically tracks and fires warehouse workers for 'productivity', The Verge, April 25, 2019.

 

Rishab Bailey, Smriti Parsheera, and Faiza Rahman are researchers in the technology policy team at the National Institute of Public Finance and Policy. Vrinda Bhandari is a practising advocate in Delhi. The authors would like to thank Renuka Sane and Trishee Goyal for their inputs and valuable discussions.
