
Monday, January 27, 2020

Announcements

Researchers in technology policy

The National Institute of Public Finance and Policy (NIPFP) is looking to hire two researchers interested in the technology policy field, on a full-time basis. In the Technology Policy group at NIPFP, we aspire to carry out cutting-edge research and analysis, develop novel ideas and insights, and contribute to policy debates and the development of public knowledge on relevant technology policy issues.

While we work on a broad array of issues in the overall landscape of technology policy, at present, our work focuses primarily on telecom policy, privacy, data governance, AI, open source, RegTech, distributed ledger and crypto-currencies, and competition policy.

Some examples of our work in this field include:

  1. Smriti Parsheera, Adoption and regulation of facial recognition technologies in India: why and why not?, Data Governance Network Working Paper 05, December 2019.
  2. Rishab Bailey and Trishee Goyal, Fiduciary relationships as a means to protect privacy: Examining the use of the fiduciary concept in the draft Personal Data Protection Bill, 2018, Data Governance Network Working Paper 04, December 2019.
  3. Vrinda Bhandari and Renuka Sane, A Critique of the Aadhaar Legal Framework, The National Law School of India Review, Vol. 31, Issue 1, July 2019.
  4. Rishab Bailey and Smriti Parsheera, Data localisation in India: Questioning the means and ends, October 2018.
  5. Rishab Bailey, Smriti Parsheera, Faiza Rahman and Renuka Sane, Disclosures in privacy policies: does "notice and consent" work?, NIPFP Working Paper No. 246, December 2018.
  6. Smriti Parsheera, Challenges of Competition and Regulation in the Telecom Sector, Economic and Political Weekly, Vol. 53, Issue 38, 22 September 2018.
  7. Rishab Bailey, Vrinda Bhandari, Smriti Parsheera and Faiza Rahman, Use of personal data by intelligence and law enforcement agencies, The Leap Blog, August 2018.
  8. Devendra Damle and Shubho Roy, Estimating the impact of the draft drone regulations, March 2018.
  9. Smriti Parsheera, CCI's order against Google: infant steps or a coming-of-age moment?, The Leap Blog, February 2018.
  10. Suyash Rai, Dhiraj Muttreja, Sudipto Banerjee and Mayank Mishra, The Economics of Releasing the V-band and E-band Spectrum in India, December 2017.
  11. Ajay Shah, Predatory pricing and the telecom sector, April 2017 and Smriti Parsheera, Building blocks of Jio's predatory pricing analysis, April 2017, on The Leap Blog.
  12. Comments on the (Draft) Personal Data Protection Bill, 2018, October 2018 and Submissions to the Justice Srikrishna Committee's White Paper on Data Protection, January 2018.

NIPFP is an exciting workplace where you will be surrounded by interesting people.

The Technology Policy group is an inter-disciplinary team of engineering, law, economics, technology and policy professionals.

The remuneration will be commensurate with the candidate's experience and will be comparable with what is found in other research institutions.

Requirements:

  • You must have a Master's degree, two years of work experience and very strong written and spoken English.
  • You must have a background in science/engineering/technology, public economics or public policy.
  • You must be self-motivated and eager to contribute to public policy debates in the area of technology policy.
  • Keen and demonstrated knowledge in areas relating to the frontiers of science and technology is a must.

Interested candidates may send in their CV, along with a writing sample of up to 10 pages, to: lepg-recruitment@nipfp.org.in

Wednesday, January 15, 2020

Announcements

Centre for Civil Society (CCS) is looking for an Associate Director (Research) based in New Delhi to lead research initiatives on structural reforms in education, regulatory barriers to livelihoods and entrepreneurship, and quality of governance.

About Centre for Civil Society

CCS is an independent think tank based in New Delhi studying structural public policy challenges in India and advocating market-based reform ideas to solve these problems. Our work in education, livelihood, and policy training promotes choice and accountability across private and public sectors. To translate policy into practice, we engage with policy and opinion leaders through research, pilot projects and advocacy.

Over the years, CCS’ research team has built a reputation for shining light on the dark recesses of governance: collecting and analysing data through primary surveys, ethnographic studies, public datasets, administrative data and RTI applications; compiling legal and legislative analysis; testing the validity of innovative governance tools through pilot projects; and proposing redesigns of existing regulatory frameworks.

Examples of our research outputs

  1. In 2019, we studied the regulatory interface between government and private schools by analysing administrative data on licensing, fee regulation and inspections to understand the challenges of starting and operating private schools in northern India.
  2. In 2019, we developed a quality of regulation checklist and are assembling a dataset to measure the volume and complexity of regulation across India.
  3. In 2018, we examined the ground realities of the ease of doing business effort at the state level in Delhi, producing 7 distinct papers using a combination of primary surveys, time and motion studies and investigative journalism.
  4. To test if the government was living up to its promise of including street vendors in urban planning and governance, we conducted a deep study of the implementation of the Street Vendors Act in 2017 and 2019 using case law analysis, analysis of state-level data and stakeholder interviews.
  5. We have conducted multiple pilot projects to test the design and application of vouchers for school and skill education, inclusive education using 25% RTE reserved seats, school quality improvement mechanisms etc.
  6. Over the years, we have produced over 150 research papers on the challenges facing micro, small and medium entrepreneurs and issues in education governance using observational studies and primary data collection.

Role profile

The Associate Director (Research) will lead data-driven research initiatives and deliver different types of research output. She will lead and manage high-quality teams oriented towards delivery and impact. She will partner with our advocacy team to effectively disseminate outputs from our research to elected representatives, policymakers, media and other important stakeholders. She will also write funding proposals and manage commitments to donors.

Researchers at CCS have sharp analytical and critical thinking abilities, and curiosity about the interplay between government, citizens, markets and civil society. An ideal team member is energetic, proactive, able to work independently and collaboratively as needed, and possesses both integrity and the humility to seek advice when needed.

Key responsibilities

  • Execute research agenda on education reforms and regulatory barriers to entrepreneurship:
    • Identify knowledge gaps and develop research questions/projects.
    • Ensure research is carried out within defined timelines across multiple projects including:
      • Directing research design and overseeing the implementation.
      • Quality assurance of data collection and analysis.
      • Building relationships with external service providers and researchers.
      • Troubleshooting.
    • Build a journal publication pipeline and ensure research output meets the standards of rigour and relevance required for publication in leading journals.
    • Write opinion pieces and commentaries on research undertaken and contemporary issues for newspapers, journals and other relevant publications.
    • Supervise managers/senior managers and associates on particular initiatives.
  • Represent CCS in external forums and present research outputs

Candidate qualifications


Experience and Education

  • At least an MA/MSc in Economics/Statistics/Political Science/Political Economy/Public Policy (PhD is a bonus).
  • 10-12 years of experience:
    • At least 5 years in a research role with a think tank or research institution.
    • At least 5 years of experience managing teams and leading projects.
    • Ability to develop and oversee project budgets, set priorities, plan timelines and meet deadlines, and hire and manage teams.
  • Proficiency and experience in at least three of the following research skills:
    • Running primary surveys (questionnaire design), response coding, and analysis of quantitative responses.
    • Working with datasets (e.g., NSSO, CMIE’s household data or the Government of India’s U-DISE raw data).
    • Working with government administrative data (IT system logs, inspection reports, grievance logs, etc.).
    • Conducting case studies, using ethnographic research methods.
  • Proficiency in using R and LaTeX (Python programming skills are a bonus).
  • Should have published reports, op-eds and papers in recognised journals on issues relating to structural reforms, governance, education, and/or doing business in India.

Skills

  • Analyzing complex qualitative and quantitative data, crafting possible solutions, and recommending actions.
  • Knowledge of a wide range of research methodologies and techniques, including qualitative and quantitative approaches.
  • Proficiency in high-quality, succinct, error-free writing in English.
  • Networking with leading researchers in India and around the world.
  • Communicating research content effectively, both orally and in writing, in English, to government partners, policy influencers, team members, researchers, partner organisations and donors.
  • Processing vast amounts of information and generating useful insights quickly.
  • Up to date knowledge of the Indian policy process and political landscape.

Application

Qualified candidates should apply using this link. Applicants must attach an updated CV, a cover letter, at least 2 publications (solo or co-authored), and 2 references. Selected candidates will be called for interviews.

Monday, January 13, 2020

Fiduciary relationships as a means to protect privacy: Examining the use of the fiduciary concept in the draft Personal Data Protection Bill, 2019

by Rishab Bailey and Trishee Goyal.

The Justice Srikrishna Committee Report of August 2018 (the "Report") introduces the concept of a fiduciary relationship into privacy jurisprudence in India by categorising data processing entities as "data fiduciaries" and individuals as "data principals". The draft Personal Data Protection Bill, 2019 (the "PDP Bill") attempts to operationalise the concept by establishing various rights of data principals and associated obligations on data fiduciaries.

The idea of using the fiduciary concept to protect an individual's privacy rights is not new - traditional fiduciary relationships such as that between a doctor and patient or a lawyer and client, do recognise duties of confidentiality. However, the PDP Bill is one of the first attempts to use the fiduciary framing as a basis for a generic data protection law.

In this context, in a recently released paper, we examine whether and how the fiduciary framing is suitable for developing a generic privacy framework and whether it provides individuals any protections over and above those typically seen in notice and consent regimes. In particular, we examine:

  1. Whether all data processing entities are in fiduciary relationships with individuals (and therefore whether the fiduciary concept works as the basis for a generic data protection law)?
  2. Whether, in theory, the use of the fiduciary concept can adequately protect an individual's privacy rights?
  3. Whether the obligations imposed by the PDP Bill are similar to the duties expected of traditional fiduciaries?
  4. Whether the fiduciary framing in the PDP Bill has any practical effect?

Before beginning to answer the above questions, however, it is relevant to briefly explain what is meant by a `fiduciary relationship'. A fiduciary relationship is one characterised by a high degree of vulnerability of one party to the other, despite which the weaker party is required to repose trust or confidence in the other (Miller, 2014; Rotman, 2011). In order to protect the beneficiary in such a relationship, the law casts a series of onerous obligations on the more powerful party, including, importantly, the duties of loyalty and care (Frankel, 2011). This implies that the fiduciary is required to:

  • Ensure that it acts so as to protect or advance the interests of the beneficiary, or for the benefit of the beneficiary (Aditya Bandopadhyay, 2011; Jayantilal Mistry, 2015; Treesa Irish, 2010). The fiduciary cannot normally put itself in a position that may be seen as conflicting with the interests of the beneficiary.
  • Exercise reasonable skill (in accordance with general sectoral practice) while handling the affairs of the beneficiary.

The exact formulation of these duties varies depending on the nature of the relationship at hand and the vulnerabilities therein (though the beneficiary's interests must be placed before those of the fiduciary). For instance, trust law appears to cast more onerous duties and limits the fiduciary’s powers in more ways than company law. Often, beneficiaries in a trust relationship will not have any ability to comprehend risks or give proper consent (as may be the case where the beneficiary is a minor). The law therefore puts in place greater protections for a beneficiary of a trust vis-a-vis the trustee - for instance, Section 53 of the Trusts Act, 1882, permits the trustee to purchase the interest of a beneficiary only once a court is satisfied that the transaction is "manifestly to the advantage" of the beneficiary. In the company law context, however, the law generally recognises that if sufficient disclosures are made, shareholders and other stakeholders can act to protect the company against the erring director.

Are data processing entities in fiduciary relationships with their users?

Indian law requires a fairly high standard of vulnerability/power differential to be met before a relationship can qualify as "fiduciary". Courts have in fact held that relationships where there is no significant power differential are not fiduciary despite the exchange of confidential information therein. There is a catena of case law holding, for instance, that relationships of service provision are not fiduciary in nature, that situations where information is provided under a legal obligation are not covered under fiduciary relationships, that examination authorities are not fiduciaries qua students, that the chief justice is not a fiduciary qua puisne judges of the Supreme Court, that banks are not fiduciaries to their clients and that the central bank is not a fiduciary qua other banks (Canbank Financial Services, 2004; Aditya Bandopadhyay, 2011; Jayantilal Mistry, 2015; BPSC, 2012; Subhash Chandra Aggarwal, 2010; Naresh Trehan, 2014; Shri Rakesh Kumar Gupta, 2011).

This appears to indicate that the use of a fiduciary framing may not be suitable to cover the breadth of situations that a generic data protection law may need to cover. Many relationships of information exchange that would not qualify as fiduciary could nevertheless require some form of regulation (however light-touch) in order to protect individual autonomy and privacy. For example, the European General Data Protection Regulation (GDPR) applies in certain contexts to processing even by individuals.

On the other hand, Indian courts have, in a few instances, allowed the separation of the fiduciary parts of a relationship from its other parts (Union of India, 2009; Canbank Financial Services, 2004). This could imply that a relationship not normally fiduciary in nature could be considered as such only with respect to the transfer of information and the expectation of trust created thereby.

Overall, it appears that large data processing entities can certainly be in a position of power with respect to users by virtue of the information that users have to provide to them. Users do tend to expect their data to be used in certain limited ways, and in any event, not to disadvantage them or cause them harm (Punia, Kulkarni, Narayan, 2019). The power enjoyed by these entities can be unilaterally exercised so as to affect the rights and interests of the user (in the form of disclosure, acting on the basis of user profiling, etc.) and there is a social need for protection of user interests in such cases. The information asymmetry in such relationships, in addition to other issues such as the technical and structural concerns of the digital ecosystem, also makes it difficult for users to rely on contract, consumer protection or tort law, etc., to seek remedies. The information asymmetry problem in particular limits the ability of users to act as autonomous and informed agents while contracting or indeed seeking remedies. The fiduciary concept could therefore prove useful in protecting user rights in the digital ecosystem.

Some, if not many, relationships that involve the processing of personal data would not normally fall within the scope of the fiduciary concept. That said, statute could deem certain relationships as being akin to fiduciary relationships, and thereby bring within its scope all necessary actors in the digital ecosystem. Duties similar to those in a fiduciary relationship can then be imposed, should this be felt necessary to solve a particular social problem. Whether such a relationship can continue to be called "fiduciary", however, is another issue.

Can the concept ensure a sufficient standard of rights protection?

While the breadth of the information fiduciary concept may be narrow in so far as its coverage of relevant entities is concerned, it does lend itself to expansion, both in the scope of duties that could be made applicable to entities and in the scope of the data that forms the basis for the relationship.

The imposition of a high duty of loyalty and care, for instance, could lead to a high standard of rights protection by ensuring that data processing entities can only use data for the benefit of, or to maximise the gains to, the individual concerned. This could mean, for instance, that practices aimed at manipulating individuals based on profiles created using their personal data would no longer be permitted (Dobkin, 2018). Similarly, discriminatory practices that result from an analysis of personal data could also be prohibited (Dobkin, 2018). In fact, it is possible to argue that any monetisation of user data could be prohibited if this concept were applied to the fullest (Balkin, 2016).

The use of the fiduciary concept could also mean that obligations will be imposed irrespective of contractual terms between the parties. The duty of care requirement in a fiduciary relationship could be interpreted to imply security and other related obligations on data processing entities.

Further, the fiduciary concept does not have to be restricted merely to the protection of "personal data" (i.e. data that relates to or identifies an individual) but can cover all types of data that are exchanged in an unequal relationship, with an attendant expectation of confidentiality (i.e. the data should not be publicly known information). Therefore, the concept could also be used to cast obligations qua the usage of non-personal data gleaned from a user, as well as non-personal data derived from personal data of a user. This has in fact been attempted in two draft American laws - New York's Privacy Act and the federal Data Care Act. Notably, these laws specifically deem data processing entities to be fiduciaries, thereby requiring them to place their users' interests ahead of their own, and to avoid acting in a manner that could be considered unexpected or offensive to a reasonable user. In this context, it is interesting to note that the draft PDP Bill does in fact attempt to cover even "inferred" data within its ambit - i.e. information that is gleaned from analysing personal information.

That said, there remain questions as to the efficacy of the concept in protecting privacy rights.

First, commentators have pointed to the dissonance in treating service providers as fiduciaries at all. This is on grounds that the business models of many digital service providers, being based on monetizing user data, can never be squared with the fiduciary concept, which involves the fiduciary placing its interests second to that of the beneficiary (Khan and Pozen, 2018).

While it is undoubtedly true that fiduciary law requires the interests of the beneficiary to be given precedence over those of the fiduciary, it is worth noting that fiduciary law does recognise multiple standards of the duty of loyalty - based on the asymmetry or vulnerability at hand, the nature of the relationship, the ability of the beneficiary to understand the risks involved, and so on. It does not therefore appear inconceivable for the concept to be made workable. Even implementing a 'best interest' or 'benefit' based framing of obligations may not necessarily lead to a complete bar on targeted advertising or monetisation of user data (though certainly some existing business models may need to change).

Second, it is argued that existing law (in the US) - whether in contract or consumer protection law - already requires companies to adhere to standards of fair dealing and good faith and restrains them from acting as con-men (Khan and Pozen, 2018). While existing law does indeed give consumers some remedies against privacy invasive practices, the standard of care and the range of rights/obligations in Indian contract and consumer protection law are significantly limited. Though Indian contract law does prevent fraudulent behaviour, it does not include an express "good faith" requirement as US law does. Indian law only requires insurance contracts to be entered into in "utmost good faith", which entails disclosure of all material facts (Makkar, 2018; Law Commission of India, 2006). Indian consumer protection law too only protects consumers from certain limited harms such as those defined as "unfair trade practices". The recognition of a fiduciary standard can therefore improve rights protection in India by raising the standard of care above that in existing law.

Third, the fiduciary concept applies to information provided in private settings and with an expectation of privacy at the time it is provided. The reliance of the concept on the expectations of users as a standard to gauge the validity of practices can be problematic. It has been argued, for instance, that the use of this concept lacks any independent normative standard and therefore does not adequately protect privacy rights (Crowther, 2012; Schneier, 2009). Balkin himself notes that the standard he proposes would require users to take the monetisation of their data into account (Balkin, 2016). This may not be possible for all users.

Given that the concept only applies to data exchanged in private settings, an individual's privacy rights over data can end if the data is voluntarily placed in the public domain at any point in time. However, data protection regimes such as the GDPR continue to recognise certain individual rights over personal data even once it is made public - for instance, by recognising a right to be forgotten.

Fourth, concerns about the workability of the notice-consent framework as a means to overcome information asymmetry issues remain. As Khan and Pozen point out, the information asymmetry in the digital ecosystem is of a significant order (Khan and Pozen, 2018). It could therefore be argued that just as trust relationships often do not permit the beneficiary to consent to certain harmful acts (say, where incompetent to contract, or where the risk of harm is significant, as in the case of a beneficiary's interest being bought by the trustee), there is a need for higher standards of care to be imposed.

Are the obligations under the PDP Bill "fiduciary obligations"?

At the outset, it is important to note that the nature and scope of the obligations contained in the PDP Bill are indeed broadly similar to those imposed in the fiduciary relationships we have studied (trustee-beneficiary relationships, doctor-patient relationships and company-director relationships). The mechanisms used by the PDP Bill to address the agency problem can be summarised under five broad heads, as below:

  • Limitations on the authority/ability of the data fiduciary to act without knowledge of the data principal: Provisions pertaining to purpose limitation, limitations on data collection and storage, informed consent as the primary ground for processing data, right to correct data, etc.
  • Duty of loyalty and care: Requirement for fair and reasonable processing, obligations to secure data and implement privacy by design measures, requirement to ensure obligations flow with the data, etc.
  • Reduction of information asymmetry: Provisions pertaining to notice, high standards of consent, right to access and correct data, transparency (record keeping and disclosure) and accountability related provisions such as requirement to provide various types of information pertaining to the processing to the data principal, conduct data audits, have a data trust score for certain entities, requirement of data breach notification, etc.
  • Standard of care: A reasonable and proportionate standard of care is required by the PDP Bill. Obligations are scaled based on the risks of any particular processing practice, as well as the type of personal data concerned and the nature of entities involved. Notably, greater obligations are imposed on significant data fiduciaries and guardian data fiduciaries.
  • Remedies: Data principals can approach the data fiduciary and then adjudicatory forums for breach of the duties cast on data fiduciaries by the law. Mere breach of the obligations under the law can lead to penal action. The penalties that the draft law imposes are fairly stringent.

However, it must be kept in mind that the PDP Bill imposes a low standard of loyalty. By requiring the data fiduciary to inform the data principal of relevant processing practices, by ensuring purpose limitation, and making it mandatory for processing to be fair and reasonable, the legislation appears to impose a "good faith" standard. Such a standard does not appear to be entirely inconsistent with the fiduciary concept, being similar to the interpretation of fiduciary duties in the context of Indian company law, though it does not offer the same level of protection as the duties cast on trustees.

There is no general requirement in the PDP Bill for the data fiduciary to act in the user's interests, for their benefit or to avoid acting in a manner detrimental to the user. This can be contrasted with the Indian law pertaining to directors, doctors and particularly trusts, all of which contain provisions specifically limiting the ability of a fiduciary to act in their own interests or against those of the beneficiary. "Predictability" of processing - which is what the draft law aims at - is not synonymous with processing in the data principal's interests or for their benefit.

Though the Report repeatedly recognises the need for data fiduciaries to act in the "best interests" of the user, this standard is not explicitly included in the law; the general standard applied in the PDP Bill only requires data fiduciaries to act in a bona fide, diligent and reasonable manner. Notably, the PDP Bill itself uses the phrase "best interest" only once - in the context of protection of children's data.

A lower standard is generally used where it is easier to overcome information asymmetry problems or where social norms otherwise dictate the need to do so (Langbein, 2005). Accordingly, the low standard used in the draft law can be traced to the Justice Srikrishna Committee's aim of balancing business and individual interests. It is unclear if this is a sufficient standard of rights protection in the data protection context, in view of the various consent related problems in the digital ecosystem and the vast information asymmetries present in a country like India (Punia, Kulkarni, Narayan, 2019; Bailey et al., 2018; Matthan, 2017).

On the other hand, by imposing such a standard, the law puts the onus on individuals to take charge of and actively seek to protect their privacy rights (as opposed to being viewed through paternalistic eyes). Further, the safeguard of the data protection authority being able to step in and prohibit/seek modification of any particularly problematic practice acts as a check on the most pernicious practices of large data processing entities. However, relying on the data protection authority to ban pernicious practices is not the same as requiring the data fiduciary to act in the interests of or for the benefit of the data principal. Empowering the authority in this manner appears to detract from the fiduciary concept in that it enables ex-ante decision making by an executive authority, rather than enabling practices to be adjudicated as being in consonance with (or in breach of) fiduciary obligations by an adjudicatory authority.

In traditional fiduciary relationships, informed consent can be used to reduce/waive the obligations on the more powerful entity. However, the law also imposes various safeguards to protect against abuse. These usually take the form of specific disclosures, and in cases where consent is deemed impossible or insufficient, as is the case with minors in the context of trusts, courts are permitted to step in and act in their interests. The draft law does not specifically circumscribe the ability of the individual to consent to activities that may not necessarily be in his or her interest. This is not per se against the fiduciary concept, though both academics and courts appear hesitant to recognise the entirety of a fiduciary relationship as being voluntary/subject to contractual waivers (Leslie, 2005; Union of India, 2009).

Overall, it can be seen that the PDP Bill does indeed implement duties akin to those in traditional fiduciary relationships. The duties in the draft law do try to ensure that the data fiduciary processes data in accordance with the expectations of the data principal / that the data principal is aware of the processing taking place and its effects, i.e. that the agency problem in the relationship is reduced.

However, the scope of some of these duties and the standard set by them are not as high as seen in cases of traditional fiduciary relationships such as trusts. One may question whether the standard used in the draft law is appropriate in the privacy context, given the extent of vulnerability in many relationships of information exchange, particularly in the digital ecosystem. The difficulty for individuals in comprehending privacy risks, even when complete disclosures are made, may in fact mean that a standard closer to that used in trustee-beneficiary relationships would have been more suitable (this is also the logic behind the concept of 'data trusts', which are increasingly being spoken of as an alternative model of data governance). Further, the nature of exemptions under the draft law, particularly in the context of processing by the State and by employers, appears to go against the use of the fiduciary framing in the law. There is a significant power differential in citizen-state and employee-employer relationships, which can only be exacerbated by unchecked processing of personal data.

Why use the fiduciary concept? Does it have a practical effect?

The Srikrishna Committee chooses to utilise the fiduciary framing as the basis for the draft PDP Bill, 2018, in view of the perceived vulnerability of users to data processing entities and the apparent ability of the concept to balance rights protection with business interests. The concept is said to preserve autonomy of individuals while still enabling rights protection.

Given that India's constitutional framework does not necessitate a fiduciary framing to avoid constitutional restrictions (as is apparently the case in the US, due to the high standard of constitutional protection for speech rights), it makes sense to use the fiduciary framing if the concept would allow novel data protection related obligations to be imposed.

However, a summary comparison of the draft PDP Bill, 2019, with the GDPR indicates that the two laws are largely similar in terms of the nature of the obligations imposed (though the exact scope/contours of the obligations differ based on the specific language used in the laws). Both use largely notice and consent based models to protect user privacy (though this is enhanced and contains safeguards that are not normally present in contract law). Both regimes attempt to ensure that individuals are informed of processing activities and that individuals are given control of their personal data, not least through principles of purpose limitation, a high standard of consent, detailed notice requirements, provisions aimed at reducing information asymmetry, etc.

In addition, it must be remembered that the use of the term "data fiduciary" in the draft law does not in itself imply that the high standards that come with fiduciary obligations are being imposed on all data processing entities. The definitions section in the PDP Bill is not a deeming provision (unlike the definitions provisions in the two US laws referenced previously). The entities that come within the definition in the law would be subject to the (fiduciary like) obligations provided in the PDP Bill itself but would not necessarily be required to adhere to the obligations or standards typically imposed on fiduciaries (for instance, under Section 88 of the Trusts Act).

The use of the phrase "data fiduciary" is largely meaningless from a purely legal perspective. What it does achieve lies in its symbolic and signalling value to courts, the general public and businesses.

One may speculate that this could be an important reason for choosing to use the fiduciary concept in the draft law. It is not impossible to imagine that the PDP Bill uses the fiduciary concept to cast the illusion of crafting a new, user-centric privacy framework, without actually changing too much from notice and consent based regimes. The fiduciary concept is used in many legal contexts and is a term that people are familiar with (even if the nuances of this relationship are not very well understood). Doctors, guardians and other such fiduciaries are commonly expected to act in their beneficiary's interests / display a high standard of loyalty towards them. Use of the phrase "data fiduciary" may well lead people to assume or expect that the PDP Bill also imposes such a high standard of loyalty on data processing entities. Use of the terminology could therefore make the Bill more palatable to civil society, which craves greater standards of rights protection, thereby making it easier to "sell" the legislation to the general public amongst other stakeholders. The motivation for using the fiduciary concept could also be the need to differentiate the PDP Bill from laws such as the GDPR, particularly in view of the Srikrishna Committee's self-imposed mandate to find a "fourth path" to data protection.

Conclusion

The use of the fiduciary concept to enable data protection is an interesting method used to justify regulation of privacy harming practices in the US constitutional scheme. The application of the fiduciary concept to the data protection context prima facie appears a feasible way to protect user rights due to the duties of care and loyalty expected of fiduciaries.

However, the concept also suffers from certain infirmities. Notably, not all data processing entities may be in fiduciary relationships with individuals. Due to the focus on balancing state, business and data protection interests, the PDP Bill does not confer as high a standard of loyalty and care as may normally be expected in a fiduciary relationship (and in this respect, departs from the discussion in the Report). Unlike the law in the case of doctors, company directors, and particularly trusts, there is no general requirement for fiduciaries to act in the beneficiary's interest or for their benefit (except in the context of children).

Data processing entities will be required to comply with standards of good faith and reasonableness that are akin to the "fair dealing" standards found in contract law in many jurisdictions. This standard is higher than that under current Indian contract and consumer protection law, but is similar to requirements in the insurance industry. Fiduciaries will have to make all material disclosures, and act in accordance with generally accepted industry standards. Practices such as targeted advertising, tracking, etc., will not per se be barred except where children are involved (or where the data protection authority believes that such practices are likely to harm individuals and therefore bars them). The powers granted to the data protection authority to bar certain practices, while possibly useful given the low standards of loyalty cast on fiduciaries, also implies that decisions regarding permitted practices will be made by executive authorities rather than adjudicatory authorities. These issues detract from the fiduciary character sought to be established by the draft law.

But the draft law does, to an extent, meet the aim of preserving autonomy i.e. decision making power of individuals, and reducing inequality in bargaining power. This is primarily done by subjecting data processing entities to strict consent related requirements including by specifying (high) standards for notice and ensuring that consent must be granular. The provisions related to information disclosure, limited data collection, deletion, purpose limitation, data audits and privacy impact assessments, etc., are also vital in reducing the agency problem in the relationship.

However, the same ends could be achieved without using the fiduciary concept at all - as is done in the case of the GDPR. One may speculate that the use of the terminology could be necessitated by the need to differentiate the PDP Bill from the GDPR, or to take an uncharitable view, to make it appear that the law contains a higher standard of rights protection than it actually does.

References

Frankel, 1983: Tamar Frankel, Fiduciary Law, California Law Review, Vol. 71, Issue 3, 1983.

Rotman, 2011: Leonard I Rotman, Fiduciary Law's "Holy Grail": Reconciling theory and practice in fiduciary jurisprudence, Boston University Law Review, Vol. 91, Issue 3, 2011.

Sitkoff, 2014: Robert H Sitkoff, An Economic Theory of Fiduciary Law, in Philosophical Foundations of Fiduciary Law, Andrew S Gold and Paul B Miller (eds.), Oxford University Press, 2014.

Langbein, 2005: John H Langbein, Questioning the Trust Law Duty of Loyalty: Sole Interest or Best Interest?, Yale Law Journal, Vol. 114, Issue 1, 2005.

Aditya Bandopadhyay, 2011: Central Board of Secondary Education and Anr. v. Aditya Bandopadhyay and Ors., (2011) 8 SCC 497.

Jayantilal Mistry, 2015: Reserve Bank of India v. Jayantilal N Mistry, TC (C) 91/2015, Supreme Court, 2015.

Treesa Irish, 2010: Treesa Irish w/o Milton Lopez v. Central Information Commission and Ors., ILR 2010 (3) Ker 892.

Miller, 2014: Paul B Miller, The Fiduciary Relationship, in Philosophical Foundations of Fiduciary Law, Andrew S Gold and Paul B Miller (eds.), Oxford University Press, 2014.

Frankel, 2011: Tamar Frankel, Fiduciary Law, Oxford University Press, 2011.

Khan and Pozen, 2018: Lina Khan and David Pozen, A Skeptical View of Information Fiduciaries, Harvard Law Review, Vol. 133, 2019.

Gellman and Adler-Bell, 2017: Barton Gellman and Sam Adler-Bell, The Disparate Impact of Surveillance, The Century Foundation, 2017.

Crowther, 2012: Brandon T Crowther, (Un)Reasonable expectation of digital privacy, BYU Law Review, Vol. 2012, Issue 1, 2012.

Schneier, 2009: Bruce Schneier, It's time to drop the `expectation of privacy' test, Wired, 2009.

Makkar, 2018: Angad Singh Makkar, Doctrine of Good Faith and Fair Dealing: Lacuna in Indian Contract Law, IndiaCorpLaw, 2018.

Law Commission of India, 2006: Law Commission of India, Unfair (procedural and substantive) terms in contract, 199th Report of the Law Commission of India, 2006.

Punia, Kulkarni, Narayan, 2019: Swati Punia, Amol Kulkarni and Sidharth Narayan, User's perspectives on privacy and data protection, CUTS International, 2019.

Canbank Financial Services, 2004: Canbank Financial Services Ltd. v. Custodian and Ors., (2004) 8 SCC 355.

BPSC, 2012: Bihar Public Service Commission vs. Saiyed Hussain Abbas Rizwi and Ors., (2012) 13 SCC 61.

Subhash Chandra Aggarwal, 2010: Secretary General, Supreme Court of India v. Subhash Chandra Agarwal, AIR 2012 Del. 159.

Naresh Trehan, 2014: Naresh Trehan vs. Rakesh Kumar Gupta, WP (C) No. 85/2010, Delhi High Court, 2014.

Shri Rakesh Kumar Gupta, 2011: Shri Rakesh Kumar Gupta vs. The Central Public Information Officer and The Appellate Authority, Director of Income Tax (Intelligence), CIC/DS/A/2011/001128, CIC, 2011.

Union of India, 2009: Union of India v. Central Information Commission, WP (C) No. 8396/2009, Delhi High Court, 2009.

Pullen, 2007: Berkeley Community Villages Ltd and Anr. v Pullen and Ors., [2007] EWHC 1330 (Ch).

Dobkin, 2018: Ariel Dobkin, Information Fiduciaries in Practice: Data Privacy and User Expectations, Berkeley Technology Law Journal, Vol. 33:1, 2018.

Balkin, 2014: Jack M Balkin, Information Fiduciaries in the Digital Age, Balkinization blog, 2014.

Balkin, 2016: Jack M Balkin, Information fiduciaries and the first amendment, UC Davis Law Review, Vol. 49, Issue 4, 2016.

Balkin and Zittrain, 2016: Jack M Balkin and Jonathan Zittrain, A Grand Bargain to Make Tech Companies Trustworthy, The Atlantic, 2016.

Leslie, 2005: Melanie Leslie, Trusting Trustees: Fiduciary Duties and the Limits of Default Rules, Georgetown Law Journal, Vol. 94, Issue 1, 2005.

Bailey et al., 2018: Rishab Bailey, Smriti Parsheera, Faiza Rahman and Renuka Sane, Disclosures in privacy policies: Does notice and consent work?, NIPFP Working Paper No. 246, 2018.

Matthan, 2017: Rahul Matthan, Beyond Consent: A New Paradigm for Data Protection, Takshashila Discussion Document, 2017.

Friday, January 10, 2020

A gap has opened up between the performance of listed and unlisted companies

by Ajay Shah.

The performance of listed firms has been poor


We start with an index of the net sales (i.e. the top line) of listed companies. This is computed at the quarterly frequency, using the quarterly disclosures which are mandatory for listed companies. As has been our standard procedure when thinking about macroeconomics, we exclude the financial firms and the oil firms. Our methods for constructing indexes (Dua et al., 2013) are based on obtaining growth estimates from overlapping consecutive-quarter panels.
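As a rough illustration of this chaining procedure, here is a minimal sketch in Python (the actual implementation was done as R programs; the column names, the use of pandas, and the details of the procedure are assumptions for illustration, and seasonal adjustment is a separate step not shown). For each pair of consecutive quarters, we restrict attention to the firms observed in both, estimate aggregate growth on that balanced panel, and chain these growth estimates into an index:

    import pandas as pd

    def chain_index(df: pd.DataFrame) -> pd.Series:
        """df has columns: firm, quarter, net_sales (hypothetical names).
        Estimate growth from each consecutive-quarter balanced panel and
        chain the estimates into an index (base = 100)."""
        quarters = sorted(df["quarter"].unique())
        index = {quarters[0]: 100.0}
        for prev, curr in zip(quarters, quarters[1:]):
            a = df[df["quarter"] == prev].set_index("firm")["net_sales"]
            b = df[df["quarter"] == curr].set_index("firm")["net_sales"]
            common = a.index.intersection(b.index)  # firms seen in both quarters
            growth = b.loc[common].sum() / a.loc[common].sum()
            index[curr] = index[prev] * growth
        return pd.Series(index)

The same logic, applied to overlapping consecutive-year panels, gives the annual indexes used later in this article.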

Figure 1: Index of net sales of listed non-finance non-oil firms (Nominal, seasonally adjusted)

The big fact that is visible here is that the index went up by 4x from 1999 to 2008, and after that it went up by 2x from 2008 to 2019. This doubling over the recent 11 years corresponds to an average nominal growth rate of 6.5%, which is pretty poor.
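To see the arithmetic: a doubling over T years implies an average compound growth rate g satisfying

    (1+g)^{T} = 2 \quad\Longrightarrow\quad g = 2^{1/T} - 1, \qquad 2^{1/11} - 1 \approx 0.065,

i.e. about 6.5% per year.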

Figure 2: Index of operating profit of listed non-finance non-oil firms (Nominal, seasonally adjusted)

A similar picture is visible with the index of operating profit, for the same firms. The index went up by about 4x from 1999 to 2006, and has scored a doubling from 2006 to 2019, if we ignore the big decline in the Jul-Aug-Sep 2019 quarter owing to the unusual events for some telecom companies. This recent doubling in 13 years corresponds to an average nominal growth rate of 5.4%, which is also pretty poor.

In this article we wonder: While the listed firms have fared poorly, how different was the performance of unlisted firms?

Do we expect a significant difference between listed and unlisted firms?


Large firms such as Hyundai Motor, IBM India, L G Electronics India, Nokia Solutions & Networks India, Reliance Corporate IT Park, Toyota Kirloskar Motor, etc. are unlisted companies that are observed in the data. At first glance, we do not expect to see a significant difference between the overall average performance of listed and unlisted companies.

We expect that macroeconomic fluctuations impact upon both listed and unlisted companies, and that the overall growth of both groups should be roughly equal. The growth of all listed companies represents the performance of a diversified portfolio, as does the growth of all unlisted companies.

The traditional concept in India has been that after a firm reaches a certain level of maturity, an IPO takes place. The better and larger firms become listed companies. Listed companies have better access to capital, as the cost of capital goes down. So we have expected that a greater extent of investment may take place in listed companies. By this reasoning, we expect a somewhat higher growth rate for listed companies.

The performance of listed versus unlisted companies


Unlisted companies only have annual frequency disclosures, so we switch from quarterly results to the annual report. The CMIE firm database covers about 50,000 companies, each observed for at least one year. We construct indexes of the net sales of all the non-finance non-oil firms, both listed and unlisted. Our methods for constructing indexes are based on obtaining growth estimates from overlapping consecutive-year panels.

Figure 3: The index of net sales (nominal), log scale

The figure above shows the long time-series of the index of net sales. Two curves are shown: Listed companies and unlisted companies. The y axis is in log scale.

The two lines look the same till 2012. Until 2012, the diversification story worked: listed companies were a diversified portfolio, the unlisted companies were a diversified portfolio, but when the portfolio sales growth was calculated, the firm-specific or industry-specific fluctuations tended to cancel out and the overall performance was essentially the same.

But from 2012 on, the two groups have diverged substantially. The listed companies (the red line) have generated weak growth: about 50 per cent (nominal) in about 6 years. The unlisted companies have generated much stronger growth: about a doubling (nominal) in 6 years.

A similar difference is seen with the operating profit also.

Figure 4: The index of operating profit (nominal), log scale

Here also, we see remarkably similar performance between the two groups all the way till 2012. After that, the listed companies have delivered mediocre growth in performance: a total growth of 26% in 6 years. The blue line, for unlisted companies, has delivered total growth of 136% in 6 years.
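As a back-of-the-envelope annualisation of these two six-year totals (using the same compound-growth identity as before):

    1.26^{1/6} - 1 \approx 0.039, \qquad 2.36^{1/6} - 1 \approx 0.154,

i.e. roughly 3.9% per year for listed companies against roughly 15.4% per year for unlisted companies, in nominal terms.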

The investment behaviour of the two groups has also diverged


We measure investment at the firm level using the percentage change, year on year, of the Net Fixed Assets.
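In symbols, for firm i in year t, with NFA denoting net fixed assets:

    g_{i,t} = 100 \times \frac{\mathrm{NFA}_{i,t} - \mathrm{NFA}_{i,t-1}}{\mathrm{NFA}_{i,t-1}}.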

Figure 5: Year-on-year growth of net fixed assets (nominal)

We see a striking phenomenon here. Listed companies invested at a higher rate than unlisted companies in the early years, with the red and blue lines becoming roughly equal from 1997 to 2009. After that, the blue line has always been above the red line.

There is an investment slump in the listed companies, but this is less the case with unlisted companies.

Is this just an artifact induced by the measurement process?


The CMIE database is now pretty large: it has about 50,000 firms observed for at least one year. A large number of unlisted companies are observed. This was not the case in earlier years. An alternative hypothesis can, then, be proposed: perhaps unlisted firms were always more dynamic, but they had a tiny presence in the CMIE database, and what has changed in recent years is not the dynamism of unlisted firms but the coverage of the CMIE database.

In order to examine this, we look at the magnitudes of listed and unlisted firms in the CMIE database.

Figure 6: Share in the count of firms observed by CMIE

Figure 7: Share in the balance sheet size of firms observed by CMIE

The figures above show the share of listed firms in the overall CMIE database. While it is true that the coverage of unlisted firms has improved greatly, it is not as if unlisted firms were absent in earlier years. Well before the recent shift towards greater dynamism by unlisted firms, a good chunk of the firms in the CMIE database were unlisted firms. The estimates for earlier years, for unlisted firms, are based on a strong dataset. This suggests that the phenomenon that we have uncovered is not merely an artifact of changes in measurement by CMIE.

Why might this be happening?


We have discovered that after 2012, fixed investment, revenue growth and operating profit growth are weaker for listed companies when compared with unlisted companies. Why might this be the case? We may conjecture that there are three explanations at work.

The gains in liquidity of shares from listing have declined. There was a time when listing at NSE and BSE gave a quantum leap in the liquidity of the stock, while the shares of unlisted companies were quite illiquid. In the last decade, however, the working of the exchanges has faced many difficulties, and alongside this there are greater opportunities to obtain liquidity through OTC transactions in shares. Through this, the gap between the liquidity of listed vs. unlisted has gone down.

Private equity has become a major source of capital. Private equity investors have become much more important in financing firms, and large-ticket investments are now possible while the firm stays entirely private.

The burden of listing has gone up. The compliance burden imposed upon a firm by exchanges and by SEBI has become greater. The penalties meted out for non-compliance have become greater. The unpredictability in the behaviour of enforcement has become greater. For many a firm, staying unlisted is a way of avoiding the hazards of engaging with more state actors.

Implications


  1. When we see the remarkably weak operating performance of listed companies, we should be cautious before concluding that the overall performance of the Indian economy is weak. Listed companies are faring unusually poorly, and their performance constitutes an underestimate of overall performance.
    In this sense, these findings undermine the claim of Dua et al., 2013, which argued that you could construct a good output proxy for India by utilising the quarterly results of listed firms.
    As a consequence of these findings, in our business cycle measurement work (Pandey, Patnaik, Shah 2019), the only application of the net sales growth of listed companies is in identifying the macroeconomic measures which are leading vs. coincident. The performance of listed companies is not, in itself, utilised in constructing business cycle measures, as it has a downward bias in the post-2012 period.
  2. In response to the frictions imposed by the Indian state upon private persons, there is a desire to exit from business plans that induce interfacing with the Indian state. As an example, from 2007 onwards, a good deal of trading in Nifty and the Rupee -- the two largest financial products -- has left India, in response to weaknesses of financial regulation, capital controls and taxation in India.
    The phenomenon identified in this article may constitute an element of this exit: private persons are responding to the regulatory environment of SEBI and the exchanges by avoiding listing. This reflects the impact of the policy environment upon the gains from listing (what liquidity do we obtain on the exchange?) and the costs of listing (what additional burden of regulation and enforcement do we face as a consequence of being listed?).
  3. For investors, there is merit in looking beyond listed equities in order to obtain the tailwind of high growth of operating profit.

Bibliography


Dua et al., 2013, A better output proxy for the Indian economy, The Leap Blog, 21 July 2013.

Radhika Pandey, Ila Patnaik and Ajay Shah, 2019, Measuring business cycle conditions in India, NIPFP Working Paper No. 269, May 2019.

Acknowledgments 


I thank Pramod Sinha, who implemented the ideas of this article as R programs.




The author is a researcher at the National Institute of Public Finance and Policy, New Delhi.

Friday, January 03, 2020

Facial recognition technologies in India: Why we should be concerned

by Smriti Parsheera.

All around us we are seeing a surge in the adoption of facial recognition technologies (FRTs) -- biometric systems that can be used to verify or identify a person based on their facial patterns. Examples of this range from the National Crime Records Bureau's (NCRB) proposal to create a nationwide automated facial recognition system for law enforcement purposes to the Digi Yatra scheme that promotes the use of facial recognition at airports; from Facebook's auto-tagging of photographs to Chaayos's use of facial recognition for receiving payments and recording reward points.

The inalienability of a person's face and the convenience with which it can be captured make it an easy choice for satisfying the ever expanding demands of identifiability in the digital era. This is supplemented by the increased availability of digital images and videos and the widespread use of closed circuit television (CCTV) systems, all of which become the fodder for the training and deployment of facial recognition systems. However, this is also the reason why the rapid adoption of FRTs, without any accompanying checks and balances, becomes worrying at many different levels.

In a recent Data Governance Network paper we discuss the growing use cases of FRTs in India and the legal and ethical concerns around them. These concerns include the lack of transparency around the use of FRTs; the threats to privacy and other civil liberties; problems of accuracy and effectiveness; and evidence of biased outcomes. While all of this holds true for the use of FRTs by the government as well as private entities, the imbalance of power between the citizen and the state and the likely consequences of its abuse make it particularly relevant to question the use of FRTs for law enforcement purposes.

Functions and use cases of FRTs

Most of the well known use cases of FRTs can be classified into four buckets based on their underlying functions.

The first function is that of identity verification -- checking if a person really is who they claim to be. For instance, in January 2018, the Unique Identification Authority of India (UIDAI) announced that it would allow the use of FRT as one of the modes of authentication under the Aadhaar Act. Through subsequent circulars, the UIDAI also mandated telecom service providers to start undertaking face authentication of their subscribers. While, following the Supreme Court's verdict in the Puttaswamy case, it is no longer possible for the government to mandate Aadhaar based face authentication by private entities like banks and telecom companies, the possibility of it being used by the government for the distribution of welfare benefits remains very real.

The use of FRTs for purposes like voter identification, conducting know your customer (KYC) verifications and attendance in schools and offices are some of the other use cases that would fall under this head. For example, Delhi's Indian Institute of Technology has a home-grown solution called Timble that is used to mark student attendance. Proposals are also underway to roll out similar systems to mark the attendance of young school going students in Tamil Nadu's government schools and for all government teachers in the state of Gujarat.

The next function is that of access control, which basically builds on the identity verification function to assess whether a person is an authorised user of a particular space or service. Applications that pursue this function include biometric unlocking of mobile devices, entry into airports, homes or other premises and authorising withdrawals from ATMs. For instance, in 2018, the Ministry of Civil Aviation launched the Digi Yatra project to create a facial biometrics based boarding system to be rolled out at various Indian airports. Testing under the project, which is currently voluntary, has already been going on at the Hyderabad, Bengaluru and Delhi airports. Similar systems have already been adopted at airports in many other parts of the world.

The third broad category, which also evokes the strongest concerns, is that of security and surveillance, including use of FRTs for law enforcement purposes. As per the AI Global Surveillance Index released by the Carnegie Endowment for International Peace, 85 percent of the countries that they studied (64 out of 75) were found to be using facial recognition systems for surveillance purposes (Feldstein, 2019). Examples of this include the Skynet and the Sharp Eyes projects in China, live facial recognition systems being tested by the London Metropolitan Police and NCRB's proposed National Automated Facial Recognition System (NAFRS).

As per the tender document released by NCRB in June, 2019, NAFRS is meant to be used for a range of purposes, including the identification of criminals, missing children and persons, and unidentified dead bodies. The images used for these purposes may come from the Crime and Criminal Tracking Network System (CCTNS), passport authorities, the Central Finger Print Bureau or the government's missing children tracking portal. The list also contains a sweeping category for "any other image database available with police / other entity". This suggests that virtually every image database in the country could potentially be linked to the system.

A clarification issued by the NCRB in response to a legal notice sent by the Internet Freedom Foundation (IFF) suggests that the scope of the project may be slightly narrower than what is indicated in the tender document (IFF, 2019). However, even taking this at face value, the design and scale of the project signal a clear likelihood of gradual mission creep once such a system is put in place.

In addition to NCRB's proposed system, several state police departments are already deploying facial recognition systems. This includes reports about the use of FRTs by the Delhi Police and the Hyderabad Police, and under the Punjab Artificial Intelligence System.

Finally, FRTs also serve a number of commercial and business efficiency related functions. These include photo tagging on social media apps, photo filter functions on chat apps and various uses in the retail and hospitality sectors. For instance, digital signage systems can predict a gazer's age and gender and accordingly display advertisements and content suited to them. Facial detection and analysis also serve as building blocks for other tools like emotion or sentiment analysis, which can offer useful applications in the marketing and entertainment sectors.

What are the main concerns?

Most of the use cases of FRTs, in India as well as globally, can be tied to the pursuit of greater convenience (contactless payments and shorter queues at airports), efficiency (reduced airport staff), security (scanning crowds for "suspicious" persons), or accountability (checking for teacher absenteeism). While the technology could possibly help in achieving some of these objectives, this is often not established through rigorous and transparent testing. Moreover, the use of FRTs comes at a significant cost, which is not being accounted for by the developers and adopters of such systems.

The primary focus of most of the technical research on face recognition has been on improving the accuracy and efficiency of the technology, that is, on minimising false negatives and false positives. While both these metrics are useful indicators for evaluating the effectiveness of machine learning systems, their actual relevance has to be seen in light of the context in which such technologies are being deployed. For instance, false negatives in a system like Aadhaar would lead to the exclusion of legitimate beneficiaries, while a false positive in the surveillance and law enforcement context can subject individuals to unwarranted investigation, embarrassment and harassment (Marda, 2019).
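
To make the two metrics concrete, here is a minimal sketch of how they are computed from a system's decisions against ground truth; the data below is made up purely for illustration.

```python
def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate).

    y_true: 1 if the pair is a genuine match, 0 otherwise.
    y_pred: 1 if the system declared a match, 0 otherwise.
    """
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Toy data: 3 genuine pairs, 5 impostor pairs.
y_true = [1, 1, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
fpr, fnr = error_rates(y_true, y_pred)  # fpr = 0.2, fnr = 0.33 (approx.)
```

The same numerical error rate can thus carry very different human costs depending on whether the deployment is welfare delivery or policing.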

However, even if a facial recognition system were to achieve perfect accuracy, that would not make an obvious case for its adoption. This is because the use of FRTs has many other far reaching implications, from a legal, ethical and societal perspective, which need to be taken into account while determining whether and to what extent this technology should be deployed. Following are some of the main areas of concern.

Transparency -- In most situations there is a complete lack of information about when, or the specific purposes for which, FRTs are being deployed. Individuals affected by these systems also do not have access to meaningful information about the sources of training data that were used to develop the system, the sources of gallery images, the criteria for the selection of a particular vendor or technology partner, the accuracy rates of the system and the privacy and security protocols being followed. Transparency about these aspects is a necessary step for enabling independent testing and audits of facial recognition systems.

Information of this sort can become particularly necessary when facial analysis tools are being used to determine whether a person's face matches with someone who is suspected of committing an offence. Civil society groups in the United States are currently contesting a claim before the Florida Supreme Court in a case where a person was convicted of the illegal sale of drugs based on the results of a facial recognition algorithm. The accused was the first among a list of probable matches identified by the algorithm, with "one star of confidence" that it had generated the correct match. The person was, however, not given access to the basis on which this determination was made or to the details of the other individuals who were identified as potential matches.
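
The output at issue in such cases is a ranked candidate list rather than a single yes/no answer. A minimal sketch of this 1:N identification step follows; all names here are illustrative, and real systems differ in how they score and cut off candidates.

```python
import numpy as np

def identify(probe: np.ndarray, gallery: dict, top_k: int = 5):
    """1:N search: rank gallery entries by similarity to the probe image.

    gallery maps a person's identifier to their embedding vector.
    Returns the top_k candidates with their scores, best first.
    """
    def sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(person, sim(probe, emb)) for person, emb in gallery.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```

Disclosure of the full candidate list and the underlying scores, rather than only the top hit, is what would allow an accused person to contest how decisive the "match" really was.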

Privacy and civil liberties -- The permanence of one's face and its intrinsic link with personal identity make facial recognition a powerful tool for identification. The fact that a person's face is, in a large number of cases, exposed at all times, and that their images are available in various government and private databases, makes it particularly difficult to exercise agency over the use of one's facial data. Some examples of privacy invasive uses of FRTs include their adoption by the Chinese Government for the profiling and tracking of Uighur Muslims and the integration of FRTs in body worn cameras used by police forces in many parts of the world.

Widespread use of FRTs can also create a chilling effect on other rights, like the rights to free movement, assembly and speech. Visuals of masked protesters in Hong Kong taking down smart lamp posts and surveillance cameras are symbolic of this tussle between the state's use of surveillance technologies and the counter-measures being resorted to by protesters. As governments choose to crack down on such forms of resistance through "anti-mask initiatives", this affects not only the rights of the protesters but also of those who may adopt facial coverings for various religious, cultural or practical reasons.

Concerns about the overreach of FRTs are, however, not limited to autocratic regimes or to government related uses. Private sector use of facial recognition also poses many significant threats to privacy and security. For instance, researchers have demonstrated how a person's face can easily be used as a personal identifier for pooling together information about them from multiple online sources -- like dating websites and social media portals (Acquisti, Gross, and Stutzman, 2014). Therefore, once a person's images are available online, whether voluntarily or as a result of someone else's actions, FRTs can make it almost impossible for the person to reveal their true identity in one context while remaining anonymous in others.

The security of devices that rely on facial unlocking features can become another point of vulnerability for user privacy. The relevance of the differing facial security standards on different smartphones was brought to light in a study which found that 26 of the 60 smartphones tested were vulnerable to a "photo hack" -- the device could be unlocked using the phone owner's photograph instead of the real person (Kulche, 2019). This illustrates how, given the user profile and characteristics of the Indian market, reliance on facial unlocking techniques on low-end devices could create increased vulnerabilities for consumers.

Accuracy and reliability -- It is a well acknowledged problem in the field of facial recognition that the results of a system are only as good as the quality of the images being run through it. The results are therefore prone to errors on account of differences in the conditions of the images being compared, in terms of appearance, expression, age, lighting, camera angle, etc. This is particularly true where the technology is applied in non-cooperative settings, for instance, using images gathered from a CCTV camera or for real-time biometric processing. A study of the live facial recognition system being tested by the London Metropolitan Police found, for example, that of the 42 potential matches identified by the system only 8 could eventually be verified as correct, a success rate of just about 19 percent (Fussey and Murray, 2019).

Having said that, it is also important to acknowledge that the technical capabilities of facial recognition systems have been improving over time. For instance, 3D facial recognition systems have already managed to overcome many of the technical issues faced by prevalent 2D systems. As per the National Institute of Standards and Technology, the "best performing algorithms" in its 2018 Face Recognition Vendor Test (FRVT) showed significant improvements over the 2015 test results, offering "close to perfect recognition" (Grother, Ngan, and Hanaoka, 2018). Yet, there still remain significant variations in the results among different algorithms and developers, with recognition error rates in a particular scenario ranging from "a few tenths of one percent up to beyond fifty percent".

Bias and discrimination -- The training data used for FRTs also plays a major role in determining the effectiveness of their outcomes. Buolamwini and Gebru, 2018 have demonstrated how commercially available facial analysis tools offered by companies like Microsoft, IBM and Face++ showed much higher error rates for women with darker skin tones. This difference arose primarily on account of the under-representation of data belonging to this group in the training dataset. Similarly, a study done by the American Civil Liberties Union using Amazon Rekognition found that nearly 40 percent of the false face matches between members of the US Congress and a database of arrested persons were of people of colour, although only about 20 percent of the Congress members actually belong to this demographic group (Snow, 2018). While most of this research has emanated from the US context, it is easy to draw parallels with the challenges that would arise in the deployment of similar systems in India's multi-racial, multi-ethnic set up.
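
The kind of disaggregated evaluation that surfaced these disparities is straightforward to express in code. A hedged sketch follows: it computes the false match rate separately for each demographic group, in the spirit of the audits cited above; the record format is invented here for illustration.

```python
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, is_genuine_pair, system_said_match).

    Returns the false match rate for each group, i.e. the share of
    impostor pairs that the system wrongly declared to be matches.
    """
    false_matches = defaultdict(int)
    impostor_pairs = defaultdict(int)
    for group, genuine, matched in records:
        if not genuine:
            impostor_pairs[group] += 1
            if matched:
                false_matches[group] += 1
    return {g: false_matches[g] / n for g, n in impostor_pairs.items() if n}
```

A single aggregate accuracy figure can mask exactly the gaps that such a per-group breakdown reveals, which is why these audits report disaggregated numbers.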

Research of this nature is valuable in that it can nudge appropriate fixes to the training data and algorithms. However, it has also been rightly pointed out that ensuring better demographic representation in data sets does not do much to solve the larger issues of injustice in the institutional contexts within which facial recognition is being employed (Hoffmann, 2019). For instance, Keyes, 2018 challenges the very premise of deploying automated gender recognition systems, which tend to reflect the traditional models of gender as being binary, physiologically based, and immutable. This works to the specific detriment of transgender persons, who may not fit into these traditionally defined gender constructs.

Limitations of the supporting ecosystem -- Another important factor, particularly in the Indian context, comes from the realities of the surrounding ecosystem within which technologies like FRTs are sought to be introduced. For instance, the mandatory use of FRTs for marking attendance in rural schools would have to account for real world factors like power outages, network downtime, availability of devices and prevailing power structures in the local community.

While these issues go beyond the technical capabilities of FRTs, or even the legal and ethical implications around them, it would be dangerous to adopt such technological solutions without accounting for these realities. Similar concerns have also come up in the context of biometric authentication using Aadhaar, and would continue to remain relevant if facial recognition were to be deployed in this context.

FRTs under the draft PDP Bill

Given the variety of concerns raised by the deployment of FRTs, it becomes particularly problematic that all of these applications are taking place in the absence of a robust data protection law in India. While the current Information Technology Act, 2000 and the rules under it do classify biometric data as "sensitive personal data" and afford certain protections to it, it is widely acknowledged that the scope and enforcement of the law remain grossly inadequate. Moreover, the obligations under the present law apply only to "body corporates", hence excluding most instances where government agencies interact with biometric facial data. It is also worrying that there has been no public consultation on the adoption of FRTs in any of the contexts discussed here, nor any systematic evaluation of the costs and benefits of using this technology.

The current draft of the PDP Bill that was recently introduced in the Lok Sabha seeks to address some of these concerns by bringing the State, along with private actors who deal with the personal data of individuals, within the scope of the proposed law, labelling them as "data fiduciaries". The Bill requires the "explicit consent" of the individual for any processing of sensitive personal data, including biometric data. However, it also allows for such processing to take place under other grounds, such as an authorisation under law or a court order or judgment.

We have already seen an example of such an order: in April, 2018, the Delhi High Court directed the Delhi Police to deploy FRTs for tracing missing children. This action reportedly resulted in the identification of close to 3,000 missing children by matching their images against a photo database of over 45,000 children living in various children's homes. While this was certainly a positive outcome, the episode also leaves us with several unanswered questions. For instance, what happens to the data of the children who were part of this exercise but whose images did not match those of the missing children? Will their data be retained and used for other purposes? Could this include use in future investigation of criminal cases?

The other provisions of the draft Bill that are specifically applicable to biometric and sensitive data include a requirement of data protection impact assessment for large scale processing of biometric data by significant data fiduciaries and a requirement that a copy of all sensitive personal data be stored on servers located in India. Further, the Bill authorises the government to ban the processing of certain forms of biometric data, except as permitted by law. However, there is no guidance on the actors against whom, and the circumstances in which, this power could be exercised.

While many parts of the current draft Bill retain the recommendations made by the Srikrishna Committee's draft that was submitted to the Government in July 2018, we see a sweeping departure from the Committee's recommendations when it comes to the processing of personal data for surveillance and law enforcement purposes.

The current draft of the Bill contains a fairly broad set of exemptions for the processing of personal data for the purposes of prevention or investigation of any offence or contravention of any law. Unlike the earlier version of the Bill, this exemption is not subject to the requirement of fair and reasonable processing of the data by the authorities. It also does not provide that such processing should be "necessary and proportionate" for achieving the intended purpose.

Another important safeguard suggested by the Srikrishna Committee was that any data processing involving a victim or a witness would ordinarily have to be done in accordance with the provisions of the law, including requirements like consent, purpose and use limitation, etc., unless this would prejudicially affect the case. By removing this requirement the current draft offers a much broader canvas to law enforcement agencies. In addition to the exemption of certain types of processing, the PDP Bill also allows the government to completely exempt particular agencies from the applicability of the law on grounds such as security of the state, public order, etc.

To put these exemptions in context, suppose that an order under Section 144 of the Criminal Procedure Code, 1973 (CrPC) is imposed in a particular area, directing individuals not to assemble in groups. Any person engaging in a peaceful protest could then find themselves in violation of the order, and the police may invoke the exemption under the PDP Bill to deploy facial recognition tools to identify the protesters. Given the wide scope of the facial recognition system being developed by the NCRB, and the sweeping powers already available to the police under Section 91 of the CrPC to call for any "document or other thing" for investigation purposes, the PDP Bill could effectively provide a free pass to the authorities to conduct mass deployment of FRTs on the protesters. This may include comparing the available images against records gathered from a range of sources like CCTVs, student IDs, driving licenses, passport records, etc. This creates new barriers to the exercise of people's democratic right to protest.

In sum, the present draft of the PDP Bill offers wide ranging exemptions to law enforcement agencies, and can be regarded as effectively strengthening rather than checking the use of FRTs by the state.

Way forward

Facial biometric data is one of the most sensitive categories of personal data and therefore any adoption of this technology, either by state agencies or by the private sector, necessarily has to be preceded by the adoption of a robust data protection law. Assuming that a data protection law is brought about along the lines of the PDP Bill, it would determine the basic level of protection for the use of facial biometrics, including requirements relating to explicit consent, transparency obligations, purpose limitation and other usage restrictions.

However, the proposed data protection framework will not secure the degree of accountability that we need from the range of stakeholders participating in the implementation of FRTs. Firstly, a data protection law is not designed to compel the developers and vendors of facial recognition systems (as opposed to their users) to ensure transparency about their underlying models, the training data being used, false positive and negative rates and other more granular information. Yet, information of this sort is necessary for there to be any independent checks and analysis of the accuracy, reliability and biases of these systems. We therefore need to look beyond data protection laws to find meaningful ways of ensuring transparency and public disclosure on the development and use of facial recognition systems.
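
To make the disclosure idea concrete, here is an illustrative sketch of the kind of structured record a vendor could be required to publish. The schema is invented for this post, mirroring the transparency gaps listed above; it is not a real or proposed standard.

```python
from dataclasses import dataclass, field

@dataclass
class FRTSystemDisclosure:
    """Illustrative vendor disclosure for a deployed facial recognition system."""
    vendor: str
    intended_use: str                 # e.g. "1:1 verification at boarding gates"
    training_data_sources: list       # provenance of the training images
    gallery_sources: list             # where the reference images come from
    false_positive_rate: float        # measured on a named public benchmark
    false_negative_rate: float
    per_group_error_rates: dict = field(default_factory=dict)  # disaggregated results
    data_retention_policy: str = "unspecified"
    last_independent_audit: str = "none"
```

Even a simple record of this kind would give auditors, courts and affected individuals something concrete against which to test a vendor's claims.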

Secondly, it must be noted that the PDP Bill only speaks to a few of the concerns posed by the use of FRTs, namely issues of data privacy and, to some extent, transparency. However, the broader privacy concerns posed by the technology, its accuracy limitations and biased outcomes still remain. Here it is useful to reiterate that with ongoing advances in technology, it is likely that many of the accuracy and reliability related concerns around FRTs might be overcome. However, satisfactory technical performance of such systems is only a necessary, but not sufficient, condition for their deployment. The use of FRTs has to be supported, in all cases, by a robust framework for gauging the suitability and proportionality of applying the technology in any given context and measuring the accompanying risks.

Finally, the wide ranging exemptions available to state agencies under the PDP Bill pose many specific concerns when it comes to the use of intrusive technologies like FRTs. In allowing for the sweeping application of FRTs for law enforcement purposes, the PDP Bill essentially condones the most pervasive and worrying use cases of FRTs. To be clear, such a use would still fall foul of the tests laid down by the Supreme Court in the Puttaswamy right to privacy decision. However, the language in the Bill lifts the statutory burden that should have been placed on law enforcement agencies to ensure proportionate application in each and every case and places the burden on petitioners to challenge the constitutionality of the application before a court of law.

References

Acquisti, Gross, and Stutzman, 2014: Alessandro Acquisti, Ralph Gross and Fred Stutzman, Face recognition and privacy in the age of augmented reality, Journal of Privacy and Confidentiality, 6(2), 2014.

Buolamwini and Gebru, 2018: Joy Buolamwini and Timnit Gebru, Gender shades: Intersectional accuracy disparities in commercial gender classification, Proceedings of Machine Learning Research, 81:1–15, 2018.

Feldstein, 2019: Steven Feldstein, The global expansion of AI surveillance, Carnegie Endowment for International Peace, 17 September, 2019.

Fussey and Murray, 2019: Pete Fussey and Daragh Murray, Independent report on the London Metropolitan Police Service’s trial of live facial recognition technology, The Human Rights, Big Data and Technology Project, July, 2019.

Grother, Ngan, and Hanaoka, 2018: Patrick Grother, Mei Ngan and Kayee Hanaoka, Ongoing face recognition vendor test (FRVT) Part 2: Identification, National Institute of Standards and Technology, November, 2018.

Hoffmann, 2019: Anna Lauren Hoffmann, Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse, Information, Communication & Society, 22(7), 2019.

IFF, 2019: Internet Freedom Foundation, NCRB finally responds to legal notice on facial recognition, we promptly send a rejoinder, 8 November, 2019.

Keyes, 2018: Os Keyes, The misgendering machines: Trans/HCI implications of automatic gender recognition, Proceedings of the ACM on Human-Computer Interaction, November, 2018.

Kulche, 2019: Peter Kulche, Facial recognition on smartphone is not always safe, Consumentenbond, 15 April, 2019.

Marda, 2019: Vidushi Marda, Facial recognition is an invasive and inefficient tool, The Hindu, 22 July, 2019.

Snow, 2018: Jacob Snow, Amazon’s face recognition falsely matched 28 members of congress with mugshots, American Civil Liberties Union, 28 July, 2018.


The author is a Fellow at the National Institute of Public Finance and Policy, New Delhi. She would like to thank Ajay Shah, Ambuj Sagar, Apar Gupta, Christopher Slobogin, Elizabeth Coombs, Salil Tripathi, and an anonymous peer reviewer for valuable inputs and comments on the Data Governance Network paper titled Adoption and regulation of facial recognition technologies in India: Why and why not?, which forms the basis for this blog post.