NEC’s Commitment to Its New “NEC Group AI and Human Rights Principles” Policy
While Artificial Intelligence (AI) can enrich our lives, it may also lead to human rights issues such as the invasion of privacy and/or discrimination depending on how it is utilized. Anticipating and dealing with these issues is now a source of lively debate in government, academia, and business. At NEC, we have focused particular attention on these issues and in April 2019 finalized a set of principles — “NEC Group AI and Human Rights Principles” — which we introduce in this paper, outlining both the background and our commitment to putting these principles into practice.
1. Introduction
Rapid advances in AI technology that promise potential solutions for a variety of pressing social issues have prompted excitement in the public and private sectors alike. To take just one example, applying AI to loan reviews could significantly reduce review time, while making it easier for people with atypical credit situations — such as the self-employed — who might have been denied under a conventional review to obtain loans. On the other hand, AI systems have also been found to suffer from problems of their own, such as built-in bias. In one recent incident, the AI system used by a leading e-commerce company to assess job applications was found to be giving lower evaluations to female applicants’ résumés for particular positions.
We will look more closely at these issues in section 2. In section 3 we will review related laws and regulations, as well as guidelines adopted by government offices and academic societies, and voluntary efforts conducted by private corporations. Finally, in section 4, we will discuss NEC’s own approach as codified in “NEC Group AI and Human Rights Principles”.
2. Issues in AI
In the “Report on Artificial Intelligence and Human Society”1) published in 2017 by the Cabinet Office’s Advisory Board on Artificial Intelligence and Human Society, potential issues with AI are examined from six perspectives — ethical, legal, economic, educational, social, and R&D. Ethical issues — which include the concept of human rights — are regarded as the most critical. Ethics here refers not to a sense of morality but to a sense of value — fundamental values established slowly and with great effort over the course of human history, such as preservation of humanity, protection of privacy, and assurance of equality and safety2). To give one example, Professor Tatsuhiko Yamamoto of Keio University Law School argues that letting AI learn from data sets that reflect traditional prejudice and bias results in that bias being inherited by the resulting algorithms. This can in turn create a problem he calls “virtual slums”: because the algorithms are black boxes, the bias remains invisible, and the unfair profiling it produces can result in lower estimated credit scores. Once established, these lower estimates are very difficult to overturn3).
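To make the mechanism Yamamoto describes more concrete, the following is a minimal, hypothetical sketch in Python (entirely synthetic, and not taken from any NEC system or the cited studies): a model trained on historically biased approval decisions reproduces the disparity between groups even when the protected attribute itself is withheld, because a correlated proxy feature leaks it.

```python
# Minimal, hypothetical sketch (synthetic data only; not NEC code).
# It shows how a model trained on historically biased decisions can
# reproduce that bias even when the protected attribute is withheld.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
proxy = group + rng.normal(0, 0.3, n)      # correlated feature that leaks group
merit = rng.normal(0, 1, n)                # legitimate signal

# Historical approvals that penalised group 1 regardless of merit.
past_approval = (merit - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the protected attribute; the proxy still carries the bias.
X = np.column_stack([merit, proxy])
model = LogisticRegression().fit(X, past_approval)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The printed rates differ markedly by group, even though group membership
# was never an explicit input to the model.
```

Because the disparity survives only through the proxy feature and the model is a black box to the people it scores, the bias is precisely the kind that is difficult to detect or contest.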
Addressing these issues requires us to go beyond mere legal compliance. Instead, various factors — including privacy, human rights, and social acceptability — must be taken into consideration. Nor should such problem solving be limited to AI vendors actually engaged in the development and marketing of AI; rather, AI-related issues must be analyzed, understood, and responded to throughout the entire supply chain — including by individuals and organizations that provide AI-based services (Fig. 1).
![](/en/global/techrep/journal/g19/n01/images/190103_01.png)
3. Laws, Regulations, and Guidelines Governing AI and Efforts to Overcome AI-Related Issues
3.1 Laws and regulations
Legislation intended to protect personal data and privacy is rapidly being enacted worldwide. The EU’s General Data Protection Regulation (GDPR), which took effect in May 2018, stipulates the right to object to profiling in Article 21(1) and the right not to be subject to a decision based solely on automated processing in Article 22(1). Under the California Consumer Privacy Act (CCPA), which came into effect in January 2020, the definition of personal information also includes inferences drawn from profiling. The CCPA also protects California consumers from discrimination, such as reduced service or functionality, for exercising the rights it specifies.
3.2 Government and academic society guidelines
Government offices, academic societies, and other organizations in various countries have announced guidelines for AI and ethics. These include the Ethics Guidelines for Trustworthy AI presented by the EU’s High-Level Expert Group on AI, the OECD Principles on Artificial Intelligence, and IEEE’s Ethically Aligned Design. In Japan, the Cabinet Office introduced the “Social Principles of Human-Centric AI” in March 2019, and the Ministry of Internal Affairs and Communications released its “AI Utilization Guidelines” in August of the same year. Both documents discuss equality, transparency, individual dignity and autonomy, accountability, and privacy protection.
3.3 What businesses are doing
It has been pointed out that AI-related laws and regulations, as well as government and academic guidelines, are difficult to consult when deciding what degree of commitment is required to judge the lawfulness and appropriateness of specific products and services.
This has led many high-tech companies that possess AI technologies to formulate self-regulatory rules that businesses can follow. Starting with IT giants in the United States such as Microsoft and Google, this trend has now spread to Japanese companies (Fig. 2).
![](/en/global/techrep/journal/g19/n01/images/190103_02.png)
4. NEC’s Approach
4.1 “NEC Group AI and Human Rights Principles”
NEC possesses numerous advanced technologies, the most representative of which are the NEC the WISE suite of AI technologies and the Bio-IDiom multimodal biometric authentication brand. For society to accept these technologies, it is essential to take ethics and social acceptability into consideration. One area regarded as particularly fraught with problems is biometrics. Used for individual identification, biometrics — which includes such technologies as face recognition — has tremendous implications for human rights. Not only does it have the potential to infringe on privacy, it could also promote racial discrimination, chill freedom of expression, and enable mass surveillance of the general populace.
Against this background, we developed and announced in April 2019 the “NEC Group AI and Human Rights Principles” (hereinafter referred to as the Principles) to clarify our concept of social implementation of AI, as well as the utilization and application of personal data. Most importantly, the Principles declared that the NEC Group as a single entity would make a genuine effort to address these issues4). The Principles consist of seven main points (Fig. 3).
![](/en/global/techrep/journal/g19/n01/images/190103_03.png)
Fig. 3 NEC Group AI and Human Rights Principles.
The scope of the Principles is not restricted to deep learning AI. Rather, it is expressly stated that the Principles extend to biometrics such as face recognition and to the utilization of personal data. Above all, as the title makes clear, respect for human rights is of preeminent concern.
Moreover, the responsibility to explain is stipulated as part of our efforts to achieve transparency. In other words, the NEC Group will do its utmost to ensure that the public is properly informed about, and understands, how AI will be implemented and how personal data will be used in society. For example, when pedestrians’ images are collected to analyze pedestrian flow, notifications are posted in languages other than Japanese as well if the area attracts many foreign visitors. In addition to posting notifications at the locations where cameras are installed, we will also work to ensure that this information is disclosed in fliers advertising public events in monitored areas (Fig. 4).
![](/en/global/techrep/journal/g19/n01/images/190103_04.png)
Fig. 4 Notification included in a flier for a street event.
The Principles also specify the need for “proper utilization” so that our customers and partners use our technologies appropriately.
4.2 Inside the NEC Group
Excessive risk avoidance in areas where laws are not clear (legal gray areas) hinders innovation and causes business opportunities to be lost. With the establishment of the Principles, NEC has made respect for human rights its top priority; only by doing so can we achieve harmony between convenience and safety. While framing business-specific guidelines based on the Principles, we are establishing a structure to review individual cases and, where merited, seek the opinions of external experts. The legal and regulatory environment is already changing rapidly; San Francisco, for instance, has banned the use of face recognition technology by city departments. We make every effort to stay on top of current regulations and reflect them in our guidelines.
Maintaining respect for human rights in the deployment of AI is only one aspect of our approach. We also strive to incorporate respect for human rights into the earliest stages of product development — from initial R&D to planning and design. Raising the awareness of NEC Group employees is another key component of our commitment. External experts are regularly invited to give talks to our employees and we also hold individual seminars for our employees engaged in biometrics-related businesses. In addition, we have implemented online training programs aimed at NEC Group companies.
4.3 Outside the NEC Group
The NEC Group is also working to extend these internal efforts into external domains.
As part of an industrial-academic collaboration, for example, we are conducting research in cooperation with the Keio University Global Research Institute (KGRI). As part of this effort, we are creating a human rights and privacy checklist for use cases provided by NEC. With face recognition, for example, the checklist includes items that prohibit camera deployment in the vicinity of facilities where people generally do not desire to be recorded for privacy reasons. Similar items are also incorporated into the in-house guidelines.
We also periodically hold dialogue sessions with multiple stakeholders (mid/long-term investors, sustainability management experts, professors, lawyers, NPO staff, consumers, etc.) to discuss our “materiality,” that is, our priority management themes from an environmental, social, and governance (ESG) perspective. In the dialogue session held in April 2019, we received valuable suggestions from outside experts for practical application of the Principles (Fig. 5). In FY 2019, moreover, a Digital Trust Advisory Panel composed of outside experts on the social implementation of AI and the utilization of personal data was established, and many meaningful discussions have already been held by this group.
![](/en/global/techrep/journal/g19/n01/images/190103_05.png)
Additionally, NEC has identified “Privacy Policies and Measures Aligned with Societal Expectations” as one of its materiality themes. Our efforts in this regard are reported in the “Sustainability Report 2019”5).
5. Conclusion
Having established the “NEC Group AI and Human Rights Principles,” we have explicitly declared that respect for human rights is our top priority when it comes to new technology. At the same time, we are promoting various policies through close communication with stakeholders outside the company. We at NEC believe that promoting the utilization and application of ethical technology centered on respect for human rights, not only by us but also by our customers and partners, will contribute to the achievement of a digitally inclusive world where everyone can enjoy the benefits of digital technology free from anxiety.
- *Microsoft is a registered trademark of Microsoft Corp. in the U.S. and other countries.
- *Google is a trademark of Google LLC.
- *IBM is a trademark of International Business Machines Corporation.
- *Fujitsu is a registered trademark of Fujitsu, Ltd.
- *J.Score is a registered trademark of J.Score Co., Ltd.
- *NTT Data is a registered trademark of Nippon Telegraph and Telephone Corp.
- *All other company and product names mentioned are trademarks and/or registered trademarks of their respective owners.
References
- 1) Advisory Board on Artificial Intelligence and Human Society, Cabinet Office: Report on Artificial Intelligence and Human Society, 2017
- 2) H. Sanbe: Consideration of “AI & Ethics” in Corporate Management, Oki Technical Review, Issue 233, Vol. 86, No. 1, May 2019
- 3) T. Yamamoto (ed.): Artificial Intelligence and the Constitution of Japan, Nikkei Publishing, August 2018
- 4) NEC: NEC Group AI and Human Rights Principles, April 2019
- 5) NEC: Sustainability Report 2019
Authors’ Profiles
Manager
Digital Trust Business Strategy Division
Manager
Digital Trust Business Strategy Division
Manager
Digital Trust Business Strategy Division