Is AI Taking Over Our World?
How to keep artificial intelligence in check
Science fiction blockbusters and series over the past decades have played an enormous role in painting a bleak picture of Artificial Intelligence (AI) in our minds. We’ve all seen robots become too intelligent and decide to rule the world, or crime departments using big data to predict offences and arrest the culprits before they commit the crime.
Just how much of this is mere wild fantasy, and how disastrous could our future become if AI is left unregulated?
There has been considerable controversy over social media companies selling our biometric data to advertisers and some governments using AI to build restrictive police states. It is therefore crucial for technology companies and governments to set stringent regulations on the ethical use of AI and to protect the privacy and security of citizens.
NEC’s Future AI Research and Education Strategic Partnership Agreement
NEC has signed the Future AI Research and Education Strategic Partnership Agreement with the University of Tokyo to develop a framework for understanding the implications of AI in our everyday lives, and how the world could move forward safely with technology.
In March 2022, the joint program held its second symposium. It focused on the roles of different stakeholders in AI, with a call to action to developers, governments, companies, and users to have a shared responsibility for safe and ethical AI use.
Results from a survey about AI use in Japan were presented during this symposium. It revealed some rather interesting sentiments of users and developers:
- Once people are given clear information about an AI system, they become less wary and less hesitant to use the technology.
- Less than 50% of the respondents asked for legal regulations, possibly because of a high level of trust in the public sector and law enforcement.
- The respondents, however, did not fully trust private companies to use AI ethically.
- People were not fearful of their biometric information being collected, but were highly concerned about misrecognition and unintended use.
This implies that instead of imposing stringent rules on the technology itself, as the EU is doing, regulatory bodies should look into more precise guidelines on how the technology is used. There should be transparency and assurance that private data is not misused or shared with unauthorised third parties.
The study also showed that different population segments had varied views and levels of acceptance toward AI. This is important for government agencies looking to launch public services with AI. The older, less tech-savvy citizens might find it difficult to adapt to new technologies, and some might even find themselves having no access to any online services if they do not have a smartphone or fail to navigate the registration process.
NEC Group AI and Human Rights Principles
As a global leader in the development of AI and 5G technologies, NEC took an important step in 2018 by establishing a dedicated organisation – the Digital Trust Promotion Division – to create and promote a strategy based on Human Rights by Design. The division studies the impact of AI and the use of biometric information on society, human rights, and privacy.
Later, in 2019, the “NEC Group AI and Human Rights Principles” was formulated as a declaration of NEC’s commitment to the ethical implementation and responsible use of AI, the privacy of users, and the elimination of algorithmic biases and prejudices (such as wrongly identifying certain racial groups as less creditworthy).
The Principles focused on seven main points: Fairness, Privacy, Transparency, Responsibility to explain, Proper utilization, AI and talent development, and Dialogue with multiple stakeholders.
The Principles extend beyond self-learning AI to biometrics such as face recognition and the utilization of personal data. They also include a responsibility to explain the latest technologies to the public, clearly and simply, including how their data is being used. For example, when NEC uses cameras to collect pedestrian images for traffic analysis, the company puts out public notices and flyers in different languages to let the public know what it is doing.
Internally, NEC incorporates respect for human rights into every stage of product development — from initial R&D to planning and deployment. This is an absolute priority for NEC because the corporation believes that future technologies should not only make lives easier, but safer too. External experts are regularly invited to give talks to NEC employees, and the company also holds dedicated seminars for employees engaged in biometrics-related businesses.
Future-ready AI
Together with the University of Tokyo Future Vision Research Center, NEC is working on practical training workshops to develop a new risk chain model for deploying AI. “New risk scenarios may arise after the AI service is actually put into operation,” admits a leading expert in AI ethics from the University of Tokyo.
“Using the risk chain model, we can propose risk control methods to customers as part of service management. I believe that we can establish a mechanism to grasp legal issues and knowledge and consider issues related to AI services in advance.”
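The article describes the risk chain model only in broad strokes. As a purely illustrative sketch — not NEC's or the University of Tokyo's actual model, and with all names and structure assumed — one way to picture the idea is a risk scenario as a chain of contributing factors, where assigning a control to any link mitigates the scenario:

```python
from dataclasses import dataclass, field

@dataclass
class RiskScenario:
    """Hypothetical sketch of a risk chain: an ordered list of
    contributing factors, each of which may be assigned a control."""
    name: str
    chain: list                                   # ordered contributing factors
    controls: dict = field(default_factory=dict)  # factor -> control description

    def uncontrolled_factors(self):
        """Factors in the chain with no control assigned yet."""
        return [f for f in self.chain if f not in self.controls]

    def is_mitigated(self):
        """Treat a scenario as mitigated once at least one link is controlled."""
        return any(f in self.controls for f in self.chain)

# Example: a face-recognition misidentification scenario (factors invented
# for illustration).
scenario = RiskScenario(
    name="misidentification in face recognition",
    chain=[
        "unrepresentative training data",
        "low-confidence match accepted",
        "no human review of the decision",
    ],
)
scenario.controls["no human review of the decision"] = "require operator confirmation"

print(scenario.is_mitigated())          # True
print(scenario.uncontrolled_factors())  # remaining links still needing controls
```

Mapping risks to explicit chains like this makes it easy to see, before launch, which links of a scenario still lack a control — the kind of advance consideration of AI-service issues the quote describes.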
As guidelines continue to be drawn up alongside the ever-changing technology and its capabilities, the global population can soon expect a clearer and more comprehensive set of rules to help us move forward into an AI-driven future.