Detecting the Risks of Misinformation Produced by Generative AI
Using NEC Hallucination Mitigation
Featured Technologies October 7, 2025

Hallucinations are becoming increasingly problematic as the use of generative AI large language models (LLMs) grows. Without expert knowledge, it is difficult for users to determine whether the output of an LLM is correct, which poses a significant barrier to using LLMs in the workplace, where correct output is important. In addition, government regulations that govern the use of AI, such as the European Union’s AI Act, are being considered and implemented around the world, increasing interest in developing AI methods and systems that are safe, secure, and used appropriately.
To promote the use of safe and secure generative AI, NEC has developed an application that detects LLM hallucinations in real time. The application, which can be used with most common LLMs, helps provide a safer framework for using LLM output in science, industry, and governance, where content accuracy is important.
To learn more about this technology, we spoke with the researchers at NEC Laboratories Europe who developed it and with the NEC director in charge of promoting NEC’s generative AI business.
Detecting hallucinations in real time with high accuracy and promoting the use of generative AI in real-world work

Manager & Chief Research Scientist
NEC Laboratories Europe
― Please tell us about the recently announced hallucination mitigation technology.
Lawrence: NEC hallucination detection is an application that detects hallucinations mixed into the output of generative AI and prompts its users to check the output. Generative AI can create a summary from data. However, as long as risks such as hallucinations remain, as they do today, generative AI cannot be effectively used in tasks where mistakes could lead to catastrophic consequences, because users would have to verify the entire text on their own.
In response, NEC hallucination detection compares the generated text with the original text, identifies the sources of information, and points out any areas that might contain hallucinations. It can detect hallucinations in real time at the sentence level, which enables users to check only the relevant areas and efficiently reduce the risk of hallucinations. The application can also indicate why it identified an area as a hallucination, providing explanations such as “It contains information that is not present in the original text.”
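To make the idea concrete, here is a minimal sketch of sentence-level checking, assuming a source document and a generated summary. The function names are hypothetical, and a naive word-overlap score stands in for NEC’s trained detection models, purely for illustration:

```python
# Minimal sketch of sentence-level hallucination flagging. A crude
# word-overlap score stands in for the learned support check used by the
# real system; everything here is illustrative.
import re

def split_sentences(text: str) -> list[str]:
    # Crude sentence splitter; a production system would use a proper NLP library.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def support_score(source: str, sentence: str) -> float:
    # Stand-in for a trained model: fraction of the sentence's words
    # that also appear somewhere in the source text.
    source_words = set(re.findall(r"\w+", source.lower()))
    words = re.findall(r"\w+", sentence.lower())
    if not words:
        return 1.0
    return sum(w in source_words for w in words) / len(words)

def flag_hallucinations(source: str, generated: str, threshold: float = 0.6) -> None:
    # Check each generated sentence against the source and flag weak support.
    for sentence in split_sentences(generated):
        score = support_score(source, sentence)
        if score < threshold:
            print(f"CHECK ({score:.2f}): {sentence}")
            print("       Reason: contains information not present in the source text.")
        else:
            print(f"OK    ({score:.2f}): {sentence}")

source_doc = "The meeting was held on Tuesday. Sales rose 4% in Q2."
summary = "Sales rose 4% in Q2. The CEO announced a merger."
flag_hallucinations(source_doc, summary)
```

Because each sentence is scored independently against the source, only the flagged sentences need human review, which is where the time savings described below come from.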
Syed: In tests that we conducted, hallucination detection reduced checking time by roughly half compared to cases where the function was not used. Moreover, we verified that it significantly improved the accuracy of checking LLM output. For example, when users had to check the entire text on their own, they could only reliably verify the content with an accuracy of about 55% to 60%. When this function was used, checking accuracy improved to between 85% and 90%.
Gashteovski: This functionality can be broadly used regardless of the industry or application. For example, it could be applied in medicine when creating an electronic medical record from interview information or when pre-checking prescribed medications for patients. Moreover, we have already started applying this technology within the banking industry. We believe that this function can also be used in the legal field and we are making preparations to do so.
Lawrence: For example, using NEC hallucination detection could significantly reduce the time taken to prepare legal documents with generative AI. By using hallucination detection, companies could compare draft legal documents created with generative AI with past contracts and internal company rules to point out any errors, significantly streamlining contract drafting work.

Director
AI Technology Services Business Division
NEC Digital Platform Services Business Unit
Togawa: The essence of this technology is that it compares sentences and analyzes the correlation between them. Therefore, it can also compare facts and rules while simultaneously checking for hallucinations. We believe that this technology will be particularly well suited to industries such as medicine, finance, and law, where mistakes can lead to serious consequences.
For example, there are strict legal guidelines regarding what can and cannot be said when promoting or selling insurance or financial products. Hallucination detection can be utilized to help companies within these industries comply with regulatory guidelines. In the AI Technology Services Business Division of NEC, we are developing AI solutions that meet the AI needs of our global clients. There is a strong demand within Japan and globally for generative AI solutions and we are working with NEC Laboratories Europe to develop the technology for these.
Developing NEC data enhancement technology to implement sentence-level detection

Senior Research Scientist
NEC Laboratories Europe
― What kind of technology did you use to develop this application?
Gashteovski: The greatest advantages of this technology are its accuracy in detecting the provenance of generated information at the sentence level and the speed at which it achieves real-time detection. To achieve this, NEC Laboratories Europe developed its own technology for generating training data. By synthetically generating a large, high-quality training dataset, we significantly improved the performance of the technology. We generated the training data by leveraging the generative properties of current LLMs, rather than using straightforward discriminative approaches for labeling data.
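As a rough illustration of this generative labeling idea, the sketch below prompts an LLM to produce a question, an answer, and the supporting sentence numbers for a given context, then derives an "unsupported" negative by attributing the same answer to the wrong sentences. `call_llm` is a hypothetical placeholder (returning a canned response here so the sketch runs), and the prompt structure and labels are assumptions, not NEC’s actual pipeline:

```python
# Hedged sketch of synthetic training-data generation for context attribution.
import json
import random

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns a canned
    # response so the sketch runs end to end.
    return json.dumps({"question": "By how much did sales rise?",
                       "answer": "Sales rose 4% in Q2.",
                       "supporting_sentences": [1]})

GEN_PROMPT = """Given the numbered context sentences below, write a question,
a faithful answer, and the list of sentence numbers supporting the answer.
Return JSON with keys: question, answer, supporting_sentences.

Context:
{context}
"""

def make_examples(context_sentences: list[str]) -> list[dict]:
    numbered = "\n".join(f"{i}: {s}" for i, s in enumerate(context_sentences))
    record = json.loads(call_llm(GEN_PROMPT.format(context=numbered)))
    positive = {**record, "label": "supported"}
    # Negative example: attribute the same answer to the wrong sentences,
    # so a detector can learn what an unsupported attribution looks like.
    wrong = [i for i in range(len(context_sentences))
             if i not in record["supporting_sentences"]]
    negative = {**record,
                "supporting_sentences": random.sample(wrong, min(2, len(wrong))),
                "label": "unsupported"}
    return [positive, negative]

examples = make_examples(["The meeting was held on Tuesday.",
                          "Sales rose 4% in Q2."])
print(json.dumps(examples, indent=2))
```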
This technology also addresses the issue of limited data in different languages and, in principle, can support any language. Currently, it has been verified to work with English and Japanese. The technology is described in the research paper “On Synthesizing Data for Context Attribution in Question Answering” (Gorjan Radevski et al., 2025), which was presented at an international conference for natural language processing held in July and August of this year.
Lawrence: The AI model for the application was trained in a two-step process: First, we used a large-scale LLM to generate data. Second, we fine-tuned a smaller model with the generated data, because smaller models are faster and less expensive to use. Our objective is a high-accuracy hallucination detection application that runs on small models and can be deployed on-premises at customer locations.
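A minimal sketch of that second step might look like the following, assuming the synthetic examples pair a source passage with a generated claim and a supported/unsupported label. The model choice, column names, and hyperparameters are illustrative assumptions using the Hugging Face `transformers` and `datasets` libraries, not NEC’s actual setup:

```python
# Hedged sketch: fine-tune a small classifier on synthetic
# (source, claim, label) pairs produced in step one.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "distilbert-base-uncased"  # a small, cheap-to-serve encoder (assumed)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Two toy examples; label 0 = supported, 1 = unsupported (hallucinated).
data = Dataset.from_list([
    {"source": "Sales rose 4% in Q2.", "claim": "Sales rose 4% in Q2.", "label": 0},
    {"source": "Sales rose 4% in Q2.", "claim": "The CEO announced a merger.", "label": 1},
])

def tokenize(batch):
    # Encode source and claim as a text pair for the classifier.
    return tokenizer(batch["source"], batch["claim"],
                     truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="halluc-detector",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```

A compact encoder like this can be served on-premises and scores each sentence quickly, which is what makes real-time detection feasible.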
Lawrence: We have been researching natural language processing and explainable AI for over six years, and there are very few organizations in the world that have researched both topics for so long. I believe that the lateral know-how cultivated over years of research and development has led to this technology.
Togawa: I agree. You will find very few companies in the world with such a robust research division that has worked in the AI field for many years on enterprise (large company and government) applications. We believe that ensuring explainability, safety, and security will become an essential element, especially for enterprise applications. By combining the advanced technologies created by NEC laboratories with the know-how to apply them, we hope to expand our AI business so that we can provide the appropriate products and services to each customer, including companies and government offices.
Continuing development with a focus on automatic correction of hallucinations

Senior Research Engineer
NEC Laboratories Europe
― How do you plan to expand NEC hallucination detection technology going forward?
Syed: As the next step in the evolution of hallucination detection, we would like to display the level of hallucination risk. Currently, the interface indicates both low and high risks in the same way to the user. However, if we can display that information together with the risk level, this should greatly improve the efficiency of users verifying LLM output.
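As a toy illustration of what such a graded display might look like, the sketch below buckets a hypothetical per-sentence hallucination score into risk bands; the thresholds and wording are assumptions, not the planned interface:

```python
# Toy sketch of a graded risk display: a hypothetical per-sentence
# hallucination score is bucketed into risk bands so users can prioritize
# what to verify first. Thresholds and labels are illustrative assumptions.
def risk_level(score: float) -> str:
    if score >= 0.75:
        return "HIGH risk - verify before use"
    if score >= 0.40:
        return "MEDIUM risk - spot-check against the source"
    return "LOW risk - likely supported"

for sentence, score in [("The CEO announced a merger.", 0.91),
                        ("Sales rose 4% in Q2.", 0.08)]:
    print(f"[{risk_level(score)}] {sentence}")
```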
As a further evolutionary step, we would like NEC hallucination detection to not only detect hallucination risk but also automatically correct it. We would like to create a function that can make such corrections quickly.
Lawrence: In addition, we believe that this technology can be used to detect fake news, so we hope to support such applications in the future.

Togawa: The first hurdle to expanding the business use of generative AI is guaranteeing that generative AI is reliable for users around the world. To achieve this, I believe that we need to thoroughly investigate how we can enable our customers to use generative AI in a safe and secure manner. By identifying customer pain points, we can continue developing not only this function but also other AI technologies and products, using different approaches, to promote the safe and secure use of generative AI in business.


This technology compares text output by generative AI large language models with the original data to indicate the risk of hallucinations to users. What makes NEC hallucination detection unique is that it not only shows source documents or websites but also identifies sources with sentence-level accuracy, presenting evidence of hallucinations in real time. The function is built on data enhancement technology for hallucination-detection training, uniquely developed by NEC and explained in the paper “On Synthesizing Data for Context Attribution in Question Answering” (Gorjan Radevski et al., 2025).
※The information posted on this page is current as of the time of publication.