Finally, I would like to provide a brief introduction to NEC's R&D efforts in speech recognition over the years.
The history of speech technology at NEC dates back to the 1960s. In an era that saw the birth of NEC's first electronic computer (the NEAC-1101 in 1954) and a transistor-based electronic computer (the NEAC-2201 in 1958), NEC took its first step in speech recognition by developing a prototype voice typewriter in collaboration with Kyoto University. Built as a dedicated transistor-based speech recognition device, this typewriter could recognize only one sound at a time, such as "ah" or "ii", yet it was groundbreaking technology for its day. Then, in 1978, NEC came out with the DP-100, the world's first continuous speech recognition equipment. The DP-100 was NEC's first commercial product equipped with speech recognition technology, providing speech-based data input for jobs that occupy both hands, such as the sorting of packages or luggage.
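As an aside for readers curious about the underlying technique: the "DP" in DP-100 points to dynamic programming (DP) matching, the word-alignment approach developed at NEC that later became widely known as dynamic time warping. The following is a minimal illustrative sketch of that idea in Python, not NEC's actual implementation; the one-dimensional features, toy templates, and absolute-difference distance are all simplifying assumptions made for brevity.

    def dtw_distance(template: list[float], utterance: list[float]) -> float:
        """Align two feature sequences with dynamic programming and
        return the cumulative alignment cost (lower = better match)."""
        n, m = len(template), len(utterance)
        INF = float("inf")
        # cost[i][j] = best cumulative cost aligning template[:i] with utterance[:j]
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(template[i - 1] - utterance[j - 1])  # local distance
                cost[i][j] = d + min(cost[i - 1][j],      # stretch: repeat an utterance frame
                                     cost[i][j - 1],      # compress: skip an utterance frame
                                     cost[i - 1][j - 1])  # match frames one-to-one
        return cost[n][m]

    # Template matching: pick whichever stored word template aligns best
    # with the input utterance (toy data, purely for illustration).
    templates = {"ah": [0.1, 0.8, 0.9, 0.3], "ii": [0.2, 0.4, 0.9, 0.9]}
    utterance = [0.1, 0.7, 0.9, 0.9, 0.3]
    best = min(templates, key=lambda w: dtw_distance(templates[w], utterance))
    print(best)  # -> "ah"

Because the warping path can stretch or compress time locally, the same stored template can match utterances spoken faster or slower, which is what made DP matching practical for recognizing naturally spoken words.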

Next, building on this ability to recognize continuous, spontaneous speech, we began to expand the range of our research activities to speech translation in the 1980s. In 1983, we presented a demonstration of a Spanish/English automated interpretation system at Telecom83, a major telecommunications trade show for the global information-communications industry. Then, at Telecom91, we demonstrated an automatic-interpretation experiment in which words spoken in Japanese or English by any user were translated into Japanese, English, French, and Spanish. Despite being implemented in equipment about the size of a refrigerator, the system could recognize only a limited vocabulary of about 500 words, and the processing time from the input of an utterance to the output of recognition results was as long as several seconds. Nevertheless, the demonstration created quite a stir among Telecom91 attendees as the world's first automated interpretation system that could recognize the voice of any person and translate that person's words into four languages.
In the 1990s, with the idea of enabling individual users to use our speech recognition technology, we set out to develop personal-use products. These included ULTALKER, a middleware product developed in 1995 for use with semiconductor chips; SmartVoice, a software product developed in 1999 that lets the user read a document aloud in front of a personal computer and have that speech converted to text (so-called "large-vocabulary read-out recognition technology"); and Tabitsu, an automatic-interpretation software package with a vocabulary of about 50,000 words, first put on sale in 2001 for use on personal computers. Each of these products has since fulfilled its mission, and in a little less than ten years, advances in high-speed processing have also expanded the scenarios in which such products can be used.
More recently, we have been researching and developing technology for speech-based searching of a cell phone's manual on the phone itself, as well as the automatic translation technology just described, adapted for use on cell phones. Some of these technologies are now appearing in products such as the ones listed below, some of which are also cited as examples elsewhere in this article.
Looking to the future, we plan to expand our R&D efforts toward an "International Prince Shotoku" system, named after the legendary Japanese prince said to have listened to and understood many people speaking at once.