A Study of the Development of a Computer Aided/Assisted Instruction (CAI) System for Japanese Language Education Based on a Spoken Dialogue Model
Masatake DANTSUJI* and Taizo UMEZAKI**
* Faculty of Letters, Kansai University
3-3-35 Yamate-cho, Suita-shi, Osaka 564, Japan
** Faculty of Engineering, Chubu University
1200 Matsumoto-cho, Kasugai-shi, Aichi 487, Japan
The aim of this study is to construct and develop a computer aided/assisted instruction (CAI) system for an elementary course in Japanese language education. The system is intended to reduce the workload of teachers: it is designed to enable students to carry out repeated practice and self-directed exercises based on a spoken dialogue model. We carried out this study by combining knowledge of acoustic phonetics with that of speech processing, and by making use of speech recognition technology.
It has been pointed out that the field of Japanese language education has so far not adequately exploited advanced speech-processing technology and knowledge. On the other hand, information-science engineers have seldom had the opportunity to take part in practical language education. One purpose of this study, therefore, is for researchers from both fields to combine their efforts in developing a computer aided/assisted instruction system for language education.
In order to develop such a computer aided/assisted instruction system, we have carried out the following fundamental research: 1) Construction of purpose-oriented tasks based on a spoken dialogue model. 2) Collection, editing, and analysis of speech sounds based on spoken dialogue data. 3) Re-evaluation of the basic vocabulary and lexical items that the system's built-in phonetic database should contain. 4) Construction of a phonological lexicon/dictionary for an elementary course in Japanese language education. 5) Construction of a man-machine interface that handles input from speech, mouse, and keyboard.
One of this year's research items is the construction of purpose-oriented tasks. This study aims to realize spoken dialogues between humans and machines. As a pilot study, we simulated situations from the daily life of foreign students in and around universities. Fundamental research of this kind requires collecting and analyzing spoken dialogues arising in such real situations. We recorded and analyzed spoken dialogues concerning university libraries, and also collected dialogue materials concerning graduate theses. The results of these experiments showed that, when purpose-oriented tasks are set up, a small vocabulary can efficiently cover the entire dialogue.
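The paper does not describe how vocabulary coverage was measured; one plausible sketch is to rank words of a task-oriented transcript by frequency and compute the fraction of running tokens covered by the top-N words. The toy dialogue below (a hypothetical library exchange, not the authors' data) is purely illustrative.

```python
from collections import Counter

def coverage(tokens, vocab_size):
    """Fraction of running tokens covered by the vocab_size most frequent words."""
    counts = Counter(tokens)
    top = {w for w, _ in counts.most_common(vocab_size)}
    return sum(1 for t in tokens if t in top) / len(tokens)

# Toy transcript of a task-oriented library dialogue (hypothetical, romanized).
dialogue = ("hon wo karitai no desu ga "
            "kono hon wa kashidashi dekimasu yo "
            "kashidashi kikan wa ni shuukan desu "
            "hon wo kaeshi ni kimashita").split()

print(f"top-3 coverage: {coverage(dialogue, 3):.2f}")
```

Plotting such coverage curves for task-constrained versus unconstrained dialogues would show how quickly a small, purpose-oriented vocabulary saturates the transcript.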
We have examined the articulatory properties of Japanese pronunciation using the procedures of experimental phonetics, and processed the results by computer using acoustic-phonetic analysis. We extract acoustic parameters, such as formant structures, from the input speech of language learners. Learners can grasp their pronunciation errors audio-visually by means of computer graphics, such as vowel charts, together with the model voice output by the system. The speech waveform, spectrum envelope, fundamental frequency contour, etc. are displayed on the monitor screen to help teachers evaluate learners' pronunciation.
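The paper does not state which formant-estimation method was used; a common textbook approach from that era is linear predictive coding (LPC), taking formant candidates from the angles of the LPC polynomial roots. The sketch below estimates the resonances of a synthetic two-resonance "vowel"; all signal parameters are illustrative assumptions.

```python
import numpy as np

def lpc(signal, order):
    """Linear-prediction coefficients via the autocorrelation method."""
    n = len(signal)
    r = np.correlate(signal, signal, mode="full")[n - 1:n + order]
    # Toeplitz normal equations R a = r[1..order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))   # prediction polynomial 1 - sum a_k z^-k

def formants(signal, fs, order=4):
    """Estimate formant frequencies (Hz) from the LPC polynomial roots."""
    poly = lpc(signal * np.hamming(len(signal)), order)
    roots = np.roots(poly)
    roots = roots[np.imag(roots) > 0]            # one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)   # radians -> Hz
    return sorted(f for f in freqs if f > 90)    # drop near-DC artifacts

# Synthetic "vowel": two damped resonances at roughly 700 Hz and 1200 Hz.
fs = 8000
t = np.arange(0, 0.03, 1 / fs)
sig = np.exp(-60 * t) * (np.sin(2 * np.pi * 700 * t)
                         + 0.5 * np.sin(2 * np.pi * 1200 * t))
print([round(f) for f in formants(sig, fs)])
```

Plotting the first two estimated formants against each other gives exactly the kind of vowel chart the text describes for audio-visual feedback to learners.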
In the study of the construction of a phonological lexicon, we have proposed a lexicon with multiple layers of description: a Japanese syllabary layer, a phonemic layer, an allophonic (phonetic) layer, and a layer of distinctive features. In connection with this lexicon, we have pointed out a defect in the current computer coding of the International Phonetic Alphabet (IPA). In its place, we have proposed a new computer coding system for the IPA based on phonetic category units: phonation (voiced vs. voiceless), place of articulation (bilabial, labiodental, dental, alveolar, postalveolar, retroflex, palatal, velar, uvular, pharyngeal, and glottal), and manner of articulation (plosive, nasal, trill, tap or flap, fricative, lateral fricative, approximant, lateral approximant, click, implosive, and ejective).
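The paper does not give the concrete coding scheme, but a category-based IPA code of the kind described can be sketched as a mixed-radix integer over the three category axes it lists. The packing order and all names below are assumptions for illustration, not the authors' actual code.

```python
from dataclasses import dataclass

# Category inventories as listed in the text (IPA consonant chart order).
PHONATIONS = ("voiceless", "voiced")
PLACES = ("bilabial", "labiodental", "dental", "alveolar", "postalveolar",
          "retroflex", "palatal", "velar", "uvular", "pharyngeal", "glottal")
MANNERS = ("plosive", "nasal", "trill", "tap_or_flap", "fricative",
           "lateral_fricative", "approximant", "lateral_approximant",
           "click", "implosive", "ejective")

@dataclass(frozen=True)
class Consonant:
    phonation: str
    place: str
    manner: str

    def code(self) -> int:
        """Pack the three category indices into one mixed-radix integer."""
        return (PHONATIONS.index(self.phonation) * len(PLACES) * len(MANNERS)
                + PLACES.index(self.place) * len(MANNERS)
                + MANNERS.index(self.manner))

def decode(code: int) -> Consonant:
    """Invert code(): recover the three categories from the integer."""
    ph, rest = divmod(code, len(PLACES) * len(MANNERS))
    pl, mn = divmod(rest, len(MANNERS))
    return Consonant(PHONATIONS[ph], PLACES[pl], MANNERS[mn])

# [t] is a voiceless alveolar plosive.
t_sound = Consonant("voiceless", "alveolar", "plosive")
```

Because the code is built from category indices rather than arbitrary glyph numbers, natural classes (e.g. all voiced plosives) can be selected by simple arithmetic on the code, which is the practical advantage such a scheme would have over glyph-by-glyph coding.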
In connection with the construction of a man-machine interface, we have developed a voice-analysis display system using a graphical user interface (GUI) on the X Window System, the standard window system for workstations. This display system is expected to be effective in the pronunciation-training stage of the CAI system for the elementary course of Japanese language education.
Keywords: CAI (computer aided/assisted instruction), dialogue modeling, IPA (International Phonetic Alphabet), GUI (graphical user interface), language education