Simple Speech Recognition Using Neural Networks

 

Ever since my first home computer, I have been interested in speech recognition. Imagine how fast you could whip out a term paper or quarterly report if you could just dictate it to your PC. This could mean an end to "all-nighters" and wrist cramps. My first successful system was a non-robust, speaker-dependent system; in other words, it would only recognize a few words spoken by the same person or by a small group of people.

There are several products on the market today that provide some form of speech recognition, and their quality varies along with their price. I will attempt to demonstrate some of the building blocks that can be used in a simple speech recognition system, or in any other pattern recognition system. As an example, I will go through the steps to create a system that recognizes single-word or short-phrase commands spoken by an individual. The system discussed here is a speaker-dependent, limited-vocabulary speech recognition system. It has four sections: data acquisition, preprocessing, pattern identification, and postprocessing. Most pattern identification systems are composed of these four components.

 

The first section, data acquisition, uses an ordinary PC sound card as an A/D (analog-to-digital) converter. The system samples 4096 points over approximately 0.5 seconds. The second block, preprocessing, performs a frequency analysis of the time samples from the first block; the spectrum is then compressed into 40 points. The third block, pattern identification, uses a trained artificial neural network (ANN) to determine how closely the spoken word matches members of the ANN vocabulary. Finally, the last section contains some postprocessing to interpret and act on the output of the neural network.

The goal of the data acquisition block is to collect 4096 data samples over about 0.5 seconds. 4096 is the magic number of samples required by the preprocessing block to perform the spectral analysis, and half a second is about the length of most common words such as up, down, left, and right. The bandwidth of human speech is usually between 50 and 4000 Hz. According to the Nyquist criterion, you must sample at 8000 Hz, twice the highest frequency of interest, to get a faithful digital representation of the data. Sampling at 8000 Hz meets this criterion, and collecting 4096 samples at that rate takes about 0.512 seconds, roughly half a second. To collect the speech samples, hardware must be considered; this function is performed by an A/D converter. Fortunately, there is a very common and economical A/D converter available for the PC: the sound card, which digitizes the input from the microphone jack.
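As a rough illustration of the data acquisition step, the sketch below records 4096 mono samples at 8000 Hz. The original system used a DOS-era sound card driver library; the third-party Python sounddevice package is my own stand-in here, not the author's code.

# Minimal acquisition sketch (assumes the "sounddevice" package and a working microphone).
import sounddevice as sd

SAMPLE_RATE = 8000   # Hz; twice the ~4000 Hz upper limit of speech (Nyquist)
NUM_SAMPLES = 4096   # samples needed by the FFT; 4096 / 8000 Hz = 0.512 s

def acquire_samples():
    # Record 4096 mono samples from the default microphone.
    recording = sd.rec(NUM_SAMPLES, samplerate=SAMPLE_RATE, channels=1, dtype='float32')
    sd.wait()                 # block until the half-second window has been captured
    return recording[:, 0]    # flat array of 4096 samples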

Access to this function of the sound card is provided by a sound card driver library. Without the driver library, programming the sound card at the A/D level can be tricky for someone inexperienced with this type of programming, and not much fun for everyone else. Another important hardware component of the data acquisition sub-system is the microphone. A cheap cassette-recorder microphone will work, but not as well as a slightly more expensive model. The final aspect of the data acquisition block to be considered is the trigger. Many commercial speech recognition systems use a keyboard combination to get the "attention" of the speech recognition system; this key combination tells the system to "listen up" for a command. Here, an amplitude threshold is used instead to trigger the system to sample for the next 0.5 seconds. The threshold should be high enough to avoid triggering on background noise.
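One way to implement the trigger, sketched under the same sounddevice assumption as above, is to watch short blocks of input and start the half-second capture only when a block's peak amplitude exceeds the threshold. The threshold value below is a placeholder to be tuned against your own microphone and room noise.

import numpy as np
import sounddevice as sd

SAMPLE_RATE = 8000
NUM_SAMPLES = 4096
BLOCK_SIZE = 256        # short block used only to watch for the trigger
THRESHOLD = 0.05        # peak amplitude (0 to 1); set above the background noise level

def wait_for_trigger_and_capture():
    # Poll small blocks until one exceeds THRESHOLD, then grab the 4096-sample window.
    while True:
        block = sd.rec(BLOCK_SIZE, samplerate=SAMPLE_RATE, channels=1, dtype='float32')
        sd.wait()
        if np.max(np.abs(block)) > THRESHOLD:
            # Loud enough -- assume a word has started and capture the window.
            # A real system would keep this trigger block too, so the start of
            # the word is not lost.
            word = sd.rec(NUM_SAMPLES, samplerate=SAMPLE_RATE, channels=1, dtype='float32')
            sd.wait()
            return word[:, 0]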

The second block of the speech recognition system is preprocessing. This section of the system must condition the data for the pattern recognition block. There are two steps in the preprocessing block. The first is to get the frequency spectrum of the samples provided by the data acquisition block. An FFT (fast Fourier transform) is used to produce the frequency spectrum of the speech sample; a 4096-point FFT yields 4096 frequency data points (only the first half of which are unique for real-valued input). Most of the delay before the speech recognition system makes a decision occurs here in the preprocessing block, because the FFT is very time consuming; I am not sure why they call it a fast Fourier transform. The second step of the preprocessing block reduces the frequency points to 40 points of normalized frequency data.
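A sketch of the preprocessing step: take the FFT of the 4096 samples, keep the magnitudes, average them down to 40 bands, and normalize. The article does not spell out exactly how the points are compressed to 40, so averaging equal-width bins below is my own assumption.

import numpy as np

NUM_BANDS = 40

def preprocess(samples):
    # Reduce 4096 time samples to 40 normalized spectral points.
    spectrum = np.abs(np.fft.rfft(samples))    # magnitude spectrum, 2049 bins for 4096 samples
    spectrum = spectrum[:2040]                 # drop a few top bins so it splits evenly
    bands = spectrum.reshape(NUM_BANDS, -1).mean(axis=1)    # average into 40 equal-width bands
    peak = bands.max()
    return bands / peak if peak > 0 else bands               # normalize to the 0..1 range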

The next block in the system, pattern recognition, consists of an artificial neural network. Neural Lab was used to create the neural network block that recognizes 5 spoken commands. More commands can be added, but this will increase the amount of training time and the number of examples needed. The training examples consist of sampled words passed through the preprocessor, paired with an output pattern identifying the spoken word.



Example of a training pattern:

{40 frequency points}   {5 vocabulary points}

{spectrum of word 1}    {1 0 0 0 0}
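In code, one training example can be stored as a 40-element spectrum vector paired with a one-hot target vector. This small sketch only illustrates the data layout, with a random placeholder standing in for a real preprocessed spectrum.

import numpy as np

VOCABULARY = ["left", "right", "forward", "back", "stop"]

spectrum = np.random.rand(40)              # placeholder for a real 40-point spectrum of "left"
target = np.zeros(len(VOCABULARY))
target[VOCABULARY.index("left")] = 1.0     # one-hot pattern identifying the spoken word

print(spectrum.shape, target)              # (40,) [1. 0. 0. 0. 0.]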


Good results can be obtained with as few as three training examples of each command. Backpropagation with momentum and unipolar sigmoid activation are used to train the neural network. The neural network was configured with 40 input nodes (one for each output of the preprocessing section), 20 hidden nodes, and 5 output nodes (one for each word in the vocabulary). Changing the number of hidden nodes may improve or degrade performance. Training on a 386 should take no more than 30 minutes to get encouraging results. If the neural network does not appear to converge, more training samples may be needed. A bad training pattern can slow down or prevent the neural network from training properly.
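The article built its network in Neural Lab; the sketch below is a plain-NumPy stand-in for the same idea: a 40-20-5 network trained by batch backpropagation with momentum and the unipolar sigmoid. The learning rate, momentum value, and epoch count are assumptions to be tuned, and biases are omitted to keep the sketch short.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # unipolar sigmoid: outputs between 0 and 1

def train(inputs, targets, hidden=20, rate=0.5, momentum=0.9, epochs=2000):
    # inputs:  (num_examples, 40) preprocessed spectra
    # targets: (num_examples, 5) one-hot word patterns
    rng = np.random.default_rng(0)
    w1 = rng.uniform(-0.5, 0.5, (inputs.shape[1], hidden))    # 40 -> 20 weights
    w2 = rng.uniform(-0.5, 0.5, (hidden, targets.shape[1]))   # 20 -> 5 weights
    dw1 = np.zeros_like(w1)
    dw2 = np.zeros_like(w2)

    for _ in range(epochs):
        h = sigmoid(inputs @ w1)                      # hidden layer activations
        out = sigmoid(h @ w2)                         # output layer activations

        err_out = (targets - out) * out * (1 - out)   # output layer delta
        err_hid = (err_out @ w2.T) * h * (1 - h)      # hidden layer delta

        dw2 = rate * h.T @ err_out + momentum * dw2   # weight updates with momentum
        dw1 = rate * inputs.T @ err_hid + momentum * dw1
        w2 += dw2
        w1 += dw1

    return w1, w2

def run(spectrum, w1, w2):
    # Feed one 40-point spectrum forward and return the 5 output values.
    return sigmoid(sigmoid(spectrum @ w1) @ w2)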

The final block in the speech recognition system is postprocessing, where the output of the neural network is interpreted. The neural network produces one output for each word in the vocabulary; the larger the value, the more likely it is that the word was spoken. A simple sort (or a maximum search) finds the neural network output with the largest value, and that output identifies the word that was spoken.

This simple system was designed as a neural network demonstration. Multiple users can be sampled and included in the command training set; this often requires more training time, and could require more hidden nodes in the neural network. The system was successfully used as a voice command system for a robot: the commands left, right, forward, back, and stop were spoken to control the robot's movements. A long parallel port cable was connected between a 486 and the robot to relay the actions to the robot. Commands occasionally had to be repeated quickly when the robot "misunderstood" a command. This system could also be ported to a microcontroller to give a robot voice control.
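A minimal sketch of the postprocessing step described above: pick the largest of the five network outputs and map it to a vocabulary word. The rejection threshold is my own addition, not part of the original system.

import numpy as np

VOCABULARY = ["left", "right", "forward", "back", "stop"]

def postprocess(outputs, reject_below=0.5):
    # Map the 5 network outputs to a vocabulary word, or None if every output is weak.
    best = int(np.argmax(outputs))
    if outputs[best] < reject_below:
        return None
    return VOCABULARY[best]

# Example: outputs of [0.1, 0.9, 0.2, 0.1, 0.05] map to "right".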

There are many other techniques and tricks that can be used for speech recognition. This is just a simple example of how a neural network can be used to solve this very interesting problem. It also demonstrates how many practical neural-network-based systems use preprocessing and postprocessing to make the most of the data available.

 

