Word boundary detection is a necessary preprocessing step for automatic speech recognition, and several studies have shown that the accuracy of word boundary detection directly affects recognition accuracy.
Almost every study in this area involves collecting a large number of recorded sentences, which are then segmented by marking the start and end of each word. This labeling is done by trained individuals in the lab, and their annotations are compared against one another's to establish a measure of inter-labeler reliability. The recordings are typically high quality and low noise.
The script Detect_WordBoundary.m generates the relevant plots shown below. Since the code demonstrates a proof of concept, only two wav files and their corresponding TextGrid files are included. Please refer to ResultsReport.pdf for details on the implementation.
getAllDuration.py was written by William Furr, now at Google. The duration CSV file (an example file is included), generated by running getAllDuration.py over a speech corpus's TextGrid files, contains the duration d_w (in seconds) of every word in the utterances. Since words are likely to occur repeatedly across utterances, each d_w represents a sample from an underlying distribution p_w corresponding to the word w.
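As a minimal sketch of how this file might be consumed (the column names `word` and `duration`, and the example file name, are assumptions rather than the exact output format of getAllDuration.py):

```python
# Minimal sketch: load the duration CSV and group the d_w samples by word.
# Column names "word" and "duration" (seconds) are assumed; adjust them to the
# actual header written by getAllDuration.py.
import csv
from collections import defaultdict

def load_durations(csv_path):
    """Return a dict mapping each word w to the list of its duration samples d_w."""
    samples = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            samples[row["word"]].append(float(row["duration"]))
    return samples

# e.g. durations = load_durations("duration.csv")  # file name assumed for illustration
```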
- With the d_w samples for all words, we can approximate each word's underlying distribution p_w with a unimodal probability density function q_w (see the fitting sketch after this list).
- The word boundaries detected by Detect_WordBoundary.m can then be adaptively scaled, and thereby improved, using q_w (a toy illustration also follows the list).
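A rough sketch of the first bullet, assuming the d_w samples have been grouped per word as above. The log-normal is used here only as an example of a unimodal choice for q_w; the density actually used is described in ResultsReport.pdf:

```python
# Sketch: fit a unimodal density q_w to each word's duration samples.
# A log-normal is used purely as an illustrative unimodal choice.
from scipy import stats

def fit_q(samples):
    """Return a dict mapping each word to a frozen log-normal density q_w."""
    q = {}
    for word, durations in samples.items():
        if len(durations) < 2:
            continue  # too few samples to fit a distribution
        shape, loc, scale = stats.lognorm.fit(durations, floc=0.0)
        q[word] = stats.lognorm(shape, loc=loc, scale=scale)
    return q
```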
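And a toy illustration of the second bullet. This shows only one possible interpretation of adaptive scaling (pulling a detected duration toward the typical duration under q_w), not necessarily the scheme described in ResultsReport.pdf:

```python
# Toy illustration only: nudge a detected word's end boundary so that the word's
# duration moves toward the median duration under the fitted q_w.
def rescale_end_boundary(t_start, t_end, q_w, alpha=0.5):
    """Blend the detected duration with the median duration under q_w
    (alpha=0 keeps the detected boundary, alpha=1 trusts q_w entirely)."""
    detected = t_end - t_start
    target = q_w.median()  # typical duration of this word under q_w
    scaled = (1.0 - alpha) * detected + alpha * target
    return t_start + scaled

# Example with hypothetical values:
# new_end = rescale_end_boundary(0.42, 0.55, q["hello"], alpha=0.3)
```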