Automatic speaker tracking in audio recordings
14 November 2013
A central topic in spoken-language-systems research is what is known as speaker diarisation, or computationally determining how many speakers feature in a recording and which of them speaks when. Speaker diarisation would be an essential function of any program that automatically annotated audio or video recordings.
To date, the best diarisation systems have used what is called supervised machine learning: they are trained on sample recordings that a human has indexed, indicating which speaker enters when. In the October issue of IEEE Transactions on Audio, Speech, and Language Processing, however, researchers from the Massachusetts Institute of Technology (MIT) describe a new speaker-diarisation system that achieves comparable results without supervision. No prior indexing is necessary.
Moreover, one of the MIT researchers’ innovations was a new, compact way to represent the differences between individual speakers’ voices, which could be of use in other spoken-language computational tasks.
“You can know something about the identity of a person from the sound of their voice, so this technology is keying in to that type of information,” said Jim Glass, a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and head of its Spoken Language Systems Group. “In fact, this technology could work in any language. It’s insensitive to that.”
To create a sonic portrait of a single speaker, Glass explained, a computer system will generally have to analyse more than 2,000 different acoustic features. Many of those may correspond to familiar consonants and vowels, but many may not. To characterise each of those features, the system might need about 60 variables, which describe properties such as the strength of the acoustic signal in different frequency bands.
The result is that for every second of a recording, a diarisation system would have to search a space with 120,000 dimensions, which would be prohibitively time-consuming. In prior work, Najim Dehak, a research scientist in the Spoken Language Systems Group and one of the new paper’s co-authors, had demonstrated a technique for reducing the number of variables required to describe the acoustic signature of a particular speaker, dubbed the i-vector.
To get a sense of how the technique works, imagine a graph that plots, say, hours worked by an hourly worker against money earned. The graph would be a diagonal line in a two-dimensional space. Now imagine rotating the axes of the graph so that the x-axis is parallel to the line. All of a sudden, the y-axis becomes irrelevant: all the variation in the graph is captured by the x-axis alone.
Similarly, i-vectors find new axes for describing the information that characterises speech sounds in the 120,000-dimension space. The technique first finds the axis that captures most of the variation in the information, then the axis that captures the next-most variation, and so on. So the information added by each new axis steadily decreases.
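The axis-finding procedure described here is, in essence, principal component analysis: rotate to the axes of greatest variation, then keep only the top few. As a rough illustration only (toy data in 50 dimensions rather than 120,000, and not the MIT system's actual code), the idea might be sketched in Python as:

```python
import numpy as np

# Toy stand-in for high-dimensional acoustic data: 200 one-second
# frames, each described by 50 variables. (The space described in
# the article has roughly 120,000 dimensions.)
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 50))
# Add strong correlated variation along a few hidden directions,
# mimicking a signal that a handful of new axes can capture.
frames += rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50)) * 5

# Centre the data, then find the new axes as the eigenvectors of
# the covariance matrix: each eigenvector is an axis, and its
# eigenvalue measures how much variation that axis captures.
centred = frames - frames.mean(axis=0)
cov = centred.T @ centred / (len(centred) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]      # largest variation first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the top three axes: a compact summary of each frame,
# analogous to shrinking the representation down to a few variables.
reduced = centred @ eigvecs[:, :3]
print(reduced.shape)                   # (200, 3)
```

The printed eigenvalues fall off axis by axis, which is the point of the technique: each additional axis adds steadily less information, so most of the variation survives in a space of far fewer dimensions.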
Stephen Shum, a graduate student in MIT’s Department of Electrical Engineering and Computer Science and lead author on the new paper, found that a 100-variable i-vector — a 100-dimension approximation of the 120,000-dimension space — was an adequate starting point for a diarisation system.
Since i-vectors are intended to describe every possible combination of sounds that a speaker might emit over any span of time, and since a diarisation system needs to classify only the sounds on a single recording, Shum was able to use similar techniques to reduce the number of variables even further, to only three.
CLUSTER OF POINTS
For every second of sound in a recording, Shum thus ends up with a single point in a three-dimensional space. The next step is to identify the bounds of the clusters of points that correspond to the individual speakers. For that, Shum used an iterative process. The system begins with an artificially high estimate of the number of speakers — say, 15 — and finds a cluster of points that corresponds to each one.
Clusters that are very close to each other then coalesce to form new clusters, until the distances between them grow too large to be plausibly bridged. The process then repeats, beginning each time with the same number of clusters that it ended with on the previous iteration. Finally, it reaches a point at which it begins and ends with the same number of clusters, and the system associates each cluster with a single speaker.
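The merge-until-stable loop described above can be sketched in Python. Everything concrete below — the simulated three-speaker conversation, the distance threshold, the chunk-based initialisation — is invented for illustration and is not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy conversation: three speakers take 15 turns of 12 "seconds"
# each. Every second becomes a point in the three-variable space
# the article describes (centres and noise levels are made up).
centres_true = np.array([[0., 0., 0.], [4., 0., 0.], [0., 4., 0.]])
turn_order = [0, 1, 0, 2, 1, 2, 0, 1, 2, 0, 2, 1, 0, 1, 2]
points = np.vstack([centres_true[s] + rng.normal(scale=0.3, size=(12, 3))
                    for s in turn_order])

def merge_pass(points, means, threshold=1.0):
    """Assign each point to its nearest cluster mean, re-estimate the
    means, then coalesce any pair of means closer than `threshold`."""
    labels = np.argmin(
        np.linalg.norm(points[:, None] - means[None], axis=2), axis=1)
    means = [points[labels == i].mean(axis=0)
             for i in range(len(means)) if np.any(labels == i)]
    merged = True
    while merged and len(means) > 1:
        merged = False
        for i in range(len(means)):
            for j in range(i + 1, len(means)):
                if np.linalg.norm(means[i] - means[j]) < threshold:
                    means[i] = (means[i] + means[j]) / 2
                    del means[j]
                    merged = True
                    break
            if merged:
                break
    return np.array(means)

# Begin with an artificially high speaker count: 15 initial clusters,
# one per contiguous chunk of the recording. Repeat merge passes until
# a pass begins and ends with the same number of clusters.
means = np.array([chunk.mean(axis=0) for chunk in np.array_split(points, 15)])
while True:
    k_in = len(means)
    means = merge_pass(points, means)
    if len(means) == k_in:
        break
print(len(means))   # estimated number of speakers
```

On this toy data the loop collapses the 15 initial clusters down to one per simulated speaker; real recordings would of course need the full i-vector front end and a more careful distance measure.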
“What was completely not obvious, what was surprising, was that this i-vector representation could be used on this very, very different scale, that you could use this method of extracting features on very, very short speech segments, perhaps one second long, corresponding to a speaker turn in a telephone conversation,” Glass added. “I think that was the significant contribution of Stephen’s work.”