A team of researchers from Apple and Carnegie Mellon University’s Human-Computer Interaction Institute has presented a system that lets embedded AIs learn by listening to noises in their environment, without the need for up-front training data and without placing a huge burden on the user to supervise the learning process. The overarching goal is for smart devices to more easily build up contextual/situational awareness to increase their utility.

The system, which they’ve called Listen Learner, relies on acoustic activity recognition to enable a smart device, such as a microphone-equipped speaker, to interpret events taking place in its environment via a process of self-supervised learning, with manual labelling handled by one-shot user interactions — such as the speaker asking a person ‘what was that sound?’ after it has heard the noise enough times to classify it into a cluster.

A general pre-trained model can also be looped in to enable the system to make an initial guess on what an acoustic cluster might signify. So the user interaction could be less open-ended, with the system able to pose a question such as ‘was that a faucet?’ — requiring only a yes/no response from the human in the room.

Refinement questions could also be deployed to help the system figure out what the researchers dub “edge cases”, i.e. where sounds have been closely clustered yet might still signify a distinct event — say a door being closed vs a cupboard being closed. Over time, the system might be able to make an educated either/or guess and then present that to the user to confirm.

They’ve put together the below video demoing the concept in a kitchen environment.

In their paper presenting the research they point out that while smart devices are becoming more prevalent in homes and offices they tend to lack “contextual sensing capabilities” — with only “minimal understanding of what is happening around them”, which in turn limits “their potential to enable truly assistive computational experiences”.

And while acoustic activity recognition is not itself new, the researchers wanted to see if they could improve on existing deployments, which either require a lot of manual user training to yield high accuracy, or use pre-trained general classifiers that work ‘out of the box’ but — since they lack data for a user’s specific environment — are prone to low accuracy.

Listen Learner is thus intended as a middle ground that increases utility (accuracy) without placing a high burden on the human to structure the data. The end-to-end system automatically generates acoustic event classifiers over time, with the team building a proof-of-concept prototype device that acts like a smart speaker and pipes up to ask for human input.

“The algorithm learns an ensemble model by iteratively clustering unknown samples, and then training classifiers on the resulting cluster assignments,” they explain in the paper. “This allows for a ‘one-shot’ interaction with the user to label portions of the ensemble model when they are activated.”

Audio events are segmented using an adaptive threshold that triggers when the microphone input level is 1.5 standard deviations higher than the mean of the past minute.

“We employ hysteresis techniques (i.e., for debouncing) to further smooth our thresholding scheme,” they add, further noting that: “While many environments have persistent and characteristic background sounds (e.g., HVAC), we ignore them (along with silence) for computational efficiency. Note that incoming samples were discarded if they were too similar to ambient noise, but silence within a segmented window is not removed.”
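To make the segmentation step concrete, here is a minimal sketch of the adaptive-threshold-plus-hysteresis idea in Python, assuming frame-level RMS levels from the microphone; the frame rate, release margin and minimum-length filter are illustrative choices rather than values from the paper.

```python
import numpy as np

def segment_events(levels, frames_per_minute, k_on=1.5, k_off=0.5, min_frames=10):
    """Yield (start, end) frame indices of detected acoustic events.

    A segment opens when the level exceeds mean + k_on * std of the past
    minute and closes only when it drops below mean + k_off * std, so small
    flickers around the trigger threshold are debounced (hysteresis).
    """
    active, start = False, None
    for i, level in enumerate(levels):
        history = levels[max(0, i - frames_per_minute):i]
        if len(history) < 2:
            continue
        mu, sigma = history.mean(), history.std()
        if not active and level > mu + k_on * sigma:
            active, start = True, i
        elif active and level < mu + k_off * sigma:
            active = False
            if i - start >= min_frames:      # drop momentary blips
                yield (start, i)

# Example: 10 ms frames, so a minute of rolling statistics is 6,000 frames.
rng = np.random.default_rng(0)
levels = rng.normal(0.01, 0.002, 12_000)
levels[7_000:7_200] += 0.05                  # a loud event roughly 2 seconds long
print(list(segment_events(levels, frames_per_minute=6_000)))
```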

The CNN (convolutional neural network) audio model they’re using was initially trained on the YouTube-8M dataset — augmented with a library of professional sound effects, per the paper.

“The choice of using deep neural network embeddings, which can be seen as learned low-dimensional representations of input data, is consistent with the manifold assumption (i.e., that high-dimensional data roughly lie on a low-dimensional manifold). By performing clustering and classification on this low-dimensional learned representation, our system is able to more easily discover and recognize novel sound classes,” they add.
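As a rough illustration of that idea, the sketch below converts a segmented audio clip into a log-mel spectrogram, runs it through a pre-trained CNN used purely as a feature extractor, and average-pools the convolutional maps into a single embedding vector. Note the assumptions: an ImageNet-trained VGG-16 from torchvision stands in for the paper’s YouTube-8M-trained audio model, and the normalization and pooling choices are illustrative.

```python
import numpy as np
import librosa
import torch
import torchvision.models as models

def embed_clip(y, sr=16_000, n_mels=64):
    """Map a mono audio clip to a low-dimensional embedding vector."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)
    # Scale to [0, 1] and tile to three channels so an image CNN will accept it.
    x = (logmel - logmel.min()) / (logmel.max() - logmel.min() + 1e-9)
    x = torch.tensor(x, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)
    cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
    with torch.no_grad():
        feats = cnn.features(x)                  # convolutional feature maps
        emb = feats.mean(dim=[2, 3]).flatten()   # global average pool -> 512-dim vector
    return emb.numpy()

# Example on four seconds of dummy audio (long enough to survive VGG's pooling).
embedding = embed_clip(np.random.randn(16_000 * 4).astype(np.float32))
print(embedding.shape)  # (512,)
```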

The team used unsupervised clustering methods to infer the location of class boundaries from the low-dimensional learned representations — using a hierarchical agglomerative clustering (HAC) algorithm known as Ward’s method.

Their system evaluates “all possible groupings of data to find the best representation of classes”, given that candidate clusters may overlap with one another.
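Assuming the embeddings are stacked into an (n_samples, d) array, the clustering stage could look something like the sketch below: it builds a Ward-linkage dendrogram with SciPy and cuts it at several depths to produce candidate groupings. This is an approximation of the search over possible groupings, not the authors’ exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def candidate_clusterings(embeddings, max_clusters=10):
    """Return {k: cluster labels} for k = 2..max_clusters using Ward's method."""
    Z = linkage(embeddings, method="ward")   # hierarchical agglomerative clustering
    return {k: fcluster(Z, t=k, criterion="maxclust")
            for k in range(2, min(max_clusters, len(embeddings) - 1) + 1)}

# Example: three well-separated blobs in a 16-dimensional embedding space.
rng = np.random.default_rng(0)
embeddings = np.vstack([rng.normal(c, 0.1, size=(20, 16)) for c in (0.0, 1.0, 2.0)])
for k, labels in candidate_clusterings(embeddings, max_clusters=4).items():
    print(k, np.bincount(labels)[1:])        # cluster sizes for each candidate cut
```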

“While our clustering algorithm separates data into clusters by minimizing the total within-cluster variance, we also seek to evaluate clusters based on their classifiability. Following the clustering stage, we use an unsupervised one-class support vector machine (SVM) algorithm that learns decision boundaries for novelty detection. For each candidate cluster, a one-class SVM is trained on a cluster’s data points, and its F1 score is computed with all samples in the data pool,” they add.
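A minimal sketch of that classifiability check, using scikit-learn: fit a one-class SVM on one candidate cluster’s embedding points, then compute F1 against the data pool with cluster membership as the positive class. The kernel and nu values are illustrative defaults, not ones taken from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

def cluster_f1(pool, labels, cluster_id, nu=0.1):
    """Train a one-class SVM on one cluster and score it over the whole pool."""
    members = pool[labels == cluster_id]
    svm = OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(members)
    predicted = svm.predict(pool) == 1           # +1 means "inside the learned boundary"
    actual = labels == cluster_id
    return svm, f1_score(actual, predicted)

# Example, reusing `embeddings` and the k = 3 cut from the clustering sketch above.
labels = candidate_clusterings(embeddings, max_clusters=4)[3]
_, score = cluster_f1(embeddings, labels, cluster_id=1)
print(round(score, 2))
```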

“Traditional clustering algorithms seek to describe input data by providing a cluster assignment, but this alone cannot be used to discriminate unseen samples. Thus, to facilitate our system’s inference capability, we construct an ensemble model using the one-class SVMs generated from the previous step. We adopt an iterative procedure for building our ensemble model by selecting the first classifier with an F1 score exceeding the threshold, θ, and adding it to the ensemble. When a classifier is added, we run it on the data pool and mark samples that are recognized. We then restart the cluster-classify loop until either 1) all samples in the pool are marked or 2) a loop does not produce any more classifiers.”
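Tying the pieces together, the loop described in that passage might be sketched as below, reusing candidate_clusterings() and cluster_f1() from the earlier snippets. The F1 threshold value is an assumption, and for simplicity candidate clusters are scored against the remaining unmarked samples rather than the full pool.

```python
import numpy as np

def build_ensemble(pool, f1_threshold=0.8):
    """Iteratively cluster unmarked samples and grow an ensemble of one-class SVMs."""
    ensemble = []
    marked = np.zeros(len(pool), dtype=bool)
    while not marked.all():
        remaining = pool[~marked]
        if len(remaining) < 2:                         # not enough data left to cluster
            break
        added = False
        for labels in candidate_clusterings(remaining).values():
            for cluster_id in np.unique(labels):
                svm, score = cluster_f1(remaining, labels, cluster_id)
                if score > f1_threshold:
                    ensemble.append(svm)               # accept the first classifier that clears the bar
                    marked |= (svm.predict(pool) == 1) # mark everything it now recognizes
                    added = True
                    break
            if added:
                break
        if not added:                                  # a full pass yielded no new classifier: stop
            break
    return ensemble
```

In the system the article describes, each accepted classifier would then be tied to a human-supplied label the first time it fires, via the one-shot question described above.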

Privacy preservation?

The paper touches on privacy concerns that arise from such a listening system — given how often the microphone would be switched on and processing environmental data, and because they note it may not always be possible to carry out all processing locally on the device.

“While our acoustic approach to activity recognition affords benefits such as improved classification accuracy and incremental learning capabilities, the capture and transmission of audio data, especially spoken content, should raise privacy concerns,” they write. “In an ideal implementation, all data would be retained on the sensing device (though significant compute would be required for local training). Alternatively, compute could occur in the cloud with user-anonymized labels of model classes stored locally.”

You can read the full paper here.
