Embedding a 1D-CNN Classifier on AudioMoth for Smart PAM
Researchers have developed a smart Passive Acoustic Monitoring (PAM) system that embeds a 1D Convolutional Neural Network (1D-CNN) on an AudioMoth microcontroller for real-time soundscape analysis. The model identifies calls of the endangered Scopoli's Shearwater, reaching 91% classification accuracy (89% balanced accuracy) on a realistic field dataset. An optimization process compresses the model to approximately 10 kB so it fits within the AudioMoth's tight memory budget. By classifying audio on-device instead of storing continuous recordings, the approach eases the power-consumption and data-storage constraints that typically limit the length of autonomous recording sessions.
Key facts
- Smart PAM system embeds classifier on AudioMoth microcontroller
- Uses 1D Convolutional Neural Network (1D-CNN) for raw audio classification
- Targets Scopoli's Shearwater seabird calls (endangered species)
- 91% classification accuracy, 89% balanced accuracy
- Model compressed to ~10 kB to fit AudioMoth constraints
- Addresses power and storage limitations in autonomous recording
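The source does not specify the network architecture, so the following is only a minimal sketch of how a 1D-CNN can classify raw audio directly: a strided 1D convolution over the waveform, ReLU, global average pooling, and a sigmoid head that outputs the probability a target call is present. All layer sizes (8 kHz sample rate, 8 filters of width 32, stride 4) are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=4):
    # x: raw waveform (n_samples,); kernels: (n_filters, kernel_width)
    n_filters, k = kernels.shape
    n_out = (x.size - k) // stride + 1
    out = np.empty((n_filters, n_out))
    for i in range(n_out):
        out[:, i] = kernels @ x[i * stride : i * stride + k]
    return out

def forward(x, conv_w, dense_w, dense_b):
    h = np.maximum(conv1d(x, conv_w), 0.0)   # Conv1D + ReLU
    pooled = h.mean(axis=1)                  # global average pooling
    logit = pooled @ dense_w + dense_b       # dense classification head
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid: P(call present)

# One second of (random stand-in) audio at an assumed 8 kHz rate.
x = rng.standard_normal(8000)
conv_w = rng.standard_normal((8, 32)) * 0.1  # 8 filters, width 32
dense_w = rng.standard_normal(8) * 0.1
prob = forward(x, conv_w, dense_w, 0.0)
print(f"call probability: {prob:.3f}")
```

At these toy sizes the model holds only 265 weights, which illustrates why a compact 1D-CNN, especially once quantized, can plausibly fit in a ~10 kB footprint on a microcontroller.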