Automate a data pipeline to ingest, archive, process, classify and visualize data on animals visiting trail cameras
Authenticate with Trailcam manufacturer, get S3 bucket authentication token (python)
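A minimal sketch of this step. The endpoint URL, payload shape, and response fields are all assumptions — every trail camera manufacturer's API differs:

```python
import json
import urllib.request

# Hypothetical endpoint; the real manufacturer API will differ.
AUTH_URL = "https://api.example-trailcam.com/v1/auth"

def build_auth_request(api_key: str, api_secret: str) -> urllib.request.Request:
    """Build the POST request that exchanges API credentials for a
    temporary S3 token (payload field names are an assumption)."""
    payload = json.dumps({"key": api_key, "secret": api_secret}).encode()
    return urllib.request.Request(
        AUTH_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def fetch_s3_token(api_key: str, api_secret: str) -> dict:
    """Perform the exchange; returns e.g. {"token": ..., "bucket": ...}."""
    req = build_auth_request(api_key, api_secret)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```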
List and download new photographs to a local folder (python, json)
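One way to sketch the "new photographs only" logic: a local JSON manifest of keys already downloaded, plus a boto3 download step (bucket layout and manifest format are assumptions; `boto3` is imported lazily so the manifest helpers run without AWS installed):

```python
import json
from pathlib import Path

def new_keys(all_keys, manifest_path: Path):
    """Return the keys not yet recorded in the local manifest."""
    seen = set()
    if manifest_path.exists():
        seen = set(json.loads(manifest_path.read_text()))
    return [k for k in all_keys if k not in seen]

def record_keys(keys, manifest_path: Path) -> None:
    """Add freshly downloaded keys to the manifest."""
    seen = set()
    if manifest_path.exists():
        seen = set(json.loads(manifest_path.read_text()))
    seen.update(keys)
    manifest_path.write_text(json.dumps(sorted(seen)))

def download_new(bucket: str, prefix: str, dest: Path, manifest: Path):
    """List the bucket, download anything we haven't seen, update manifest."""
    import boto3  # lazy import: only needed when actually talking to S3
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    keys = [o["Key"] for o in resp.get("Contents", [])]
    fresh = new_keys(keys, manifest)
    for k in fresh:
        s3.download_file(bucket, k, str(dest / Path(k).name))
    record_keys(fresh, manifest)
    return fresh
```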
Sort images into folders labelled by date (python, bash)
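A sketch of the sorting step, using each file's modification time to name the date folder (the real pipeline might read the EXIF capture date instead):

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def date_folder(timestamp: float) -> str:
    """Folder name like '2024-05-17' from a Unix timestamp (UTC)."""
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).strftime("%Y-%m-%d")

def sort_by_date(src: Path, dst: Path) -> None:
    """Move each image into dst/<YYYY-MM-DD>/ based on its mtime."""
    for img in src.glob("*.jpg"):
        folder = dst / date_folder(img.stat().st_mtime)
        folder.mkdir(parents=True, exist_ok=True)
        shutil.move(str(img), str(folder / img.name))
```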
Archive images in personal AWS S3 bucket
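The archive step could look like the sketch below, mirroring the local date-folder layout into S3 object keys (bucket name and `trailcam` prefix are placeholders; `boto3` is again imported lazily):

```python
from pathlib import Path

def s3_key(local_dir: Path, img: Path, prefix: str = "trailcam") -> str:
    """Object key that preserves the date-folder layout under a prefix."""
    return f"{prefix}/{img.relative_to(local_dir)}"

def archive_to_s3(local_dir: Path, bucket: str, prefix: str = "trailcam") -> list:
    """Upload every image under local_dir to s3://bucket/prefix/<date>/<name>."""
    import boto3  # lazy import so the module loads without AWS credentials
    s3 = boto3.client("s3")
    uploaded = []
    for img in sorted(local_dir.rglob("*.jpg")):
        key = s3_key(local_dir, img, prefix)
        s3.upload_file(str(img), bucket, key)
        uploaded.append(key)
    return uploaded
```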
Execute a Machine Learning model against new images, classifying any animals in the photos (PyTorch, YOLO, python, json)
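A sketch of the detection step, using a pretrained YOLOv5 model via `torch.hub` as a stand-in. Note that the stock COCO-trained model has no deer class, so the real pipeline presumably uses a custom or fine-tuned wildlife model; the class list and 0.5 confidence threshold are assumptions:

```python
# COCO animal classes used as a stand-in; a real wildlife model would
# know species like deer that COCO does not.
ANIMAL_CLASSES = {"bird", "cat", "dog", "horse", "sheep", "cow", "bear"}

def keep_animals(detections, threshold: float = 0.5):
    """Filter raw detections [{'name': ..., 'confidence': ...}, ...]
    down to animal classes above a confidence threshold."""
    return [
        d for d in detections
        if d["name"] in ANIMAL_CLASSES and d["confidence"] >= threshold
    ]

def classify(image_paths):
    """Run YOLOv5 on each image and return {path: animal detections};
    torch is imported lazily so the filter above works without it."""
    import torch
    model = torch.hub.load("ultralytics/yolov5", "yolov5s")
    results = {}
    for p in image_paths:
        det = model(str(p)).pandas().xyxy[0].to_dict("records")
        results[str(p)] = keep_animals(det)
    return results
```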
Parse Machine Learning output, generate statistics (python, json)
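The statistics step might aggregate detections into the JSON the front end reads, e.g. visit counts per species and per hour of day (record field names are illustrative):

```python
from collections import Counter
from datetime import datetime

def visit_stats(records):
    """records: [{'species': 'deer', 'timestamp': '2024-05-01T06:30:00'}, ...]
    Returns counts per species and per hour-of-day, ready to dump as JSON
    for the static site to load."""
    by_species = Counter(r["species"] for r in records)
    by_hour = Counter(datetime.fromisoformat(r["timestamp"]).hour for r in records)
    return {
        "by_species": dict(by_species),
        "by_hour": {str(h): n for h, n in sorted(by_hour.items())},
    }
```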
Visualize the data in graphs (JavaScript, D3, HTML, json)
You can:
Visit the live page at https://thorburn.se/trailcam/
Check out the code at github.com/boaworm/TrailcamWeb
Explore visualizations of the data in Looker Studio.
End result: An automated pipeline that ingests, archives, classifies and visualizes animal visits to trail cams. While the results are not that surprising (animals are nocturnal; deer visit mostly in mornings, evenings or at night), it is nice to have an automated pipeline and a growing data set to visualize further.
JavaScript, JSON data files and D3 turned out to be an excellent way of visualizing the information, removing any need for server-side rendering, file caching or databases. The static website is fast, zero-maintenance, and secure.