Image Classification Guide


Welcome! This page provides access to Jupyter notebooks that demonstrate how to (1) implement cross-validation with YOLO for image classification in Python and (2) deploy a trained YOLO model (for example, one hosted on Hugging Face) on new images πŸ“·πŸŒŠπŸ™πŸ€–
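As background for the cross-validation notebook, the core idea can be sketched with scikit-learn's `KFold`. This is a minimal illustration under assumed inputs, not the notebook's exact code: the file names and fold count here are invented.

```python
from sklearn.model_selection import KFold

# Hypothetical list of image file names; in the notebooks these would come
# from the downloaded dataset.
images = [f"img_{i}.jpg" for i in range(10)]

# Split the images into 5 folds; each fold serves once as the validation set
# while the remaining folds form the training set.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(images)):
    train_files = [images[i] for i in train_idx]
    val_files = [images[i] for i in val_idx]
    print(f"Fold {fold}: {len(train_files)} train / {len(val_files)} val")
```

Each image appears in the validation set exactly once across the folds, which is what makes cross-validated performance estimates less dependent on a single train/validation split.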

These notebooks are designed to be beginner-friendly, with the option to run them entirely online using Google Colab, so no installation or GPU is required. However, if you're more comfortable with Python and Jupyter, you can also clone the repository and run them locally.
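For the local route, the typical steps look like the following. The repository URL is a placeholder (substitute the actual one), and this assumes the repository ships a `requirements.txt`:

```shell
# Clone the repository (placeholder URL) and enter it
git clone https://github.com/<user>/<repo>.git
cd <repo>

# Install the dependencies and launch Jupyter to open the notebooks
pip install -r requirements.txt
jupyter lab
```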

No data is required: the code also downloads an open-source dataset from Hugging Face. Note that this is a copy of the original dataset (hosted for faster access); see Meyer et al., 2023 for details.
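Once downloaded, images for YOLO classification are typically organized on disk as `split/class/image`. The sketch below shows that arrangement with invented file names and class labels (the actual dataset's classes and the notebooks' code will differ):

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical (image file, class label) pairs; in the notebooks these come
# from the Hugging Face dataset copy.
samples = [("fish_01.jpg", "fish"), ("coral_01.jpg", "coral"), ("fish_02.jpg", "fish")]

root = Path(tempfile.mkdtemp())

# Create empty dummy files to stand in for the downloaded images.
src = root / "downloads"
src.mkdir()
for name, _ in samples:
    (src / name).touch()

# Copy each image into train/<class>/, the folder-per-class layout
# commonly used to train image classification models.
train_dir = root / "dataset" / "train"
for name, label in samples:
    dest = train_dir / label
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy(src / name, dest / name)

print(sorted(p.name for p in train_dir.iterdir()))  # one folder per class
```

The folder names double as the class labels, so no separate annotation file is needed for this layout.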

This resource is a supplement to our paper: Deep Blueprint: A Literature Review and Guide to Automated Image Classification for Ecologists (see below).

For R users: A Shiny app (and associated R code) is also available here for select tasks.

Classification workflow
Figure 1: Simplified and idealized diagram of an image classification scenario. Each box represents a key task and corresponds to a section of the paper to aid comprehension. While presented largely linearly for clarity, real-world ML workflows are often iterative and non-linear, and the need to revisit specific sections may vary depending on the scenario.

πŸ“š How to Cite This Work

If you find this code useful, please consider citing us:

Your Name, Another Author, and Third Author. β€œTitle of the Paper.” Journal Name, vol. XX, no. X, Year, pp. XX–XX. DOI: https://doi.org/your-doi.

Or use the following BibTeX entry:

@article{your2025paper,
  title     = {Title of the Paper},
  author    = {Your Name and Another Author and Third Author},
  journal   = {Journal Name},
  volume    = {XX},
  number    = {X},
  pages     = {XX--XX},
  year      = {2025},
  doi       = {10.xxxx/your-doi}
}