Working on Computer Vision tasks is always exciting for me. During my career I have worked with many different types of images and solved many different problems related to them in the fields of biology, medicine, genetics, climatology and more. Today I would like to tell you about one of the most extraordinary use cases I’ve ever worked on.

The problem

Computer Vision can be applied in many different fields, the sky is the limit, but to be completely honest I would never have guessed that someday I would work on automatic measurement of the scrotal circumference of Norwegian Red bulls.

Everything started around a year ago when my former student, now a PhD candidate at Inland Norway University of Applied Sciences, Joanna Bremer, wrote me an e-mail with a simple question: can we measure scrotal circumference from 3D images using deep learning? From that moment I was hooked!

It turns out that automating the measurement of different physiological and behavioral traits is a major trend in agriculture. Scrotal circumference is an essential part of the selection criteria for bulls in breeding programs. Traditionally, the circumference is measured manually with a scrotal tape. Automating this process and implementing it in feeding stations would be a valuable tool for performance testing stations and bovine semen collection centers, and it would improve the safety and welfare of both technicians and animals.

Before we show you our solution, we should mention the other members of our team. Besides me and Joanna, there were also Elisabeth Kommisrud, who is Joanna’s mentor and supervisor, and Øyvind Nordbø.

We would also like to thank Hallstein Holen and his team at the Geno performance testing station: Jan Tore Rosingholm, Erik Skogli, Sigmund Høibakken and Stein Marius Brumoen for their time and indispensable help with data collection.

The solution

To crack this case we used three computer vision algorithms and some fancy mathematics. Our solution can be summarized in these steps:

  1. Semantic segmentation of the scrotum
  2. Connected-component labeling (CCL) to remove segmentation artefacts
  3. Direct Linear Least Squares fitting of an ellipse to the predicted scrotum
  4. Approximation of the scrotal circumference

In the beginning we had nothing except images of the bulls taken with a 3D camera, in which each pixel encodes how far the object is from the lens. We decided to create segmentation masks for the scrotum and train a U-Net to predict the scrotum location.
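To give an idea of what such a network looks like, here is a minimal two-level U-Net sketch in PyTorch. It is only illustrative: the architecture, depth, channel counts and training setup we actually used may differ, and the `TinyUNet` name and the random depth frame are assumptions for the example.

```python
# Minimal U-Net-style encoder-decoder for single-channel depth images.
# Illustrative sketch only; not the exact architecture from the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)           # depth image has one channel
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)          # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)         # per-pixel "scrotum" logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([e1, self.up(e2)], dim=1))
        return self.head(d1)

model = TinyUNet()
depth = torch.rand(1, 1, 64, 64)                # fake 64x64 depth frame
mask_logits = model(depth)                      # threshold sigmoid(logits) > 0.5 for a mask
```

Thresholding the sigmoid of the logits at 0.5 turns the output into a binary segmentation mask of the same size as the input frame.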

Segmented scrotum

For most of the images, the predicted segmentation mask contained one solid object, which was expected and desirable. For the remaining images, however, the predicted mask contained artefacts in the form of a second, smaller object. To solve this problem, we used a connected-component labeling (CCL) algorithm (also known as blob extraction or region labeling) to count the number of solid objects in a predicted segmentation mask. If more than one object was found, only the object with the largest area was kept on the segmentation mask.
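The cleanup step can be sketched in a few lines with `scipy.ndimage.label`, an off-the-shelf CCL implementation (used here as a stand-in for whichever implementation one prefers):

```python
# Keep only the largest connected component of a binary segmentation mask.
import numpy as np
from scipy import ndimage

def largest_component(mask):
    labels, n = ndimage.label(mask)     # CCL: assign a label to each blob
    if n <= 1:
        return mask.astype(bool)        # zero or one object: nothing to clean
    # Pixel count of each blob (labels run from 1 to n)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Toy mask: a 16-pixel main object plus a 4-pixel artefact
mask = np.zeros((8, 8), dtype=bool)
mask[1:5, 1:5] = True
mask[6:8, 6:8] = True
clean = largest_component(mask)         # only the 16-pixel object survives
```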

Multiple objects as a result of segmentation

After the segmentation and cleaning phase, the Direct Linear Least Squares algorithm was used to fit an ellipse to the boundary of the segmented mask.
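The direct fit solves a small generalized eigenvalue problem instead of iterating. A compact NumPy sketch in the style of Fitzgibbon's direct least-squares method is below; the function names and the synthetic boundary points are assumptions for the example, and the implementation we used in practice may be a more numerically refined variant.

```python
# Direct Linear Least Squares ellipse fit (Fitzgibbon-style sketch).
import numpy as np

def fit_ellipse_direct(x, y):
    # Conic model: a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    S = D.T @ D                         # scatter matrix
    C = np.zeros((6, 6))                # encodes the constraint 4ac - b^2 = 1,
    C[0, 2] = C[2, 0] = 2.0             # which forces the conic to be an ellipse
    C[1, 1] = -1.0
    eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
    i = np.argmax(eigval.real)          # the ellipse is the positive eigenvalue
    return eigvec[:, i].real

def ellipse_semi_axes(conic):
    # Recover semi-major/semi-minor axis lengths from the conic coefficients
    a, b, c, d, e, f = conic
    M = np.array([[a, b / 2.0], [b / 2.0, c]])
    x0, y0 = np.linalg.solve(2.0 * M, [-d, -e])        # ellipse center
    F0 = a*x0*x0 + b*x0*y0 + c*y0*y0 + d*x0 + e*y0 + f
    semi = np.sqrt(-F0 / np.linalg.eigvalsh(M))
    return np.sort(semi)[::-1]

# Synthetic boundary: semi-axes 3.0 and 1.5, rotated by 0.3 rad, shifted, noisy
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
ct, st = np.cos(0.3), np.sin(0.3)
x = 2.0 + 3.0*ct*np.cos(t) - 1.5*st*np.sin(t) + rng.normal(0, 1e-3, t.size)
y = -1.0 + 3.0*st*np.cos(t) + 1.5*ct*np.sin(t) + rng.normal(0, 1e-3, t.size)
axes = ellipse_semi_axes(fit_ellipse_direct(x, y))
```

Because the ellipse constraint is built into the eigenproblem, the fit always returns an ellipse (never a hyperbola or parabola), which is exactly what we need for a scrotum boundary.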

Ellipse fitted to the predicted scrotum

In the final step we used a Padé approximation, combined with the distance and angle-per-pixel information, to calculate the final scrotal circumference.
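An ellipse has no closed-form perimeter, so it is approximated from the series C = π(a+b)·Σₙ[binom(1/2, n) hⁿ]² with h = ((a−b)/(a+b))². As one concrete illustration, the [2/1] Padé approximant of that series is shown below; the exact approximant used in the paper may differ, and the conversion from pixels to centimeters (via the camera's distance and angle-per-pixel data) is omitted here.

```python
# Pade-type approximation of the ellipse circumference from its semi-axes.
import math

def ellipse_circumference_pade(a, b):
    # h = ((a - b) / (a + b))^2; the [2/1] Pade approximant
    # (64 - 3h^2) / (64 - 16h) matches the perimeter series through h^2.
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (64.0 - 3.0 * h * h) / (64.0 - 16.0 * h)

circ = ellipse_circumference_pade(3.0, 1.5)   # semi-axes in real-world units
```

For a circle (a = b) the formula reduces exactly to 2πa, and for moderately eccentric ellipses like the one above the relative error is on the order of h³.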

If you want to know more about the process, results and validation of our method, check out our paper.