How many times have you heard your algorithm team say, “It’s working pretty well, we just need more data”?  The trouble is that data is expensive to collect and there are practical limitations.  Contracted drone pilots charge upwards of $1,500 per day for data collection, and that’s if you’re lucky enough to find one with the right camera.  The window of opportunity for data collection is short for any use case, and even shorter on an individual field.  And once you’ve got the right pilot with the right drone, you still need to connect them to the right fields.

Sure, you’ve got your uncle, and your friend, and that other grower you’ve been working with for a long time. They’ll all let you fly their fields, but they’re not going to provide daily updates on the status of every candidate field, so you’re guessing when to go out there.   

Oops, too early, let’s try again next week. Rats, missed it on that field. Suddenly the season is over and you end up with only a handful of useful sets of drone imagery.  At least you got something, so you hand it over to the algorithm team.  You’re proud because they said they needed 5,000 images and you provided 6,000.  Good thing you asked the pilot to fly each field with extra overlap, three times a week.

Two weeks go by and you get blindsided by the algorithm team lead’s status report:

“We labeled the new data and retrained the model.  Accuracy is up slightly, but not as much as expected.  Our 6,000 new images were all of similar fields: same line of seed, same vigor, same soil type, same tillage practices, same weed control practices.  And it’s basically the same imagery we got last year. I don’t think we can launch this product nationally next year. We need more data diversity.” 

Shoot, you told your leadership team this was going to be ready to launch next year.  Do you launch anyway and assume your algorithm team lead is being overly cautious?  Do you hold off for a year?  But if you do, how will you get all the data you need?  You don’t have an uncle and a friend and a grower you’ve been working with for years in every state, or $1,500 a day to pay contract pilots to fly everywhere, or the time to manage pilots all over the country to do the flights.  How will you get imagery with conventional till, no-till, clay, sandy loam, 15” rows, 30” rows, twin rows, drought stress, nutrient stress, high weed pressure, and low weed pressure next year?  Wait, they do 36” and 40” rows in Texas.  Oh dear. 

Desperate to maintain the schedule, the team tried data augmentation: tweaking the images you already have by rotating, scaling, cropping, and blurring them so they look different for model training.  It helped a little.  Next, they got approval to embark on a cut-and-paste synthetic data project: they cropped the thing you’re detecting out of your 6,000 images and pasted it into other images.  That helped a little too, but the team seemed disappointed.  It turns out not all state-of-the-art techniques from self-driving cars transfer to agriculture.
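
For context, that kind of classical augmentation is typically only a few lines of code. The sketch below uses the albumentations library purely as an illustration of the rotate/scale/crop/blur tweaks described above, with made-up parameter values and file names; it is not necessarily the tooling any particular team would use.

```python
# Illustrative only: classical augmentation of existing field imagery.
# Transform choices and parameter values are hypothetical examples.
import albumentations as A
import cv2

augment = A.Compose([
    A.Rotate(limit=180, p=0.5),                   # random rotation
    A.RandomScale(scale_limit=0.2, p=0.5),        # random zoom in/out
    A.RandomCrop(height=512, width=512, p=1.0),   # crop to a model-input-sized patch
    A.GaussianBlur(blur_limit=(3, 7), p=0.3),     # mild blur
])

image = cv2.imread("field_image_0001.jpg")        # hypothetical input file
augmented = augment(image=image)["image"]
cv2.imwrite("field_image_0001_aug.jpg", augmented)
```

The catch, as the team discovered, is that every augmented image is still derived from the same 6,000 originals, so the underlying diversity does not change.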

 

Now What? Cue Synthetic Data

Image-based insights for agriculture are on the rise everywhere: from drone-based stand counts to UAV and sprayer-based weed detection. This revolution is driven by machine learning, and machine learning is driven by massive amounts of data.   

However, the majority of image-based AI projects in ag get hung up on a lack of sufficient data.  They show promise in year one, going from nothing to 70% accuracy, and there’s always a story about how it’s working and just needs a little more data to reach 95%.  But no matter the effort, there never seems to be enough data; the team always needs just a little more.

At Sentera we face this problem all the time.  All projects require a seemingly endless amount of data to achieve accuracy targets.  The challenge is exacerbated when customers shift requirements, which might change the resolution of the training imagery we need, the times of day we can collect, or the growth stages or planting patterns our models need to accommodate. 

Because it’s so expensive and time-consuming to produce training data sets in the first place, let alone to react to changes like these, we invested in a 3D synthetic imagery generation pipeline to supplement our real-world imagery.  Here’s how we used that pipeline to support development of our Aerial WeedScout weed detection product:

 

Basic Steps

First, we gather 3D models of things you find in a field: crops, weeds, residue, rocks, and soil. We maintain a library of common crops, soils, and other features, but for Aerial WeedScout we augmented our base library with more detailed models of certain weeds.

Next, we assemble these items into an organized 3D scene of a field, along with the sun as a light source and perhaps some clouds.
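
As a rough illustration of what that organized assembly involves, the snippet below computes where crop plants and weeds would be placed before any 3D models are instanced; the function, default values, and probabilities are hypothetical, not Sentera’s actual layout logic.

```python
# Illustrative sketch: deciding where to place 3D plant models in the scene.
# All names and values are hypothetical examples.
import random

def layout_field(rows=20, row_spacing_m=0.76, plants_per_m=8,
                 row_length_m=10.0, weeds_per_m2=0.5):
    """Return (x, y, kind) placements for crop plants and weeds."""
    placements = []
    for r in range(rows):
        y = r * row_spacing_m
        for p in range(int(row_length_m * plants_per_m)):
            x = p / plants_per_m + random.gauss(0, 0.01)   # small in-row jitter
            if random.random() < 0.03:                      # occasional skip
                continue
            placements.append((x, y, "crop"))
    # Scatter weeds uniformly over the field area.
    area = rows * row_spacing_m * row_length_m
    for _ in range(int(area * weeds_per_m2)):
        placements.append((random.uniform(0, row_length_m),
                           random.uniform(0, rows * row_spacing_m),
                           "weed"))
    return placements

plan = layout_field()
```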

After that, we simulate a camera flying above the field and take a simulated picture of the 3D scene.  In addition to the simulated RGB picture, we also export information about which pixels are crop, soil, or weed (and even which species).
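
To make that camera-and-labels step concrete, here is a minimal sketch using Blender’s Python API (bpy). It is purely illustrative, with made-up positions and settings, and is not a description of Sentera’s actual renderer; it assumes each plant object has already been given a class-specific pass_index elsewhere so the object-index pass can be mapped back to crop, weed, and soil pixels.

```python
# Illustrative Blender (bpy) sketch: place a simulated "drone" camera over
# the field, add a sun light, and render an RGB image with a per-pixel
# object-index pass for labeling.
import bpy

scene = bpy.context.scene

# Simulated drone camera, 30 m above the field, pointing straight down
# (Blender's default camera orientation looks along -Z).
bpy.ops.object.camera_add(location=(5.0, 7.5, 30.0), rotation=(0.0, 0.0, 0.0))
scene.camera = bpy.context.object

# Sun as the light source; angle and strength would be randomized per scene.
bpy.ops.object.light_add(type='SUN', location=(0.0, 0.0, 50.0))
bpy.context.object.data.energy = 4.0

# Enable the object-index pass so each rendered pixel can be traced back to
# a class id (in practice this pass is written to a file via the compositor).
bpy.context.view_layer.use_pass_object_index = True

scene.render.filepath = "/tmp/field_0001_rgb.png"   # hypothetical output path
bpy.ops.render.render(write_still=True)
```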

We run that process thousands of times with some randomization to produce different-looking fields, generating a big training set.  Then we feed the images and labels into training.
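
Stripped to its essentials, that loop might look like the sketch below. The three callables are hypothetical stand-ins for the parameter sampling, scene assembly, and rendering steps described above, and the file layout is just an example.

```python
# Simplified sketch of the generation loop. sample_params, build_scene, and
# render_rgb_and_labels are hypothetical placeholders supplied by the caller.
import json
import random

def generate_dataset(sample_params, build_scene, render_rgb_and_labels,
                     n_scenes=5000, out_dir="synthetic_dataset"):
    for i in range(n_scenes):
        random.seed(i)                       # reproducible per-scene randomization
        params = sample_params()             # randomized field and sky parameters (a dict)
        scene = build_scene(params)          # assemble the 3D field scene
        rgb, label_mask = render_rgb_and_labels(scene)   # simulated photo + per-pixel classes
        rgb.save(f"{out_dir}/img_{i:05d}.png")           # assumes PIL-style image objects
        label_mask.save(f"{out_dir}/lbl_{i:05d}.png")
        with open(f"{out_dir}/meta_{i:05d}.json", "w") as f:
            json.dump(params, f)             # keep the generating parameters for later analysis
```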

 

The Benefits of Synthetic Data

There are two major benefits to synthetic data for machine learning model training: diversity and labeling.  With synthetic data we can generate thousands of different kinds of images without flying thousands of flights at thousands of fields for millions of dollars.  Everything about the 3D field model can be adjusted, with parameters for (a minimal code sketch follows the list):

  • Crop size 
  • Tillage practice 
  • Soil type and moisture 
  • Weed pressure, species, and sizes  
  • Row spacing 
  • Seeding rate 
  • Emergence and vigor evenness 
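
Here is a minimal sketch of what such a parameter set could look like in code; the field names mirror the list above, but the types, ranges, and example categories are illustrative assumptions rather than Sentera’s actual schema.

```python
# Illustrative only: one possible way to encode the knobs listed above.
# Ranges and category lists are hypothetical examples.
import random
from dataclasses import dataclass

@dataclass
class FieldParams:
    crop_height_m: float         # crop size
    tillage: str                 # tillage practice
    soil_type: str
    soil_moisture: float         # 0 = bone dry, 1 = saturated
    weed_species: list           # which weeds appear
    weeds_per_m2: float          # weed pressure
    row_spacing_in: float
    seeds_per_acre: int
    emergence_uniformity: float  # 1 = perfectly even, lower = patchy

def sample_field_params() -> FieldParams:
    return FieldParams(
        crop_height_m=random.uniform(0.05, 0.6),
        tillage=random.choice(["conventional", "strip-till", "no-till"]),
        soil_type=random.choice(["clay", "silt loam", "sandy loam"]),
        soil_moisture=random.uniform(0.1, 0.9),
        weed_species=random.sample(["waterhemp", "palmer amaranth", "giant ragweed"], k=2),
        weeds_per_m2=random.uniform(0.0, 5.0),
        row_spacing_in=random.choice([15, 30, 36, 40]),
        seeds_per_acre=random.randrange(28000, 40001, 1000),
        emergence_uniformity=random.uniform(0.6, 1.0),
    )
```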

Anyone with experience in image-based insights for agriculture knows all about lighting conditions.  It’s not just sunny or cloudy: there’s blue-sky sun, thin clouds, thick clouds, and passing puffy clouds; there’s early morning imagery with low light, grainy images, and long shadows; high noon with very small shadows; evening imagery; and imagery taken on days with moisture in the air and a slight blur to everything, like taking a picture through fog.

With our 3D synthetic imagery pipeline, we don’t just vary the field; we also vary the sky. Machine learning accuracy is all about data volume and, more specifically (yet often forgotten when projects are scoped), data diversity.  Models perform best on imagery that is similar to the imagery used for training.  It’s no good to have 5,000 images when they’re all pretty similar and the first paying customer’s field is completely different.  A set of 5,000 images that are all quite different from each other, on the other hand, is every ML scientist’s dream.
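
Varying the sky is just more parameter randomization. Here is a hypothetical sketch of how per-scene lighting might be sampled; the crude sun-elevation model and category names are illustrative assumptions, not Sentera’s actual parameters.

```python
# Illustrative sketch of per-scene sky/lighting randomization.
# Values, the sun model, and category names are hypothetical.
import math
import random

def sample_lighting():
    hour = random.uniform(7.0, 19.0)   # local time of day for the simulated flight
    # Crude illustrative sun model: elevation peaks near solar noon and is
    # clamped to a low angle early and late in the day (long shadows).
    sun_elevation_deg = max(5.0, 75.0 * math.sin(math.pi * (hour - 6.0) / 12.0))
    return {
        "hour": hour,
        "sun_elevation_deg": sun_elevation_deg,
        "sky": random.choice(["clear", "thin_overcast", "thick_overcast", "puffy_clouds"]),
        "haze": random.uniform(0.0, 0.3),   # moisture in the air -> slight blur
    }
```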

The second key benefit is the generation of labels.  Usually, AI teams select a subset of the collected data for labeling.  They might even have some slick tools to identify the images that are most different from each other to make the best use of labeling efforts.  These images are typically outsourced to places where labor costs are low.  Though it’s not particularly expensive to label images, it does cost something, and there are internal costs to managing these labeling projects.

The quality of the labels must be checked by the team, followed by a few iterations of feedback and label quality improvement.  Eventually it gets good enough, but never perfect.  So there’s always pressure to label just enough and not overdo it.  With synthetic imagery, every image is labeled by software and the labels are perfect.  There are no quality checks, no iteration, no outsourced project management, and no debates over which images to label.  Every image gets a label, and every label is perfect.
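
To illustrate what “labeled by software” looks like in practice: once the renderer exports a per-pixel class mask, turning it into, say, weed bounding boxes for detection training is purely mechanical. The sketch below uses numpy and scipy with a hypothetical class-id convention.

```python
# Illustrative only: derive bounding-box labels from a rendered per-pixel
# class mask. Class ids (0 = soil, 1 = crop, 2 = weed) are a hypothetical
# convention for this example.
import numpy as np
from scipy import ndimage

def weed_boxes_from_mask(class_mask: np.ndarray, weed_id: int = 2):
    """Return (x_min, y_min, x_max, y_max) boxes for each connected weed blob."""
    weed_pixels = class_mask == weed_id
    labeled, n_blobs = ndimage.label(weed_pixels)        # connected components
    boxes = []
    for blob_slice in ndimage.find_objects(labeled):
        ys, xs = blob_slice
        boxes.append((xs.start, ys.start, xs.stop, ys.stop))
    return boxes

# Example: a tiny synthetic mask with two weed blobs.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:3, 1:3] = 2
mask[5:7, 4:8] = 2
print(weed_boxes_from_mask(mask))   # [(1, 1, 3, 3), (4, 5, 8, 7)]
```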

 

A Word on 2D Models

Sentera dabbled briefly in 2D models, but between our own experience, the well-documented challenges of 2D modeling for agriculture, and the lack of success stories, we quickly determined we needed something better.  So much about imaging plants from a drone has to do with light, shadows, leaf translucency, and viewing angle, and you can’t recreate those elements with 2D cut-and-paste synthetic data. Going to a full 3D pipeline is well worth the extra effort.

 

Sentera’s 3D Model Quality

Today, Sentera has photorealistic 3D models for multiple crops and many different species of weeds, with an emphasis on herbicide-resistant species.  We’re adding new crops and weeds all the time, while also augmenting our soil, residue, and other models as needed. While there are commercially available 3D models for common row crops like corn and soybean, they are casual attempts at modeling just the late vegetative stage, the kind that might look nice in a cartoon 3D flyover of a farm.  We have put painstaking effort into generating models that are up to the challenge of ML in ag tech.

Our models span from emergence through maturity, with different structure and texture at each stage.  A new soybean plant isn’t just a small six-leaf soybean plant; it starts as cotyledons.  You can’t just slap on a bunch of trifoliates to simulate growth; a pair of monofoliate leaves comes next.  Not all beans are happy and healthy, either: we have parameters to simulate drought stress, resulting in long stems with small, folded, lighter green leaves.  Corn has its own list of nuances.

Not every plant in the field is the same: we model emergence variation ranging from skips to gaps to delayed emergence, resulting in plants of different growth stages.  A field isn’t just crop: we also simulate residue, rocks, soil type, soil texture, and soil moisture.  And lastly, at least for this post, the way a field looks in a drone image is heavily influenced by lighting conditions: we simulate mid-day sun, intermittent clouds, thin overcast, thick overcast, early morning low light with long shadows, and evening twilight.  It’s these lighting conditions, coupled with the translucency of leaves, that really separate Sentera’s 3D-enabled simulated imagery from 2D cut-and-paste synthetic imagery projects.

Sentera’s simulation technology generates both drone-based imagery and ground-based imagery, such as for camera-enabled sprayer systems and individual row agricultural robots. Sentera’s innovative process of generating synthetic imagery of agricultural fields from 3D models for training machine learning models is patent pending. 

 

If the challenges of commercializing your R&D project sound a lot like what we described here, we are eager to work with you. Contact us today.