Poor irrigation scheduling is one of the biggest drivers of wasted water, and it can also cause environmental harm. Traditional “scientific” methods often fall short in the real world: evaporation scheduling relies on crop factors that don’t match field conditions, while soil moisture scheduling struggles because moisture is uneven and constantly changing through the soil profile. A better approach is adaptive scheduling—a closed-loop, self-learning method that mimics how skilled growers refine decisions over time, then turns that refinement into simple software guidance.


Why scheduling matters more than most people realise

Poor irrigation scheduling is widely recognised as a major cause of excess water use. Beyond cost and water waste, it can contribute to environmental damage through unnecessary deep drainage, nutrient leaching, and the impacts of rising groundwater. The frustrating reality is that many growers do not feel the available “scientific” methods give them the practical answers they need, so they default to experience, observation, and instinct.

The two “standard” scientific approaches—and why they disappoint

Over time, two dominant schools of thought have guided irrigation scheduling. The first is evaporation-based scheduling using crop factors, originally developed through laboratory experiments. The second is soil moisture monitoring, built on the promise that sensors can show when and how much to irrigate. In practice, neither method consistently solves the scheduling problem in a way growers trust or find dependable in everyday field conditions.

Limitations of evaporation scheduling in real paddocks

Evaporation scheduling assumes that if we know local evaporation and apply a suitable crop factor, we can estimate plant water use accurately. The problem is that crop factors measured under controlled laboratory conditions often bear little relationship to crop factors in the field. Real farms differ in ways that matter: microclimates, wind exposure, canopy structure, plant density, irrigation method, soil variability, and management practices all change water use. Even within the same crop and variety, a crop factor that works in one location can be misleading in another.
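To make the mismatch concrete, evaporation scheduling predicts crop water use as reference evaporation multiplied by a crop factor. A minimal sketch of how a laboratory-derived factor can systematically over-water a site (all Kc and evaporation values below are illustrative assumptions, not agronomic advice):

```python
# Evaporation-based scheduling: predicted daily crop water use (mm)
# is reference evaporation multiplied by a crop factor (Kc).
# The Kc and evaporation figures here are purely illustrative.

def predicted_water_use(reference_evap_mm: float, crop_factor: float) -> float:
    """Estimate daily crop water use in mm from evaporation and a crop factor."""
    return reference_evap_mm * crop_factor

lab_kc = 0.85      # crop factor measured under controlled conditions (assumed)
field_kc = 0.70    # what the crop actually uses at this site (assumed)
evap = 6.0         # mm/day reference evaporation (assumed)

scheduled = predicted_water_use(evap, lab_kc)   # water the grower applies
actual = predicted_water_use(evap, field_kc)    # water the crop consumes

# The per-day mismatch accumulates over a 10-day irrigation cycle.
excess = (scheduled - actual) * 10
print(f"scheduled {scheduled:.1f} mm/day, consumed {actual:.1f} mm/day, "
      f"10-day excess {excess:.1f} mm")
```

With these assumed numbers the grower applies 5.1 mm/day while the crop uses only 4.2 mm/day, so roughly 9 mm of water goes to deep drainage every cycle, which is exactly the kind of chronic error the article describes.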

Limitations of soil moisture monitoring as a stand-alone solution

Soil moisture monitoring sounds definitive—measure moisture, then irrigate accordingly. But moisture sensors only measure a relatively small volume of soil. Water does not “magically” distribute itself to create a uniform moisture content. Moisture varies widely across the root zone and changes dynamically as plants extract water, usually drawing first from the upper zone and then progressively deeper as the soil dries. No irrigation system applies water perfectly uniformly, and even if it did, plant uptake is not uniform. The result is a complex, three-dimensional, constantly moving moisture pattern that cannot be monitored with an economically reasonable number of sensors.

The real-world response: growers rely on observation and judgement

Growers may not describe these limitations in technical terms, but they understand the practical outcome: neither method reliably answers the questions that matter day-to-day—how much to apply, when to apply it, and whether the last decision was right. Many growers may use evaporation data, soil moisture sensors, or both, but primarily as a guide. Their final decisions commonly depend on experience, plant observation, local knowledge, and continuous adjustment through the season.

A gap between science and practice—and why it persists

There is a well-known gap between the scientific community and the practical grower community. It is often argued that poor scheduling is mainly an education problem or a reluctance to adopt science-based methods. At the same time, there can be strong resistance to the idea that the traditional scientific approach itself is incomplete. The more constructive takeaway is that the gap signals a need for a different kind of technology: a system that is practical for growers, but still grounded in sound scientific principles.

What the “smart growers” are already doing (without calling it science)

If you look closely at how skilled irrigators operate, you can see a sophisticated process. They observe weather, plant condition, how much water they applied, and the crop’s response. Then they refine the next irrigation decision. This is a natural self-learning or adaptive approach—similar to how humans learn complex tasks like walking. It may not look like classic science, but it is a feedback-based control process that people are remarkably good at performing intuitively.

Adaptive scheduling: closed-loop feedback that copies good human judgement

Adaptive scheduling aims to emulate this natural learning process using closed-loop feedback technology—often described as self-learning software. The core idea is simple: instead of assuming crop factors are correct, the system continuously discovers a genuine “field crop factor” by observing outcomes and correcting itself. Once the true water consumption is known, scheduling becomes almost trivial because the key variable is no longer a guess.

How the predictor–corrector method works in practice

The method begins with an estimate of a field crop factor, typically based on what has been used historically at that site. This estimated factor is used in conventional evaporation scheduling software to predict how much water to apply. Soil moisture is then read before and after each irrigation. If the crop factor is correct, the soil moisture status from one irrigation to the next should remain consistent (allowing for expected short-term dynamics and the chosen measurement depth). If it is not correct, soil moisture will trend upward or downward over time, creating an error signal. That error is then used mathematically to adjust the crop factor for future irrigations. This is a predictor–corrector scheme: predict, measure the error, then correct the next prediction.
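The predict-measure-correct cycle above can be sketched as a simple feedback loop. This is a hedged illustration, not the article's actual software: the true crop factor, starting estimate, correction gain, and evaporation total are all invented numbers, and the soil moisture drift between irrigations stands in for the sensor-derived error signal.

```python
# Predictor-corrector sketch of adaptive scheduling.
# All numbers (true_kc, kc_estimate, gain, evap_sum) are illustrative
# assumptions; a real system derives drift_mm from soil moisture readings.

true_kc = 0.70        # actual field crop factor (unknown to the scheduler)
kc_estimate = 1.00    # starting estimate, e.g. the historical site value
gain = 0.5            # proportional gain applied to the error signal
evap_sum = 30.0       # mm of reference evaporation between irrigations

for cycle in range(8):
    # Predict: apply water according to the current crop factor estimate.
    applied_mm = kc_estimate * evap_sum
    # The crop actually consumes water according to the true factor.
    consumed_mm = true_kc * evap_sum
    # Measure: drift in stored soil moisture is the error signal.
    # Positive drift means over-watering, so the estimate is too high.
    drift_mm = applied_mm - consumed_mm
    # Correct: adjust the crop factor used for the next irrigation.
    kc_estimate -= gain * drift_mm / evap_sum
    print(f"cycle {cycle}: applied {applied_mm:5.1f} mm, "
          f"drift {drift_mm:+5.1f} mm, next Kc {kc_estimate:.3f}")
```

With a gain of 0.5 the estimation error halves every cycle, so the estimate converges to the true field crop factor within a handful of irrigations; a production system would choose the gain (and filter the noisy moisture signal) more carefully, but the closed-loop structure is the same.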

Why this approach is practical for growers

Although the mathematics can be sophisticated, it is largely invisible to the user. The grower sees the practical output: an adjusted, site-specific field crop factor that becomes more accurate over a number of irrigations. As the crop grows and conditions change through the season, the field crop factor is automatically corrected again. The system does not require growers to believe lab crop factors match their farm, and it does not require an unrealistic number of sensors to map a complex root zone. Instead, it uses feedback—what happened after irrigation—to steadily converge on what is true for that site.

What this means for water savings and crop outcomes

Adaptive scheduling targets the real cause of inconsistent scheduling outcomes: wrong assumptions about water use. By learning actual field consumption and correcting continuously, the system can reduce chronic over-watering and avoid avoidable stress from under-watering. In practical terms, it helps growers apply water with greater confidence, respond better to changing conditions, and move from “gut feel only” to “gut feel supported by feedback learning.” The objective is not to replace experienced judgement, but to give it a stronger and more reliable foundation.

Download “Adaptive Irrigation Scheduling: An Easy Way to Stop Wasting Water” (full PDF)
