MDPI: Special Issue "Remote Sensing in Precision Agriculture"

Special Issue Information

Dear Colleagues,
Precision agriculture (PA), defined as a set of technologies that combine the acquisition, analysis, management, and delivery of information to support site-specific decisions, with the ultimate goal of optimizing production, will play an important role in addressing the grand challenges facing agriculture. At the heart of the evolving tools, technologies, and information management strategies found in precision agriculture is remote sensing. However, the technology for capturing, analyzing, storing, and delivering the remotely sensed observations associated with precision agriculture is changing rapidly, making it difficult to keep up with the ever-expanding volume of scientific research. It is time to take stock of the current state of the art in remote sensing for precision agriculture.
A total of 25 papers were published in this Special Issue, e.g.,
Tilly, N.; et al. Fusion of Plant Height and Vegetation Indices for the Estimation of Barley Biomass. Remote Sens. 2015, 7, 11449–11480.
Mesas-Carrascosa, F.-J.; et al. Assessing Optimal Flight Parameters for Generating Accurate Multispectral Orthomosaicks by UAV to Support Site-Specific Crop Management. Remote Sens. 2015, 7, 12793–12814.
Gonzalez-Dugo, V.; et al. Using High-Resolution Hyperspectral and Thermal Airborne Imagery to Assess Physiological Condition in the Context of Wheat Phenotyping. Remote Sens. 2015, 7, 13586–13605.
Dr. Mutlu Ozdogan
Guest Editor


How Remote Sensing Powers Precision Agriculture

Source: https://agfundernews.com/remote-sensing-powers-precision-agriculture.html by Joseph Byrum


Growers today can take advantage of the latest in technology to help them maintain an edge in an increasingly competitive marketplace. That means securing the benefits of the information revolution to deliver advances in crop yield and farm efficiency from precision growing techniques enabled by data – lots of data.
Those benefits, of course, are only going to be as good as the underlying data provided by sensors, the devices that obtain information about an area — a field — often from a distance in what’s called remote sensing.




This information is collected, compiled and turned into actionable intelligence through the use of advanced data analytics. Think of the navigation app on a smartphone. At heart, it is a data analytics tool that runs through all of the possible routes between your current location and your destination, estimating how long one road will take compared to the next. Yet these apps are only as good as the traffic data that let the system know one road is closed and another is clogged with traffic. The job of sensors is to provide that data.

Origins of Remote Sensing

Remote sensing may be a relatively new thing to agriculture, but the concept has been around for quite some time. Back in the mid-19th century, man took to the skies in hot air balloons with bulky, primitive cameras they used to survey the land below and create highly accurate maps. World War I commanders soon came to rely on photographs taken from biplanes and blimps to stay informed about the enemy’s battlefield movements and to plan artillery strikes. These commanders understood that, if you can’t see it, you can’t manage it.
Over the ensuing decades, technology advanced rapidly, but this principle remained the same. Cameras became far less cumbersome, and aircraft became far more capable. The “bird’s eye view” provided by these systems became exponentially more effective in managing tasks as diverse as construction, mining, and archeology. In the 1960s, surveying was boosted into orbit as satellites gave us, for the first time, a look at the entire planet at a single glance, which provided insight into managing big picture issues from global temperatures to land use patterns.

The Potential for Remote Sensing in Agriculture

Modern sensing instruments have advanced far beyond simple photographic film. Today’s devices measure light, radiation, and heat by capturing different wavelengths of the electromagnetic spectrum. Ongoing electronics miniaturization and the popularity of commercial drones have made this equipment increasingly affordable, but the usefulness of these devices in agriculture took the greatest leap with the advent of GIS (geographical information systems), the technology that allows the organization and analysis of data and patterns related to specific locations on a map — such as a field. This made it far easier for combined systems to deliver information — actionable intelligence — related to a grower’s specific needs.
Remote sensing devices take measurements throughout a field over time so that the grower can analyze conditions based on the data and take action that will have a positive influence on the harvest outcome. For instance, sensors can serve as an early warning system allowing a grower to intervene, early on, to counter disease before it has had a chance to spread widely. They can also perform a simple plant count, evaluate plant health, estimate yield, assess crop loss, manage irrigation, detect weeds, identify crop stress and map a field.
A variety of sensors is available to perform one or more of these tasks. Which one will a grower need? It all depends. A small-scale vegetable farmer will have different needs than a commercial grain farmer managing multiple fields.

Sensor Platforms

Sensors can be grouped according to their enabling technology — ground sensors, aerial sensors and satellite sensors. Ground sensors are handheld, mounted on tractors and combines, or free-standing in a field. Common uses include evaluating nutrient levels to guide more precise chemical and nutrient application, measuring weather, and measuring the moisture content of the soil.
Aerial sensors have become far more affordable with the advent of drone technology that places the bird’s-eye view of a field within reach of most farmers. They are also attached to airplanes, another relatively cheap option. The systems are capable of capturing high-resolution images and data slowly enough, at low altitude, to enable thorough analysis. Typical uses include plant population count, weed detection, yield estimates, measuring chlorophyll content and evaluating soil salinity. The downside of aerial platforms is that wind and cloud cover can limit their use.
Satellite sensors provide coverage of vast land areas and are especially useful for monitoring crop status, calculating losses from severe weather events and conducting yield assessments. Initially, such systems were tailored to the needs of the military and government, not agriculture. So the main downside, aside from cost, was that these systems were tasked in advance — usually months — to look at a specific area at a certain time. Worst of all, cloud cover could ruin that expensive purchase. Now many governments have opened up satellite imaging databases to the public, providing an important and accessible resource for understanding crop conditions.

The Sensors Themselves

As with the choice of platform, appropriate sensor types will vary from farm to farm. A grower must ask what he intends to measure, and why, and which sensor type is best suited to the crop management and planning task at hand. In the past, there wasn't a choice: the only sensor was camera film, which captured a narrow slice of the electromagnetic spectrum, visible light. Now sensors go far beyond that, measuring short-wavelength gamma radiation at one end and low-frequency radio waves at the other.
Farmers find the most useful information closer to the visible spectrum, as color can be used to measure a plant’s chlorophyll levels and provide insight into a plant’s health and growth status. Simple red-green-blue sensors can provide color information, but more sophisticated data are available by peering into the near-infrared and short-wave infrared spectral bands.
The way leaves reflect light in the infrared spectrum changes if a plant’s cell structure is damaged, or if its water content is abnormal. The most consistent mathematical model to express this is called the normalized difference vegetation index, or NDVI. With near-infrared and red-edge (NIR and RE) sensors, NDVI can identify stressed crops much more precisely, giving the farmer more time to take corrective action.
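NDVI itself is a simple band ratio, (NIR − Red) / (NIR + Red). A minimal Python sketch is below; the reflectance values in the example patch are made up for illustration:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Takes per-pixel reflectance arrays for the near-infrared and red bands
    and returns values in [-1, 1]; dense healthy vegetation scores high,
    bare soil and stressed crops score lower.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Guard against division by zero over water or deep-shadow pixels.
    return np.where(denom == 0.0, 0.0,
                    (nir - red) / np.where(denom == 0.0, 1.0, denom))

# Illustrative 2x2 patch of band reflectances (values are made up):
nir = np.array([[0.50, 0.45],
                [0.30, 0.05]])
red = np.array([[0.08, 0.10],
                [0.20, 0.04]])
print(np.round(ndvi(nir, red), 2))  # high values = vigorous canopy
```

Running this over time-stamped imagery of the same field is what lets a grower spot a patch whose NDVI is sliding downward before the stress is visible to the eye.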
The same sensors can also be used to identify a soil salinity problem over time, which can be a sign of poor irrigation that threatens crop yields. Salty soil has a higher reflectance than normal soil, a difference that shows up in the infrared spectrum and on thermal cameras.
Thermal cameras peer into long-wavelength infrared bands and measure heat, often represented as colors. So a farmer, shortly after irrigating his field, can send a drone equipped with a LWIR sensor and readily see, from the colors on the map, areas in the field that aren’t receiving enough water. This helps get the irrigation right from the beginning, preventing yield loss.
Radar and microwave sensors on satellites cut through weather conditions and provide powerful agricultural monitoring data across entire continents. Some sensors are designed to capture broad swaths of the electromagnetic spectrum, while others are tailored to measuring narrow slices that are relevant to a specific type of analysis. Which one is right for the grower is going to depend on what the grower intends to accomplish.
Outside the electromagnetic spectrum, there are the ubiquitous GPS sensors that offer precise positional data that make things like self-driving tractors possible. Various types of weather stations allow the logging of environmental conditions, such as rain, temperature, sunlight and other factors that provide insight into crop performance.

What’s the Right Sensing Option?

The more sophisticated the sensor, the higher the cost. Farmers must always weigh the potential for increased yield against the capital investment cost of each sensing platform.
In many ways, the problem is that a farmer interested in remote sensing has too many options. The most effective use of remote sensing would be to have a collection of sensors measuring multiple aspects of the plant growing process, but so far technology suppliers are not making this easy. They tend to offer proprietary solutions that don’t play well with sensors from other vendors. That leaves farmers in a Wild West situation, with no data standards and limited interoperability. Data sharing is inconvenient, if not impossible.
As is always the case, the market can sense these needs, and some startup companies have formed to offer integrated hardware and software solutions. It will just take time for these packages to mature and become viable, high-value propositions for most growers. Likewise, organizations like the Open Data Institute are working to promote open standards that will promote agricultural innovation.
Sensing technologies are evolving rapidly, which means it’s often up to farmers to use trial and error to determine what off-the-shelf products can deliver a “quick fix,” and which can be scientifically validated to contribute to increasing yield and profits over time.
==========
Editor’s Note: Joseph Byrum is senior R&D and strategic marketing executive in Life Sciences – Global Product Development, Innovation, and Delivery at Syngenta. He is a regular contributor to AgFunderNews.

RAND Corporation: U.S. Commercial Remote Sensing Satellite Industry



American firms have begun to operate their own imaging satellite systems, aiming to become an important part of the U.S. commercial remote sensing industry. To succeed over the long run, these new U.S. commercial remote sensing satellite firms need a combination of reliable technologies, government policies that encourage U.S. industry competitiveness, a strong international presence, and sound business plans to ensure their competitiveness in both the domestic and international marketplaces. The greatest risks for these firms come from the challenge of transforming themselves from imagery data providers to strong competitors as information age companies; the need to master the technical risks of building and operating sophisticated imaging satellite systems; and the requirement to operate effectively in a complex international business environment. In addition, the government's policymaking process has yet to achieve the degree of predictability, timeliness, and transparency that the firms need if they are to operate effectively in a highly competitive and rapidly changing global marketplace. The authors conclude with six recommendations that the U.S. Department of Commerce should adopt to best fulfill its responsibilities for promoting the U.S. commercial remote sensing industry and for encouraging the competitiveness of new private imaging satellite firms. Source: https://www.rand.org/pubs/monograph_reports/MR1469.html


Precision Farming: Key Technologies & Concepts

Source: http://cema-agri.org/page/precision-farming-key-technologies-concepts



• High precision positioning systems (like GPS) are the key technology for achieving accuracy when driving in the field, providing navigation and positioning capability anywhere on earth, anytime, under all conditions. The systems record the position of the field using geographic coordinates (latitude and longitude) and locate and navigate agricultural vehicles within a field with 2 cm accuracy.
• Automated steering systems: enable the vehicle to take over specific driving tasks like auto-steering, headland turning, following field edges and managing the overlapping of rows. These technologies reduce human error and are the key to effective site management:
  • Assisted steering systems show drivers the way to follow in the field with the help of satellite navigation systems such as GPS. This allows more accurate driving but the farmer still needs to steer the wheel.
  • Automated steering systems take full control of the steering wheel, allowing the driver to take their hands off the wheel during trips down the row and keep an eye on the planter, sprayer or other equipment.
  • Intelligent guidance systems provide different steering patterns (guidance patterns) depending on the shape of the field and can be used in combination with above systems. 
Caption: Seeder using a geomapping system
• Geomapping: used to produce maps of soil type, nutrient levels, etc., in layers and to assign that information to a particular field location.
• Sensors and remote sensing: collect data from a distance to evaluate soil and crop health (moisture, nutrients, compaction, crop diseases). Sensors can be mounted on moving machines.
• Integrated electronic communications: between components in a system, for example between tractor and farm office, tractor and dealer, or tractor and sprayer.
• Variable rate technology (VRT): the ability to adapt parameters on a machine to apply, for instance, seed or fertiliser according to the exact variations in plant growth, or in soil nutrients and type.
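The VRT idea can be sketched in a few lines: turn a per-cell soil measurement map into a per-cell application rate. The target rate, grid, and units below are hypothetical, not taken from any vendor's system:

```python
# Hypothetical variable-rate sketch: apply only the nitrogen each grid cell
# is missing relative to a target, never a negative rate. Numbers are
# illustrative (kg N per hectare), not agronomic advice.
TARGET_N = 150.0

def prescription_rate(soil_n, target=TARGET_N):
    """Fertiliser rate for one cell: the shortfall versus the target."""
    return max(0.0, target - soil_n)

# Measured soil nitrogen per grid cell (e.g., from ground sensors):
soil_map = [
    [40.0, 90.0],
    [150.0, 170.0],
]

rx_map = [[prescription_rate(n) for n in row] for row in soil_map]
print(rx_map)  # → [[110.0, 60.0], [0.0, 0.0]]
```

Cells already at or above the target receive nothing, which is the whole point of variable-rate application: input goes only where the field data say it is needed.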

Precision Agriculture: An International Journal on Advances in Precision Agriculture 

Remote Sensing Trends in Agriculture (.pdf)


Airborne Sensors and Satellite Missions

Miscellaneous Commercial Remote Sensing Resources:
List of Airborne Sensors
List of Satellite Missions
DigitalGlobe  API
Landsat LookViewer
SPOT
Planet Labs (includes SkyBox/Terra Bella, BlackBridge) API
Geosys
Deimos Imaging
Google Earth API
ALOS PALSAR (Advanced Land Observing Satellite, Phased Array type L-band SAR)
NOAA AVHRR
e-Geos

China National Space Administration (CNSA)
European Space Agency (ESA)
Japan Aerospace Exploration Agency (JAXA)
National Aeronautics and Space Administration (NASA)
ROSCOSMOS (RFSA)
UrtheCast

ImageSat
Sentinel Hub
LandInfo

US National Geospatial-Intelligence Agency, Commercial GEOINT Activity
USGS Earth Resources Observation and Science (EROS) Center
US Department of Commerce, Office of Remote Sensing
US Geological Survey, Commercial Remote Sensing Space Policy
University of Illinois Laboratory for Agricultural Remote Sensing
USDA EPIC & APEX Models

ESRI ArcGIS API
QGIS Python Console






The Holographic Principle: Why Deep Learning Works

Source: Intuition Machine


Carlos E. Perez

What I want to talk to you about today is the Holographic Principle and how it provides an explanation for Deep Learning. The Holographic Principle is a theory (see: Thin Sheet of Reality) that explains how quantum theory and gravity interact to construct the reality that we are in. The motivation for this theory comes from the paradox that Hawking created when he theorized that black holes would emanate energy. The fundamental principle violated by Hawking's theory was that information cannot be destroyed. As a consequence of this paradox, through several decades of research and experimentation, physicists have brought forth a unified theory of the universe that is based on information-theoretic principles. The entire universe is a projection of a hologram. It is entirely fascinating that the arrow of time and the existence of gravity are but mere manifestations of information entanglement!
Now, you may be mistaken to think that the Holographic Principle is just some fringe idea from physics. It appears at first read to be quite a wild idea! Apparently though, the theory rests on very solid experimental and theoretical underpinnings. Let's just say that Stephen Hawking, who first remarked that it was 'rubbish', has finally agreed to its conclusions. So at this time, it should be relatively safe to start deriving some additional theories from this principle.
One surprising consequence of this theory is that the hologram is able to capture the dynamics of a universe that has on the order of d^N degrees of freedom (where d is the dimension and N is the number of particles). One would think that the hologram would be of equal size, but it is not: it is a surface area, proportional only to N². This begs the question: how is a structure of order N² able to capture the dynamics of a system of order d^N?
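To make the gap concrete, here is the comparison for illustrative values of d and N (the numbers are mine, chosen only to show the scale):

```python
# Bulk degrees of freedom grow exponentially; the holographic surface
# description grows only polynomially. Values of d and N are illustrative.
d, N = 2, 300             # dimension per particle, number of particles

bulk_dof = d ** N         # ~2.0e90 configurations in the bulk
boundary_params = N ** 2  # 90,000 parameters on the surface

print(boundary_params)        # 90000
print(bulk_dof > 10 ** 90)    # True: astronomically larger
```

A description of order N² standing in for a system of order d^N is exactly the kind of compression the rest of the article is about.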
In the meantime, Deep Learning (DL) coincidentally has a similar mapping problem. Researchers don't know how it is possible for DL to perform so impressively well considering that the problem domain's search space has an exceedingly high dimension. So Max Tegmark and Henry Lin of Harvard have volunteered their own explanation, "Why does deep and cheap learning work so well?" In their paper they argue the following:
… although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through “cheap learning” with exponentially fewer parameters than generic ones, because they have simplifying properties tracing back to the laws of physics. The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings.
The authors bring up several promising ideas like the "no-flattening theorems" as well as the use of information theory and the renormalization group as explanations for their conjecture. I, however, was not sufficiently convinced by their argument. The argument assumes that all problem data follow 'natural laws', but as we all know, DL can be effective in unnatural domains: identifying cars, driving, creating music and playing Go are trivial examples of clearly unnatural domains. To be fair, I think that they were definitely on to something, and that something I discuss in more detail below.
In this article, I make a bold proposal with an argument that is somewhat analogous to what Tegmark and Lin proposed: Deep Learning works so well because of physics. However, the genesis of my idea is that DL works because it leverages the same computational mechanisms underlying the Holographic Principle; specifically, the capability of representing an extremely high-dimensional space (i.e., d^N) with a paltry number of parameters, of the order N².
The computational mechanism underpinning the Holographic Principle can be most easily depicted through the use of Tensor Networks (note: these are somewhat different from TensorFlow or the Neural Tensor Network). Tensor network notation is as follows:




[Figure: tensor network notation. Source: http://inspirehep.net/record/1082123/]

The value of tensor networks in physics is that they are used to drastically reduce the state space into a network that focuses only on the relevant physics. The primary motivation behind the use of Tensor Networks is to reduce computation. A tensor network is a way to perform computation in a high dimensional space by decomposing a large tensor into smaller more manageable parts. The computation can then be performed with smaller parts at a time. By optimizing each part one effectively optimizes the full larger tensor.
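The splitting step can be sketched in NumPy. This is the generic SVD split behind matrix-product-style tensor networks, not the article's specific MERA; sizes and the bond dimension chi are illustrative:

```python
import numpy as np

# Split one index off a 4-index tensor via SVD, the basic move used to
# decompose a large tensor into a chain of smaller ones.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2, 2, 2))   # 2^4 = 16 entries

M = T.reshape(2, 8)                     # group indices: (first) x (rest)
U, s, Vh = np.linalg.svd(M, full_matrices=False)

chi = 2                            # bond dimension: singular values kept
A = U[:, :chi]                     # small left tensor (2 x chi)
R = np.diag(s[:chi]) @ Vh[:chi]    # remainder (chi x 8), to be split again
# Repeating the split along the chain leaves tensors whose total size grows
# polynomially in the number of indices instead of exponentially.

# Here chi equals the full rank, so the two pieces reproduce T exactly:
print(np.allclose((A @ R).reshape(2, 2, 2, 2), T))  # True
```

Truncating chi below the full rank is where the compression, and the focus on "only the relevant physics", comes from.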
In the context of the holographic principle, the MERA tensor network is used; it is depicted as follows:




[Figure: MERA tensor network. Source: http://inspirehep.net/record/1082123/]

In the figure above, the circles depict "disentanglers" and the triangles "isometries". One can look at the nodes from the perspective of a mapping: the circles map matrices to other matrices, and the triangles take a matrix and map it to a vector. The key here is to realize that the 'compression' capability arises from the hierarchy and the entanglement. As a matter of fact, this network embodies the mutual information chain rule:




[Figure: mutual information chain rule. Source: https://inspirehep.net/record/1372114/]

In other words, as you move from the bottom to the top of the network, the information entanglement increases.
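For reference, the mutual information chain rule mentioned above is the standard information-theoretic identity (written here in my notation):

```latex
I(X; Y_1, \dots, Y_n) = \sum_{i=1}^{n} I\left(X; Y_i \mid Y_1, \dots, Y_{i-1}\right)
```

Each layer's contribution is conditioned on the layers below it, which matches the picture of entanglement accumulating as one moves up the network.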
I’ve written earlier about the similarities of Deep Learning with 'Holographic Memories'; here, however, I'm going to go one step further. Deep Learning networks are also tensor networks. They are not as uniform as a MERA network, but they exhibit similar entanglement: as information flows from input to output in either a fully connected network or a convolutional network, it becomes similarly entangled.
The use of tensor networks has been studied recently by several researchers. Miles Stoudenmire wrote a blog post, "Tensor Networks: Putting Quantum Wavefunctions into Machine Learning", where he describes his method applied to MNIST and CIFAR-10. He writes about one key idea of this approach:
The key is dimensionality. Problems which are difficult to solve in low dimensional spaces become easier when “lifted” into a higher dimensional space. Think how much easier your day would be if you could move freely in the extra dimension we call time. Data points hopelessly intertwined in their native, low-dimensional form can become linearly separable when given the extra breathing room of more dimensions.
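The lifting idea Stoudenmire describes can be seen with the classic XOR toy problem; the sketch below (labels, the added feature, and the separating plane are my illustrative choices) shows points that no line can separate in 2-D becoming separable by a plane in 3-D:

```python
import numpy as np

# XOR-like points are not linearly separable in 2-D, but adding a third
# feature x1*x2 "lifts" them into 3-D, where a single plane separates them.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels

lifted = np.column_stack([X, X[:, 0] * X[:, 1]])  # append the product feature

# In the lifted space the plane  x1 + x2 - 2*x1x2 = 0.5  separates the classes:
w, b = np.array([1.0, 1.0, -2.0]), -0.5
pred = (lifted @ w + b > 0).astype(int)
print(pred)  # → [0 1 1 0], matching y
```

Kernel methods exploit exactly this effect implicitly; tensor-network approaches make the high-dimensional lifted space explicit but keep it computable by factorizing it.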
Amnon Shashua et al. have also done work in this space. Their latest paper (Oct 2016), "Tensorial Mixture Models", proposes a novel kind of convolutional network.
In conclusion, the Holographic Principle, although driven by quantum computation, reveals to us the existence of a universal computational mechanism that is capable of representing high dimensional problems using a relatively low number of model parameters. My conjecture here is that this is the same mechanism that permits Deep Learning to perform surprisingly well.
Most explanations of Deep Learning revolve around the 3 Ilities that I described here: expressibility, trainability and generalization. There is definite consensus on "expressibility": a hierarchical network requires fewer parameters than a shallow network. The open questions, however, are trainability and generalization. The big difficulty in explaining these two is that they don't fit with any conventional machine learning notion. Trainability should be impossible in a high-dimensional non-convex space, yet simple SGD seems to work exceedingly well. Generalization does not make any sense without a continuous manifold, yet GANs show quite impressive generalization:




[Figure: StackGAN two-stage text-to-image generation. Credit: https://arxiv.org/pdf/1612.03242v1.pdf]

The figure above shows the StackGAN generating output images from text descriptions in two stages. The StackGAN has two generative networks, and it is difficult to comprehend how the second generator captures only image refinements. There are plenty of unexplained phenomena like this. The Holographic Principle provides a base camp for a plausible explanation.
The current mainstream intuition of why Deep Learning works so well is that there exists a very thin manifold in high-dimensional space that can represent the natural phenomena the network is trained on. Learning proceeds through the discovery of this 'thin manifold'. This intuition, however, breaks down in light of recent experimental data (see: "Rethinking Generalization"). The authors of the "Rethinking Generalization" paper write:
Even optimization on random labels remains easy. In fact, training time increases only by a small constant factor compared with training on the true labels.
Both the Tegmark argument and the 'thin manifold' argument cannot possibly work with random data. This leads to the hypothesis that there should exist an entirely different mechanism that reduces the degrees of freedom (or problem dimension) so that computation is feasible. This compression mechanism can be found in the structure of the DL network, just as it exists in the MERA tensor network.
Conventional machine learning thinking holds that it is the intrinsic manifold structure of the data that needs to be discovered via optimization. In contrast, my conjecture claims that the data matter less: rather, it is the topology of the DL network that captures the essence of the data. That is, even if the bottom layers have random initializations, it is likely that the network will work well enough, subject to a learned mapping at the top layer.
In fact, I would make an even bigger leap: in our quest for unsupervised learning, we may have overlooked the fact that a neural network has already created its own representation of the data at the onset of random initialization. It is just our inability to interpret that representation that is problematic. A random representation that preserves invariances (i.e., locality, symmetry, etc.) may be just as good as any other representation. Yann LeCun's cake might already be present; it may be just the icing and the cherry that are needed to explain what the cake represents.
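The conjecture suggests a simple experiment: freeze a randomly initialized hidden layer and learn only the top linear mapping. A toy sketch under those assumptions (the task, layer width, and seed are my illustrative choices, in the style of random-features methods):

```python
import numpy as np

# Frozen random representation + learned top layer.
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 200)[:, None]
y = np.sin(x).ravel()                 # toy regression target

W = rng.standard_normal((1, 100))     # random weights, never trained
b = rng.standard_normal(100)
H = np.tanh(x @ W + b)                # the network's "own" representation

# Train only the top layer, by least squares.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
max_err = float(np.max(np.abs(H @ coef - y)))
print(max_err)  # small: the random representation already carries the signal
```

That a purely random hidden layer plus a learned linear readout fits the target well is at least consistent with the claim that much of the work is done by the network's topology rather than by optimizing the representation itself.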
Note to reader: In 1991, psychologist Karl Pribram, together with physicist David Bohm, speculated about Holonomic Brain Theory. I don't know the concrete relationship between the brain and deep learning, so I can't draw the same conclusion that they did in 1991.
References
Advances on Tensor Network Theory: Symmetries, Fermions, Entanglement, and Holography. https://arxiv.org/pdf/1407.6552v2.pdf

Machine Learning Algorithms


Predictive Analytics 101

by Ravi Kalakota

https://practicalanalytics.wordpress.com/predictive-analytics-101/

Data Mining

Source: Dr. Saed Sayad http://www.saedsayad.com/

An Introduction to Data Mining