An Architecture for Early Wildfire Detection and Spread Estimation Using Unmanned Aerial Vehicles, Base Stations, and Space Assets

Article Open Access


Department of Aerospace Science and Technology, National and Kapodistrian University of Athens, 10679 Athens, Greece
Authors to whom correspondence should be addressed.
Drones and Autonomous Vehicles 2024, 1 (3), 10006

Received: 19 April 2024 Accepted: 31 May 2024 Published: 05 June 2024


© 2024 by the authors; licensee SCIEPublish, SCISCAN co. Ltd. This article is an open access article distributed under the Creative Commons Attribution (CC BY) license.

ABSTRACT: This paper presents an autonomous and scalable monitoring system for early detection and spread estimation of wildfires by leveraging low-cost UAVs, satellite data, and ground sensors. An array of ground sensors, such as fixed towers equipped with infrared cameras and IoT sensors strategically placed in areas with a high probability of wildfire, will work in tandem with the space domain as well as the air domain to generate an accurate and comprehensive flow of information. This system-of-systems approach aims to take advantage of the key benefits across all systems while ensuring seamless cooperation. With scalability and effectiveness in mind, the system is designed to work with low-cost COTS UAVs that leverage infrared and RGB sensors and will act as the primary situational awareness generator on demand. AI task allocation algorithms and swarming-oriented area coverage methods are at the heart of the system, effectively managing the aerial assets. High-level mission planning takes place in the GCS, where information from all sensors is gathered and compiled into a user-understandable schema. In addition, the GCS issues warnings for events such as the detection of fire and hardware failures, and provides a live video feed and lower-level control of the swarm and IoT sensors when requested. By performing intelligent sensor fusion, this solution will offer unparalleled reaction times to wildfires while also being resilient and reconfigurable should any hardware failures arise, by incorporating state-of-the-art swarming capabilities.
Keywords: SoS Architecture; Thermal Imaging; LEO Satellite; UAV; AI; Wildfire 

1. Introduction

The world is currently confronting an unprecedented wildfire crisis, characterised by its alarming frequency, intensity, and widespread devastation. Every year, enormous tracts of grasslands, forests, and other natural landscapes are destroyed, endangering human life, biodiversity, and the fundamental stability of ecosystems. As shown in Figure 1, the most recent fire data confirm that fires are becoming more widespread and frequent, burning almost twice as much tree cover today as 20 years ago [1]. The consequences go well beyond the initial devastation and include long-term ecological damage, flash floods, soil erosion, and landslides, among other secondary effects. A combination of causes, such as land-use patterns, human activity, and climate change, can be blamed for the increase in wildfire incidence. Several factors, including extended droughts, rising temperatures, and changed precipitation patterns, have made it easier for fires to start and spread quickly. Human activities like negligent cigarette disposal, neglected campfires, and industrial operations have increased the danger of ignition as urban areas push into wildland regions. The ramifications of wildfires are extensive and diverse. Besides their immediate effects, wildfires give rise to several other risks that exacerbate the crisis's overall severity. Massive fires can change the nature of the soil, making it more prone to landslides and erosion during rainstorms. The reduction of greenery intensifies surface runoff, resulting in flash floods that endanger downstream agricultural and human populations. These cascading impacts render the development and implementation of effective techniques, supporting both the short-term suppression of fires and the long-term resilience of ecosystems and populations, a key component of the wildfire management plan [2].
Figure 1. Tree cover loss due to fires compared to other drivers of loss, 2001–2022 [1].
Satellite-based fire detection systems have gained significant acknowledgment in recent years, offering near real-time monitoring of fire outbreaks across vast territories. Specifically, Earth observation satellites have been employed to scan potential wildfire activity not only for tracking hot spot locations during the event to facilitate suppression, but also to predict high-risk regions and to monitor post-fire impacts, including mapping of the total burned areas and, subsequently, the forest's recovery over time. These satellite systems leverage advanced imaging technologies to detect heat signatures indicative of wildfires, enabling rapid response and mitigation efforts. Although numerous efforts are underway to address the wildfire crisis with Earth observation satellites, these satellites often operate in isolation, without the involvement of other tools and technologies to select and manage wildfire data. Apart from satellite constellations, other innovative approaches with growing adoption in forest fire management are Unmanned Aerial Vehicles (UAVs) and Internet of Things (IoT) devices. UAVs have gained recognition for their effectiveness in wildfire management missions. Drones can be equipped with specialised sensors, offering targeted and cost-effective alternatives to satellite-based systems for surveillance and reconnaissance in remote or inaccessible terrain. By flying at lower altitudes and closer to the target area, UAVs can provide high-resolution imagery with enhanced spatial and temporal resolution [3]. In addition, they can scan areas that are not accessible to humans, rendering the fire monitoring task safer. Several attempts have been made to use drones to detect hot spots in fires. DJI Matrice-series drones equipped with thermal imaging cameras are designed to rapidly spot hotspots and firelines.
These UAVs can fly at relatively low altitudes, capturing thermal signatures indicative of wildfire activity even in rugged terrain or dense vegetation areas where satellite imagery may be less effective [4]. Furthermore, UAV missions can be strategically planned to cover the regions of interest, providing targeted surveillance and monitoring capabilities [5]. While these individual approaches have demonstrated efficacy in certain contexts, they often lack integration and coordination, resulting in fragmented wildfire management strategies. This work proposes an autonomous and scalable swarm monitoring system for early detection and spread estimation of wildfires by leveraging low-cost UAVs, satellite data, and ground sensors. Section 2 presents related work addressing multiple technology sectors. Section 3 outlines the problem statement. Section 4 provides an overview of the proposed system, explaining its underlying architecture and subsystems. Section 5 presents the interfaces responsible for generating situational awareness information (fire detection and more) while explaining how to take advantage of them. In Section 6 the communication interfaces and architecture are presented. Lastly, Section 7 includes discussion points and conclusions.

2. Related Work

Small satellite systems offer a wide range of opportunities for new services due to their low demands in size, mass, and power. The capability of mass production significantly reduces their costs, facilitating the deployment of large small-satellite constellations or clusters. Such an architecture benefits various scientific missions, including forest fire monitoring. Additionally, satellite constellations can expand remote sensing capability by enhancing the imaging spatial resolution and increasing the revisit frequency over the area of interest. An extra advantage when addressing fire crises with minimum human intervention is the subsequent reduction in injury and death risk. Intersatellite link capability among satellites with a well-designed communication network is necessary to successfully operate a satellite constellation mission, particularly for granting high autonomy levels. Radhakrishnan et al. proposed an Open System Interconnection (OSI) model for overcoming the challenge of effectively applying an inter-satellite link communication network in a heterogeneous satellite constellation system, considering various design parameters such as antenna selection, modulation scheme, and information coding [6]. Companies such as Maxar have already deployed satellites equipped with panchromatic, VNIR, SWIR, and CAVIS bands, which also include thermal imaging, aiming to improve the segmentation and classification of land and to provide imaging and environment-monitoring capabilities. Maxar's WorldView-3 satellite demonstrates imaging resolution up to 40 cm and contributes to fire detection using innovative methods such as deep learning algorithms [7]. Other companies proposing satellite-based firefighting solutions, such as OroraTech, are taking advantage of constellations of thermal imaging satellites, allowing near-real-time detection of affected areas.
With each satellite, out of a total of 14 nanosatellites in the constellation, equipped with a thermal imager covering a swath of 400 km, the mission aims to respond during the peak burn period in the afternoon, and it benefits from a global network of ground stations for efficient data downlinking [8]. The cooperation of more than one satellite in the form of a constellation is a widely adopted method for obtaining fire data, as it significantly reduces the revisit time of the satellites over the target area and increases the volume of collected data. In addition, the study shows that minimising the detection time is crucial for fire management, since the sooner a fire onset is located, the more effective the extinguishing process is. Therefore, a short revisit time can provide timely data for dealing with fire [9]. One of the most used satellite constellations for wildfire detection today is Sentinel-2, which was developed and designed by the European Space Agency and Astrium. Sentinel-2 has operated since 2015 as part of the European GMES programme devoted to monitoring the land environment. In addition to its utility in obtaining wildfire-related data, this mission supports products such as land cover mapping, land use, change detection, and geophysical variables. The main objective of the mission is to systematically cover the Earth's surface (from −56° to +83° latitude) to produce cloud-free images every 15 to 30 days over Europe. The Sentinel-2 series comprises four satellites (Sentinel-2A to 2D) equipped with the MultiSpectral Instrument (MSI), offering a swath of 290 km, a spatial resolution of 10–20 m, and 13 optical channels operating from the visible and near-infrared to the short-wave infrared. The satellites operate at an altitude of approximately 787 km, and the ability to offer data from multiple satellites provides improved revisit time and area coverage [10].
A single-UAV approach cannot satisfy the coverage requirement, since the achievable field of view and flight time are limited. UAV swarms can reduce the impact of these restrictions when their flight paths are suitably designed for the independent monitoring of a targeted area. One of the first projects that exploited UAVs was a European one called COMETS (real-time coordination and control of multiple heterogeneous UAVs). The study used a team of heterogeneous drones for remote forest fire detection, operated by human controllers who navigated the drones to the targeted region [11]. Authors in [12] proposed a model that uses multiple UAVs to detect fire events autonomously, achieving nearly full coverage of the targeted zone. The basic concept is the deployment of a set of heterogeneously designed UAVs, where the first drone group follows a higher-altitude flight path, scanning the surface rapidly and acting as the decentralised leaders of the UAV coalition, while the second group flies closer to the ground to provide more detailed imaging when indicated by the first group. Moreover, authors in [13] studied fire detection by employing a novel analytical approach that bounds wildfire propagation uncertainty, allowing for precise real-time monitoring and tracking and reducing error residuals. The framework utilises a distributed control algorithm for UAVs, ensuring the effective monitoring of the dynamic fire structure, including firefront propagation. Nguyen et al. described a formation of UAVs strategically positioned to monitor the spread of a wildfire. These systems are designed to track the outer edges of a wildfire while ensuring complete coverage of the affected area. Moreover, they are programmed to maintain safe distances from each other to prevent any potential conflicts [14]. This solution could also utilise technologies like the Single Shot MultiBox Detector (SSD) using the MobileNets network to enhance the speed and accuracy of fire detection.
This method enables real-time fire detection on low-cost systems [15]. On the other hand, IoT provides in situ collection of data related to fire ignition and spread, such as temperature, humidity, wind speed, and moisture levels [16]. One notable example of IoT implementation for wildfire detection is the FireLine system developed by the High-Performance Wireless Research and Education Network (HPWREN) [17,18]. DRYAD provides a network of IoT sensors placed in remote areas prone to wildfires and continuously monitors these environmental conditions. The received data is transmitted wirelessly to a central command centre and analysed in real-time to detect anomalies indicative of potential fire activity. Utilising advanced algorithms and machine learning techniques, DRYAD predicts fire risk and provides early warning alerts through cloud-based platforms [19]. Moreover, BurnMonitor is a project under testing that proposes a similar IoT-based solution, utilising a network of IoT sensors for early fire detection since it provides real-time data. In addition, the BurnMonitor cloud combines weather data together with both vegetation and area data to extract a model that can predict fire expansion [20]. In the field of data processing, numerous global research efforts have been published with novel ideas for early wildfire detection. More specifically, utilising deep learning-based computer vision algorithms, one group focused on mitigating disastrous losses in human lives and environmental impacts, as demonstrated by the authors in [21], who employed UAV imagery. In a separate effort, researchers in [22] proposed a radio-acoustic sounding system with satisfactory spatial and temporal resolution that can provide early detection of forest fires through remote thermal mapping. Similarly, another study [23] presents an automatic wildfire smoke detection method using computer vision and pattern recognition techniques, enabling near real-time detection and early warning systems.
A low-cost approach utilising image processing and computational intelligence techniques was suggested by authors in [24], which effectively detects and identifies wildfire smoke from long-distance sequences, providing accurate early warning for emergencies. In a different direction, a sensor array that measures trace levels of fire-produced emissions, coupled with computation and communication equipment, can provide a low-power and low-cost solution [25]. The emission arrays are calibrated and deployed in controlled fire detection situations. Authors in [26] introduced the FLAME dataset, which consists of aerial and thermal imagery captured by UAVs for fire detection in Northern Arizona's forests, including video recordings and thermal heatmaps. The study demonstrates two applications: an artificial neural network for binary fire classification and a deep learning approach for fire mask extraction through image segmentation. The burning area is framed and labelled in both image and video data. The presented binary fire classification model reaches up to 76% accuracy, while the FLAME method can achieve up to 92% accuracy. Briley and Afghah focused on developing a real-time image classification and fire segmentation model optimised for deployment on UAVs [27]. By utilising hardware acceleration with the Jetson Nano P3450 and investigating the benefits of TensorRT, they aim to overcome UAVs' computational and energy constraints. Through the systematic exploration of activation functions, quantisation techniques, and CUDA-accelerated optimisations, they have demonstrated improvements in efficiency and accuracy for image fire segmentation scenarios. Another study [28] presented a multilevel system for early wildfire detection designed and implemented within an Edge Computing infrastructure by employing NVIDIA Jetson modules and the YOLO-v5 multiple object detector.
Additionally, [29] developed an automatic early warning system that incorporates multiple sensors along with state-of-the-art deep learning algorithms, providing good accuracy on real-time data; the system is deployed on a drone to monitor forest fires as early as possible and report them to the concerned authority. Lastly, authors in [30] use a lightweight yet powerful CNN architecture that can detect wildfires at an early stage by identifying flames and smoke at once.

3. Problem Formulation

As aforementioned, effective wildfire management requires a robust system that is scalable and fault-tolerant. It is important to use all available sensors and compile their data into valuable information, which calls for interoperability through predefined and generic message schemas, including telemetry, sensor, and status data. Additionally, it is necessary to know significant information regarding the geography, topography, terrain conditions, and weather of the targeted area. Lastly, the system should focus on detecting wildfires quickly, which requires minimising the revisit time for monitoring each area.

3.1. Early Detection of Wildfires and Revisit Time

With the main goal being to detect wildfires at the earliest possible stage, the revisit time for each monitored area must be minimised. This involves task allocation strategies where the Ground Control Station (GCS) dynamically assigns surveillance areas to drones based on area size, available sensors, and vehicle capabilities. By optimising the flight paths and schedules of drones, the system can ensure that each area is revisited frequently enough to catch new fires promptly. Using as an example the specifications of a very common drone-mounted thermal camera, such as that of the DJI Mavic 3 Thermal, the thermal camera has a diagonal FOV of 61° and a resolution of 640 × 512 pixels. The horizontal FOV, which is needed for our calculations, can be computed to be 49.4°. Knowing the horizontal FOV, we can calculate the width of terrain covered by the UAV when the camera is pitched 90°, based on the above-ground-level altitude, as seen in Equation (1).
```latexCoveredWidth=2\times altitude\times\tan(\frac{hfov\times\pi}{360})```
In addition, the DJI Mavic 3 has a flight time of approximately 40 min while sustaining a velocity of 10 m/s; using Equation (2), we can calculate the area in square kilometres it can cover on a single charge.
```latexCoveredArea=\frac{CoveredWidth\times flight_{time}\times velocity\times60}{1,000,000}```
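Equations (1) and (2) can be combined into a short numerical sketch. The camera and endurance figures for the DJI Mavic 3 Thermal are taken from the text; the 450 m flight altitude is an assumed, illustrative value.

```python
import math

# Horizontal FOV derived from the diagonal FOV (61 deg) and the sensor
# resolution (640 x 512 px) of the DJI Mavic 3 Thermal camera.
def horizontal_fov_deg(diag_fov_deg: float, res_w: int, res_h: int) -> float:
    diag_px = math.hypot(res_w, res_h)
    half = math.atan(math.tan(math.radians(diag_fov_deg / 2)) * res_w / diag_px)
    return 2 * math.degrees(half)

# Equation (1): ground swath width at nadir (camera pitched 90 deg),
# as a function of above-ground-level altitude.
def covered_width_m(altitude_m: float, hfov_deg: float) -> float:
    return 2 * altitude_m * math.tan(hfov_deg * math.pi / 360)

# Equation (2): area covered on one charge, in square kilometres.
# flight_time is in minutes and velocity in m/s; the factor of 60
# converts minutes to seconds.
def covered_area_km2(width_m: float, flight_time_min: float, velocity_mps: float) -> float:
    return width_m * flight_time_min * velocity_mps * 60 / 1_000_000

hfov = horizontal_fov_deg(61, 640, 512)   # ~49.4 deg, as in the text
width = covered_width_m(450, hfov)        # assumed 450 m altitude
area = covered_area_km2(width, 40, 10)    # 40 min endurance at 10 m/s
print(f"HFOV {hfov:.1f} deg, swath {width:.0f} m, area {area:.1f} km^2")
```

With these values the sketch yields roughly 10 km² per charge, consistent with the figure quoted for the Mavic 3 Thermal below.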
From the above parametric analysis and our requirement of detecting a fire with a radius of at least 1 metre, we require at least 1 pixel per metre with a provision of 50%, which results in 1.5 pixels per metre. For the Mavic 3 Thermal drone, the equation results in 10 square kilometres covered per charge, while for a higher-endurance fixed-wing UAV, such as the UCANDRONE PHOREAS, which can fly for 4 h at 15 m/s, the result is 60 square kilometres. As the UAVs can act as on-demand dynamic sensors, the system must consider the coverage capability and the data generated by the IoT devices and the LEO satellites to minimise revisit time.

3.2. Interoperability across Heterogeneous Agents

Ensuring seamless data exchange between several agents, such as drones, IoT thermal cameras, satellites, and the GCS, is a significant challenge due to the diversity of communication protocols and message formats. The system must employ generic message schemas and standardisation techniques to achieve interoperability. One way to solve this is by predefining JSON schemas and HTTP messages to enable consistent and efficient communication across different components. These standardised formats ensure that telemetry, sensor data, and status updates can be shared accurately and promptly, regardless of the originating device. In addition, the GCS must implement a middleware solution that translates between the various protocols and formats in real-time, enhancing the system's ability to integrate heterogeneous agents seamlessly and securely.

3.3. Scalability and Robustness

The system must handle an increasing number of devices and large volumes of data without degradation in performance. Telemetry and real-time video feeds pose particular challenges; high-resolution images and videos generate substantial data that can strain communication networks.
To mitigate this issue, video data can be compressed using advanced algorithms that maintain critical information while reducing file sizes. Additionally, the use of computer vision techniques allows the system to analyse images locally on the drones, extracting and sending only essential information back to the GCS. This approach not only saves bandwidth but also accelerates the data processing pipeline, ensuring that the system remains responsive and efficient even as the number of monitored areas and devices scales up. The system is tasked with reducing bandwidth usage as much as possible by minimising the streaming of video feeds through computer vision methods and dynamically handling refresh rates from the telemetry data streams.
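As a minimal illustration of the interoperability goal in Section 3.2, a telemetry message shared between agents and the GCS might be validated against a predefined JSON schema. The field names below are hypothetical assumptions for illustration, not the system's actual schema.

```python
import json

# Hypothetical required fields of a cross-agent telemetry message.
TELEMETRY_REQUIRED = {"agent_id", "agent_type", "timestamp", "position", "status"}

def validate_telemetry(raw: str) -> dict:
    """Parse a telemetry message and check that all required fields exist."""
    msg = json.loads(raw)
    missing = TELEMETRY_REQUIRED - msg.keys()
    if missing:
        raise ValueError(f"telemetry message missing fields: {sorted(missing)}")
    return msg

# Example message from a quadcopter UAV (illustrative values).
example = json.dumps({
    "agent_id": "uav-07",
    "agent_type": "quadcopter",   # or "fixed_wing", "iot_camera", "satellite"
    "timestamp": "2024-06-05T12:00:00Z",
    "position": {"lat": 37.98, "lon": 23.73, "alt_m": 450},
    "status": {"battery_pct": 81, "sensor_ok": True},
})

msg = validate_telemetry(example)
print(msg["agent_id"])
```

Because every agent type emits the same top-level fields, the GCS middleware can route and fuse messages without device-specific parsing.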

4. System Overview

4.1. Overall Architecture

With the overarching aim of early wildfire detection and accurate spread estimation, the imperative of minimising revisit time and promptly issuing warnings upon fire detection cannot be overstated. Achieving this goal necessitates the implementation of a real-time and continuous data exchange mechanism among all sensors, ensuring an affordable and scalable solution. Furthermore, to bolster the comprehensive monitoring of expansive regions prone to wildfires, the integration of weather and topographic data is paramount. By harnessing such information, areas at heightened risk can be pinpointed with precision, allowing for proactive measures to be taken. During a wildfire event, the system can take a central role in providing necessary support to firefighting personnel. By accurately locating the fire and continuously assessing its direction of spread, the system can contribute significantly to the organisation of firefighting efforts with enhanced efficiency. In addition, providing images from the operational area in real-time gives the relevant authorities situational awareness, facilitating decision-making. In particular, the proposed system exhibits remarkable adaptability, dynamically covering extensive areas spanning hundreds of square kilometres through intelligent task allocation. This combination of advanced technology and strategic development underscores its effectiveness as a front-line tool in wildfire management and mitigation efforts. The overall system comprises multiple subsystems, as seen in Figure 2. The Ground Control Station (GCS) serves as the interface provided to the end-users, gathering information from all the domains, performing data fusion, and presenting the compiled final result, allowing easier mission creation, management, and monitoring.
In particular, the GCS interfaces with various stakeholders, including local fire departments, thereby enhancing collaboration and information sharing. The NB-IoT modules are central to the system's operational efficacy, manifested in the form of long-range Pan-Tilt-Zoom (PTZ) thermal cameras. These advanced modules, deployed strategically, preserve seamless communication with the GCS via robust cellular networks. Through their integration into the system, real-time surveillance capabilities are extended, allowing for swift detection and localisation of wildfire incidents. Moreover, the GCS's interoperability with third-party entities enhances the system's versatility and adaptability to operational requirements. The Space segment houses the constellation of Low Earth Orbit (LEO) satellites, which are well suited for fire detection as they operate close to the Earth's surface, thus facilitating rapid data acquisition with minimal latency. With short revisit times, these satellites ensure frequent coverage of fire-prone areas, enabling early detection of and response to fire outbreaks or changes in behaviour. Additionally, equipped with advanced sensors such as thermal infrared imagers and hyperspectral cameras, LEO satellites detect fire-associated thermal signatures and provide information on vegetation health and soil characteristics.
Figure 2. Overview of the early wildfire detection system.
Functioning in tandem with the LEO satellite constellation, drone swarms emerge as indispensable assets in the quest for situational awareness, monitoring, and rapid response. When operating as a constellation, LEO satellites can provide a high revisit frequency while, at the same time, maintaining good resolution performance. The considered sensors for the proposed solution include an infrared and hyperspectral sensor payload that will acquire high-quality data in each pass, containing mapping information in the most critical bands for fire detection. The above results are enhanced by data collected from drones. Employed as on-demand reconnaissance platforms, drones traverse designated areas, swiftly identifying and assessing wildfire occurrences. Leveraging the high-speed communication infrastructure facilitated by LEO satellites, these drones relay critical data back to the GCS in near real-time, enabling prompt decision-making and effective resource allocation. Thus, through the synergistic collaboration of ground-based and satellite-enabled assets, the system attains unparalleled wildfire detection and management capabilities, ensuring swift and targeted response efforts. Finally, Model-Based Systems Engineering employs visual modelling throughout the system development life cycle to enhance system design, analysis, and documentation. Because this concept architecture comprises multiple subsystems, leveraging MBSE [31] is crucial, since it offers a methodical approach for gathering, arranging, and handling intricate system data. It entails building models that depict several facets of the system, such as its interactions, architecture, requirements, and behaviour. An example showcase of the system is given in Figure 3. The constellation of LEO observation satellites provides precise fire detection and spread estimation through hyperspectral imaging, infrared and near-infrared sensors, geospatial information, and weather conditions.
Moreover, an array of ground sensors, such as fixed towers equipped with infrared cameras and IoT sensors strategically placed in areas with a high probability of wildfire, will work in tandem with the space domain as well as the air domain to generate an accurate and comprehensive fire surveillance flow of information. The air domain, equipped with low-cost COTS UAVs that leverage infrared and RGB E/O sensors, will act as the primary situational awareness generator on demand. AI task allocation algorithms and swarming-oriented area coverage methods will be executed through an end-user-centric, event-based communication architecture. High-level mission planning takes place in the GCS, where information from all sensors is gathered and compiled into a user-understandable schema. In addition, the GCS issues warnings for events such as fire detection and hardware failures, and provides a live video feed and lower-level control of the swarm and IoT sensors when requested. By spanning multiple domains and performing intelligent sensor fusion, the system aims to offer unparalleled reaction times to wildfires while being resilient and reconfigurable if any hardware failures arise, by incorporating state-of-the-art swarming capabilities.
Figure 3. Example showcase of the proposed system for early wildfire detection.
4.2. Space Domain

Imaging by satellites in Low Earth Orbit allows a higher spatial resolution than satellites at higher altitudes, such as MEO and GEO. This capability can expand the type of collected data, since such satellites can be equipped with various available imagers and sensors. The increase in spatial resolution provides more detailed monitoring, which is crucial for detecting the hotspot onset within minutes, before it spreads out and forms a crown fire. However, Earth observation performed with a single satellite demonstrates limitations in coverage due to the restricted field of view of the sensor, making wildfire hotspot monitoring over a wide area of interest challenging. In addition, LEO satellites cannot provide the constant monitoring that GEO offers, since they typically complete an orbit around the Earth in approximately 90 to 120 min. In the context of wildfire management, where timely information is of the utmost importance, the effectiveness and efficiency of such a system is essential. To address these limitations and provide near real-time monitoring of large expanses of land, the proposed solution is deploying a LEO satellite constellation. By orbiting closer to Earth's surface, each satellite can revisit specific areas multiple times within a short timeframe, while the constellation system with ISLs enables the monitoring of multiple targeted areas, the timely detection of fire outbreaks or changes in wildfire behaviour, and timely alerting. The combination of real-time data acquisition and rapid revisit capability is critical to the early detection of and early response to wildfire incidents, empowering fire services and law enforcement authorities to mitigate the spread of fires and protect lives and property, particularly in dynamic wildfire environments where conditions can evolve rapidly, requiring flexible and adaptive response strategies.
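The 90 to 120 min orbital periods quoted above follow directly from Kepler's third law for circular orbits; a quick check, using the approximately 787 km Sentinel-2 altitude mentioned in Section 2:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

def orbital_period_min(altitude_m: float) -> float:
    """Circular-orbit period T = 2*pi*sqrt(a^3/mu), returned in minutes."""
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# ~787 km altitude gives a period of roughly 100 minutes.
print(f"{orbital_period_min(787e3):.1f} min")
```

The result, about 100 min, sits squarely in the 90 to 120 min range stated for LEO satellites.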
Moreover, the integration of advanced sensors onboard LEO satellites enhances the efficiency and accuracy of wildfire detection and monitoring. These sensors, ranging from thermal infrared imagers to multispectral cameras, enable the detection of heat signatures associated with wildfires and provide valuable contextual information about vegetation health and land characteristics. By leveraging the synergy between satellite-based sensors and data processing algorithms, the system can detect, analyse, and offer critical information about wildfire activity in near real-time. Satellite constellations in low Earth orbit can collectively provide coverage of multiple areas as a system, thereby facilitating the simultaneous monitoring of extensive areas that are prone to fire or are affected by fire. Wide coverage allows satellites to quickly detect and monitor fire incidents in a variety of landscapes, including remote and inaccessible areas. Additionally, the proximity of LEO satellites to Earth's surface ensures high spatial resolution, resulting in the finer Ground Sampling Distance (GSD) essential for wildfire detection and monitoring. The detailed satellite imagery enables the identification of smaller fire hotspots, enabling firefighters to allocate resources effectively and prioritise suppression efforts. Moreover, the finer GSD facilitates the assessment of vegetation health and fire behaviour, aiding in the development of proactive wildfire management strategies and post-fire rehabilitation efforts. The use of hyperspectral satellite sensors for wildfire detection adds capabilities for wildfire image data processing, using RGB, VNIR, SWIR, or fused-band images, but also through the extraction of specific indices that are appropriate for wildfire detection. The HDFI (Hyperspectral Detection Fire Index) uses two specific SWIR bands of the hyperspectral spectrum, at 2060 nm and 2430 nm: HDFI = $$\frac{L_{2430}-L_{2060}}{L_{2430}+L_{2060}}$$.
HDFI ranges from −1 to 1, and higher positive values indicate a possible wildfire. The CO2-CIBR (Carbon Dioxide Continuum-Interpolated Band Ratio) uses three bands, at 1990 nm, 2010 nm and 2040 nm: CO2-CIBR = $$\frac{L_{2010}}{0.666L_{1990}+0.334L_{2040}}$$; higher CO2-CIBR values indicate a high concentration of CO2, possibly caused by wildfire smoke. Another significant hyperspectral fire detection index is the NBR (Normalised Burn Ratio), which uses the 1080 nm and 2020 nm bands: NBR = $$\frac{L_{1080}-L_{2020}}{L_{1080}+L_{2020}}$$; negative values indicate possible burned areas. Finally, saturated pixels in the SWIR band at 2490 nm indicate active fire pixels, and high values in the VNIR band at 411 nm indicate smoke areas [32]. Intersatellite Links (ISLs) between satellites play an important role in enhancing the effectiveness of wildfire monitoring by enabling communication and data exchange between satellites within the constellation. ISL technology facilitates the rapid transmission of high-resolution images and real-time data, enabling the early detection and monitoring of fire incidents. This communication network enables cooperative efforts between satellites to provide comprehensive coverage and reliable information to agencies and individuals, enabling them to respond quickly and efficiently to emergencies. ISLs contribute significantly to the near real-time fire detection requirement, since cumulative operation reduces the impact of the camera FOV limitation. A satellite that monitors the area of interest but cannot establish a communication link with the ground station can, in the event of wildfire detection, send the information to nearby satellites of the same constellation and subsequently to the one expected to pass over the ground station soonest.
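The three indices above map directly onto per-band radiance values. A minimal sketch of their computation (plain functions; they apply equally elementwise to NumPy radiance arrays to produce index maps):

```python
def hdfi(l2060, l2430):
    """Hyperspectral Detection Fire Index from the SWIR radiances at
    2060 nm and 2430 nm; values near +1 suggest an active fire."""
    return (l2430 - l2060) / (l2430 + l2060)

def co2_cibr(l1990, l2010, l2040):
    """Carbon Dioxide Continuum-Interpolated Band Ratio; high values
    indicate elevated CO2, e.g. from wildfire smoke."""
    return l2010 / (0.666 * l1990 + 0.334 * l2040)

def nbr(l1080, l2020):
    """Normalised Burn Ratio; negative values indicate possible burned areas."""
    return (l1080 - l2020) / (l1080 + l2020)
```

The radiance arguments are assumed to be calibrated at-sensor radiances for the stated wavelengths; thresholding the resulting index maps then yields candidate fire, smoke, and burned-area pixels.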
As a result, the system can focus on the area of interest and scan for potential fire events for longer, while at the same time the ground segment receives the data through a neighbouring satellite [33].

4.3. Air Domain

In contrast to the satellite constellation, in the air segment the UAVs act as mobile sensors that can dynamically reposition themselves based on gathered information. Thanks to their adaptability, agility, and capacity to reach areas that are difficult for ground-based monitoring systems, UAVs outfitted with sophisticated thermal cameras function as flexible, on-demand sensors that can quickly search large regions for heat anomalies that might signal impending fires. UAVs can be quickly deployed to potential hotspot locations, in contrast to conventional surveillance techniques, giving wildfire management teams real-time situational awareness. Their capacity to cover large areas efficiently and to adapt to shifting climatic factors allows for proactive monitoring and early wildfire identification, which in turn allows for timely and focused firefighting operations. These airborne assets can minimise reaction times and maximise coverage by dynamically responding to incoming commands or by intelligently allocating particular regions to each UAV based on predefined priorities. By ensuring thorough observation of high-risk regions, this integrated strategy improves the efficacy of early wildfire detection operations. Furthermore, the versatility and usefulness of UAVs in wildfire control are further enhanced by their capacity to modify flight paths and sensor configurations in real time in response to changing fire behaviour and environmental circumstances, providing firefighters with updated information at all times. Fixed-wing and quadcopter UAVs have different benefits and drawbacks stemming from their endurance and agility characteristics.
Quadcopters are very agile and suitable for duties involving precision manoeuvring and close-quarters inspections because of their ability to take off and land vertically. They can conduct focused surveillance in difficult conditions and perform in-depth aerial surveys thanks to their capacity to hover and manoeuvre in confined spaces. In contrast to fixed-wing aircraft, quadcopters usually have shorter flight durations because of their reliance on battery power, which limits their endurance. On the other hand, fixed-wing UAVs have a longer range and greater endurance, which makes them well suited for long-duration missions and for covering large regions. Longer flight durations are made possible by their efficient aerodynamic design, which permits thorough aerial surveys and surveillance over large areas. Nonetheless, fixed-wing UAVs are generally less agile than quadcopters and require more space for take-off and landing, limiting their suitability for tasks requiring precise manoeuvrability in confined areas. For these reasons, a combination of both fixed-wing and quadcopter UAVs is recommended to aid in the early detection of wildfires. Commercial Off-The-Shelf solutions offer great capabilities. Popular quadcopter UAVs equipped with thermal cameras are able to scan areas at 10 m/s or above, depending on wind conditions, for 40 min at a time, allowing them to cover roughly 10 square kilometres per flight. Fixed-wing UAVs can extend this endurance to upwards of 4 h or more while also cruising faster, at approximately 15 m/s, which allows them to cover about 60 square kilometres. In the proposed system, these UAVs must be able to follow issued trajectories autonomously, with human supervision required only for safety.

4.4. Land Domain

In the context of direct and targeted wildfire detection and monitoring, NB-IoT modules with integrated thermal and RGB sensors are placed in the land segment.
IoT sensors in the form of PTZ (Pan-Tilt-Zoom) thermal cameras positioned across key points enable comprehensive monitoring of terrain for potential wildfire detection. Placed strategically, these cameras provide expansive coverage, allowing for swift identification of ignition points or early signs of fire spread across vast landscapes. This proactive approach not only aids rapid response efforts but also facilitates timely intervention, potentially mitigating the extent of damage and safeguarding both lives and valuable ecosystems from the devastating impact of wildfires. Through cellular communication links, these cameras can interact with the GCS and offer area coverage of up to 110 square kilometres while also being inexpensive to maintain. While RGB (Red, Green, Blue) sensors excel in capturing visible-light imagery, offering high-resolution visual data, thermal sensors augment this capability by detecting infrared radiation emitted by heat sources, thereby enabling the identification of fires even under adverse conditions such as dense smoke or foliage cover. More specifically, RGB sensors provide essential information, offering detailed visual data of the surrounding ground and vegetation. This visual data is instrumental in confirming thermal images, helping to locate and characterise fire incidents accurately. In addition, RGB sensors contribute to pre- and post-fire assessment efforts, facilitating environmental impact assessment and guiding recovery initiatives. In contrast, thermal sensors play a central role in wildfire detection due to their ability to detect thermal signatures from fires regardless of the prevailing environmental conditions. This is especially important during night operations or under smoke, when visual imagery may be too dark to be useful. In addition, thermal sensors allow temperature fluctuations to be monitored, facilitating the early detection of incipient fire spots that may escape visual detection.
Moreover, the fusion of thermal and RGB sensor data enhances the overall efficacy of wildfire detection systems, enabling comprehensive situational awareness across a spectrum of environmental conditions. Thus, the integration of thermal and RGB sensors in the NB-IoT modules represents a critical advancement in enhancing the resilience and efficiency of wildfire detection and monitoring efforts, ensuring proactive mitigation of wildfire risks and safeguarding vulnerable ecosystems and communities. Conventional control systems, which are designed for a single platform, soon hit their limits when several sensor systems and platforms are employed simultaneously in a complex situation. First, because each system has a unique interface, each subsystem requires a separate console and a professionally trained operator. Second, combining and organising sensor and status data from several systems in a coordinated manner is a difficult operation. Because of their complexity, data transfer delays, and distance (sometimes several kilometres) to the action, controlling individual sensors and platforms from a single C2 interface is not feasible. Autonomous on-scene processing of the reconnaissance missions is required to alleviate operator load. This entails directing sensor platforms, managing sensors, and filtering and condensing sensor data so that the end-user receives only information pertinent to making decisions, such as situation reports and alerts. When autonomy is involved, it is necessary to ensure safe teaming between all autonomous sensors and robust contingency plans through human supervision. A GCS that is able to handle information from multiple heterogeneous sensors [34] while also being end-user centric [35] by employing Multimodal Interface Technologies [36] will offer clear benefits to our proposed architecture.
Lastly, deep learning approaches that can assist in decision-making or in clustering assets to enhance scalability and system robustness are crucial, as seen in [37]. The Ground Control Station is a centralised hub equipped with all the data links required to communicate with the UAVs, LEO satellites, IoT devices and more. It also employs sophisticated software for mission planning and monitoring. AI task allocation algorithms and swarming-oriented area coverage methods are processed and integrated into the UI in an end-user centric manner. High-level mission planning takes place in the GCS, where information from all sensors is gathered and compiled into a user-understandable schema. In addition, the GCS issues warnings for events such as the detection of fire and hardware failures, and provides live video feed and lower-level control of the swarm and IoT sensors when requested. The mission planning software of the GCS divides the area of interest into subregions for single-agent missions based on each drone’s characteristics (battery capacity, sensor accuracy, speed, wind resilience) for heterogeneous swarms, while considering the coverage of the NB-IoT elements. The mission planning software generates trajectories that solve the coverage path planning problem. A surface elevation map is used for the 3D trajectory planning to maintain a constant flight level above the terrain. This ensures both the drones’ safety and the desired detection accuracy. The mission planning phase concludes with mission approval through a human-in-the-loop process. During the mission, the end-user maintains human-on-the-loop oversight and retains the option to assume control of a drone at any given moment. During the mission, the collected data are merged at the GCS to build a map of the fire, which is updated as new data are received.
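The subregion assignment described above can be sketched as a proportional split, where each drone receives area in proportion to a capability score. The score used here (cruise speed times endurance) is an illustrative assumption, not the paper’s exact allocation rule, which also weighs sensor accuracy, wind resilience, and NB-IoT coverage:

```python
def partition_area(total_km2, drones):
    """Split the survey area among heterogeneous drones in proportion
    to a capability score. The score (speed x endurance) is a stand-in
    for the richer drone characteristics used by the GCS planner."""
    scores = {d["id"]: d["speed_m_s"] * d["endurance_min"] for d in drones}
    total = sum(scores.values())
    return {uav_id: total_km2 * s / total for uav_id, s in scores.items()}

# A fleet using the quadcopter / fixed-wing figures quoted in the text.
fleet = [
    {"id": "quad-1", "speed_m_s": 10, "endurance_min": 40},
    {"id": "fw-1", "speed_m_s": 15, "endurance_min": 240},
]
```

With this fleet, `partition_area(40.0, fleet)` hands the fixed-wing drone the bulk of the area, mirroring its much larger per-flight coverage.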
Integrating data from the available sources described above, such as ground-based Internet of Things (IoT) devices, unmanned aerial vehicles (UAVs), and constellations of low-orbiting satellites equipped with RGB, hyperspectral, and infrared cameras, will greatly increase the speed of fire detection and analysis. This integration of systems will enable the creation of comprehensive datasets that can be processed and merged to provide invaluable information on fire behaviour, spread, and impact assessment. This capability is necessary and decisive for the management of large fires, given the increased frequency and severity of these phenomena worldwide. The data arriving at the Ground Control Station will be collected by three different systems, starting from the ground and reaching up to space. Initially, IoT devices deployed in fire-prone areas will collect real-time environmental data such as temperature, humidity, wind speed, and air quality, as well as images of the surrounding area using hyperspectral and RGB cameras, providing critical elements for fire risk assessment. In addition, unmanned aerial vehicles (UAVs) equipped with RGB, hyperspectral, and thermal cameras will provide targeted high-resolution images with spatial and temporal flexibility, allowing rapid response and detailed monitoring of fire incidents and their outcomes. The satellite constellation, equipped with hyperspectral, infrared, and near-infrared sensors, will provide data covering a wide area with frequent revisits, thus enabling continuous monitoring of large-scale fire events and aiding early detection and assessment.
The combination of the data from all the systems and the sensors with which they are equipped will allow the extraction of additional information related to the characteristics of the fire and its outcome, such as the intensity of heat, the colour of the flame, and the composition of smoke, but also information related to the surrounding area, such as vegetation health, soil moisture and wind speed, in real time. Additionally, these data can be used to compute the Normalised Difference Vegetation Index (NDVI), whose calculation requires multispectral data and which helps quantify vegetation health and detect vegetation changes before and after wildfires [38]. It is a quantity that helps to assess the ecological impact in an area affected by a fire, to assess and methodically plan its recovery, and to identify the areas at increased risk of a new fire outbreak. This abundance of data, combined with machine learning and deep learning algorithms, enables training on fused multispectral datasets to classify burned areas, unburned vegetation, and other land cover types, supporting efficient mapping and analysis of areas affected by fire and the successful prediction of future forest fire phenomena in the study areas. All the data collected, as well as the products of their processing, will lead to the creation and execution of new services, such as early warning systems. Fusing real-time data from IoT, UAV, and satellite sensors can power early warning systems, allowing authorities to detect and respond to fire incidents immediately, minimising damage and casualties. Additionally, modelling wildfire behaviour enables fire departments to predict fire spread patterns and design effective containment strategies.
Continuous monitoring and analysis of multispectral data in conjunction with NDVI indices will enable accurate assessment of the environmental impacts of wildfires, including soil erosion, water quality degradation, and habitat loss. These assessments will be able to contribute both to the restoration of the environment and to preventive measures for future wildfires.
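The NDVI calculation referenced above follows the standard definition from near-infrared and red reflectances; a minimal sketch, with the pre/post-fire difference used as a simple burned-area indicator:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation gives high positive values; burned or bare ground
    gives values near zero or negative."""
    return (nir - red) / (nir + red)

def delta_ndvi(pre_fire, post_fire):
    """Drop in NDVI between pre- and post-fire acquisitions; large
    positive values flag likely burned areas."""
    return pre_fire - post_fire
```

Both functions also apply elementwise to NumPy reflectance arrays, producing the NDVI and change maps used for impact assessment and recovery planning.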

5. Perception

In the challenge of managing a wildfire, the most critical task is early detection. Studies have shown that if a wildfire event is spotted in the first crucial minutes, the phenomenon can be addressed successfully, reducing the devastating impacts on human life and the ecosystem. Early detection allows firefighting teams to respond swiftly, deploying resources to contain the fire before it spreads uncontrollably. Additionally, advancements in sensor technologies, coupled with machine learning techniques, have significantly improved the efficiency of early wildfire detection. These sensors can accurately identify the presence of a fire event, enabling timely intervention to mitigate the fire’s destructive potential. To address this issue effectively, it is vital to consider the unique characteristics of wildfire events and adopt a solution that allows their extraction, contributing to fast wildfire detection and alerting. As emphasised in the description of the holistic solution’s architecture for predicting and managing forest wildfires, the main focus is on early detection of events, since it is a key factor in reversing out-of-control fires. All the sensor systems considered, networks of satellites, and UAVs, along with machine learning, simulation, and modelling, when coordinated, can be a game-changing approach to combating wildfires. In this section, a thorough and detailed description of the sensors that are part of the proposed solution and integral to the study, monitoring, and management of forest fires will be carried out.

5.1. RGB

Electro-optical imaging sensors are the most widespread and affordable sensors, which makes them a great candidate for early wildfire detection. In the context of this system, EO sensors in the form of high-resolution, high-refresh-rate RGB cameras will be one of the primary sources of information attached to UAVs and IoT devices.
The widespread availability and affordability of RGB cameras make them a feasible option for wildfire monitoring across diverse landscapes. Due to their ability to capture visual information across the red, green, and blue spectra, RGB monitoring offers several advantages for wildfire detection. Unlike single-sensor systems, these sensors also allow for long-range detection at a low cost, as they provide high-resolution images with rich detail, allowing for better identification of potential fire sources, smoke plumes, or early signs of combustion. In addition, their high-resolution image stream, combined with a high FOV, allows them to cover large areas at once. Emitted smoke is the very first visible sign of fire onset. Smoke plumes and flames demonstrate distinct colour signatures: smoke typically appears grey or black against natural backgrounds, while flames emit specific hues of orange and red. By leveraging machine learning algorithms, RGB cameras can be trained to recognise these characteristic colours, allowing for the rapid and accurate identification of wildfire events. Additionally, RGB monitoring can capture subtle changes in vegetation colouration, indicating areas of heightened fire risk or recent burn activity. This advantage of RGB imagery enhances forest monitoring and supports early warning of potential ignition points. Furthermore, conventional RGB cameras can contribute to fire detection by detecting the light emitted by flames. Sudden increases in brightness within a specific area may be attributed to the generation of flames due to the onset or expansion of a wildfire [39]. Through the integration of AI algorithms, particularly multiple-object-detection CNNs, these cameras can analyse vast image datasets in real time, swiftly identifying anomalies such as smoke plumes or alterations in vegetation patterns, as well as fire patterns.
To do so, these cameras are usually paired with onboard computers able to run inference on specialised hardware at low latency. Many models have been tested and proven efficient throughout the literature, the most prominent being the YOLO multiple object detector [40]. YOLO (You Only Look Once) is a state-of-the-art object detection algorithm renowned for its speed and accuracy in identifying objects within images or video frames. YOLO operates by dividing the image into a grid and predicting each grid cell’s bounding boxes and class probabilities. With its real-time processing capabilities, YOLO can efficiently analyse video feeds or image streams, making it well-suited for applications such as wildfire detection. By training the algorithm on a diverse dataset that includes images of wildfires, smoke, and relevant environmental features, YOLO can learn to recognize the distinct characteristics of wildfires, as seen in Figure 4. These may include features like smoke, plumes, flames, or changes in vegetation colour indicative of fire risk.
Figure 4. Fire detection using RGB camera and AI methods.
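The grid-based prediction that YOLO performs can be illustrated with a toy decoder. The (x, y, w, h, confidence)-per-cell tensor layout below is a deliberately simplified assumption for illustration (one box per cell, no class scores) and does not match the exact output format of any specific YOLO release:

```python
import numpy as np

def decode_grid(preds, img_size, conf_thresh=0.5):
    """Decode a simplified YOLO-style prediction tensor of shape (S, S, 5)
    holding (x, y, w, h, confidence) per grid cell. x and y are the box
    centre relative to the cell; w and h are relative to the image."""
    S = preds.shape[0]
    cell = img_size / S
    boxes = []
    for row in range(S):
        for col in range(S):
            x, y, w, h, conf = preds[row, col]
            if conf < conf_thresh:
                continue  # suppress low-confidence cells
            cx = (col + x) * cell   # box centre in pixels
            cy = (row + y) * cell
            boxes.append((cx, cy, w * img_size, h * img_size, conf))
    return boxes
```

A real detector adds class probabilities (e.g. smoke vs. flame) per box and non-maximum suppression across overlapping detections; the single pass over the grid is what gives YOLO its speed relative to patch-wise CNN classification.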
5.2. Infrared

Infrared sensors are one of the most important instruments for wildfire detection, as they are able to detect and image the thermal signatures emitted by fires, even in difficult conditions such as areas covered by dense smoke, with partial obstruction, or even during the night. Their operation is based on capturing infrared radiation emitted by objects on the Earth’s surface, including flames and hotspots, which is then converted into temperature data. This complementary function of the IR sensors with the RGB sensors described above can significantly improve detection accuracy and reduce the computational cost within the system. Such sensors are often found on satellites; for example, the abovementioned Sentinel-2 mission operates in the IR and can detect infrared radiation emitted by objects, including flames and hotspots, and convert it into temperature data. Additionally, being integrated into the same gimbal/pan-tilt mechanism makes their integration into UAVs and IoT devices seamless. Despite being comparatively more expensive and offering lower resolution than RGB cameras, the strategic combination of infrared sensors with zoom capabilities extends their application to long-range surveillance, providing accurate temperature signature information at the per-pixel level. Typically, the most common and cost-effective infrared sensors come at a low resolution of 640 by 512 pixels and a diagonal FOV of 61°. As seen in Figure 5, infrared sensors can detect heat signatures that are not visible to the naked eye or, in this case, to the RGB camera, underscoring their necessity in comprehensive fire monitoring strategies. Furthermore, while alternative methods such as neural network training may offer some ability to detect smoke and/or burnt ground, infrared sensors maintain superiority in accuracy and simplicity of integration, as there is no need for specialised onboard hardware to run inference on the video stream.
In addition, infrared sensors provide native temperature data per pixel, facilitating and simplifying the data collection process and autonomous decision-making processes. Additionally, by leveraging the complementary advantages of IR sensors and RGB cameras through data fusion techniques, the potential to achieve extended coverage of large areas with exceptional accuracy is evident. This synergy between sensors not only enhances the effectiveness of fire detection systems but also enables comprehensive situational awareness and data to design successful early response strategies in fire management efforts.
Figure 5. Heat detection using infrared camera. Source: “Spotting Brush Fire Hot Spots with a FLIR Thermal Drone” by Teledyne FLIR.
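Because IR sensors deliver native per-pixel temperature data, a hotspot alert can be as simple as thresholding a frame. A minimal sketch, where the 150 °C threshold and the minimum pixel count are illustrative tuning parameters rather than values from the text:

```python
import numpy as np

def hotspot_alert(temp_c, threshold_c=150.0, min_pixels=4):
    """Flag a potential fire when enough pixels in a per-pixel temperature
    frame (degrees Celsius) exceed a threshold. Returns the alert flag and
    the boolean hotspot mask for downstream geolocation."""
    mask = temp_c > threshold_c
    return bool(mask.sum() >= min_pixels), mask
```

Requiring several hot pixels rather than one suppresses single-pixel sensor noise; a fielded system would add spatial clustering and persistence checks across frames before raising an alert at the GCS.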
5.3. Hyperspectral

Hyperspectral imaging offers distinct advantages for fire detection due to its ability to capture detailed spectral information across a wide range of wavelengths. Unlike traditional RGB imaging, multispectral and hyperspectral data provide a more comprehensive view of the electromagnetic spectrum, allowing for enhanced discrimination between various materials and surface features. In the context of wildfire detection, and as seen in Figure 6, hyperspectral sensors can detect changes in vegetation health, temperature gradients, and smoke plumes with high precision. By analysing specific spectral signatures associated with burning vegetation and combustion products, hyperspectral imagery enables more accurate and reliable detection of fire events, even in challenging environmental conditions [41,42]. Furthermore, it facilitates the identification of pre-fire conditions and early-stage fire outbreaks through spectral indices and anomaly analysis. By monitoring changes in vegetation health, moisture content, and thermal patterns over time, hyperspectral sensors can provide valuable insights into areas at heightened risk of wildfire ignition and propagation [43].
Figure 6. (<b>a</b>) Classification map and (<b>b</b>) wildfire fuel map from Hyperspectral data [41].
The proposed solution for achieving near real-time fire detection suggests satellites equipped with hyperspectral and IR imaging payloads to collect data focusing on the NIR and infrared spectrum. The contribution of both imagers is significant, since different types of sensors exhibit different strengths when performing monitoring. Infrared imaging provides the ability to detect surface temperature anomalies even during the night, while a hyperspectral sensor can produce maps across a wide range of spectral bands, used to extract information for crucial remote sensing indices employed in fire prediction and detection. Onboard active fire detection algorithms can exploit the imaged data produced by both detectors after they are suitably preprocessed (e.g., spatiotemporal correlation and resampling) and use this combined information to provide reliable near real-time fire detection alerts both during the day and at night, enabling more effective wildfire management. For the detection and localisation of fire and related phenomena such as smoke, deep learning algorithms are widely used with satellite and aerial images. The most widely used algorithms for aerial and satellite object detection are conventional CNNs (Convolutional Neural Networks) and the newer YOLO (You Only Look Once) algorithm. The main difference between these two algorithms is that a conventional CNN is trained only for image classification. Therefore, to detect and localise the object in the image, the model has to divide the image into small patches and then apply the CNN separately to each patch, which is very time-consuming for onboard applications. In contrast, the YOLO algorithm is trained to perform image classification and to determine the frame that encloses the specific object simultaneously.
As a result, in the YOLO training phase, each image is accompanied not only by the specified object class label but also by a separate annotation file describing the frame around the object. More specifically, the bounding box data include the object midpoint x-axis image coordinate, the object midpoint y-axis coordinate, the bounding box width, and the bounding box height [44]. Up to now, many YOLO versions have been developed in order to achieve the best real-time object detection efficiency, with about 16 versions and variants that differ in their architecture, training, and datasets [45]. YOLO-V5 algorithms have been used for real-time fire detection from aerial vehicles with relatively good timing and detection efficiency [46]. Hence, the basic idea is to use the YOLO algorithm in real time with satellite hyperspectral sensors for fire detection.
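The annotation layout described above (class label plus midpoint coordinates, width, and height) is the standard YOLO text format, in which the geometry is normalised to the image size. A minimal parser:

```python
def parse_yolo_label(line):
    """Parse one line of a YOLO annotation file:
    '<class_id> <x_center> <y_center> <width> <height>'
    where the four geometry fields are normalised to [0, 1]
    relative to image width and height."""
    parts = line.split()
    return {
        "class_id": int(parts[0]),
        "x_center": float(parts[1]),
        "y_center": float(parts[2]),
        "width": float(parts[3]),
        "height": float(parts[4]),
    }
```

Multiplying the normalised fields by the image dimensions recovers pixel coordinates, so the same label files serve images of any resolution, which is convenient when mixing UAV and satellite training data.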

6. Communications

A robust communication schema is envisioned and designed to facilitate seamless data exchange across all subsystems, as presented in Figure 7. The core of the system consists of a Ground Control Station, which is a centralised hub serving as the focal point for communication, managing the flow of information between operators and the rest of the system. In conjunction with the GCS, the system employs ad-hoc communication protocols to establish resilient connectivity between the GCS and multiple drones. This ad-hoc network enables dynamic, peer-to-peer communication, ensuring uninterrupted data transmission even in remote or challenging environments where traditional infrastructure may be absent. Moreover, the wildfire detection system integrates with third-party entities such as fire departments through proprietary APIs and data channels developed by governmental bodies, hence enhancing emergency response capabilities. Additionally, the system leverages data from IoT devices in the form of long-range thermal cameras deployed across monitored areas, utilising cellular networks to pass information back to the GCS.
Figure 7. Communication Hierarchy of the proposed system.
The GCS serves as the central entity for high-level mission planning and overall monitoring. IoT devices are accessible by anyone, while each UAV is equipped with its middleware located on the ground that enables ad hoc communication and decision-making, as well as lower-level control in the event of a GCS failure. The GCS manages task allocation, while path generation and replanning, if necessary, are handled by the UAV middleware. If UAVs lose communication with the remote controller, they continue data acquisition during a grace period by leveraging their onboard mission computer. The UAVs will return to their takeoff points if the issue remains unresolved after this period. The GCS is notified of such failures and reallocates the unmonitored areas to other agents. Additionally, when a new agent is added to the system, the GCS can issue on-the-fly task reallocation within a designated area to reduce revisit time. In the proposed multi-agent system for wildfire detection, communication poses significant challenges that must be addressed to ensure effective coordination and cooperation among various components. The GCS, IoT devices, and UAV middleware can be designed as independent microservices, each responsible for specific functions such as mission planning, real-time monitoring, and lower-level control. These microservices communicate asynchronously through an event-based architecture, enabling efficient and resilient data exchange. Events such as UAV status updates, task completions, or communication failures trigger specific microservices to take appropriate actions without direct dependencies, ensuring that the system remains responsive and adaptable. For instance, an event is generated if a UAV loses connection with the GCS, prompting the middleware to switch to onboard decision-making mode. Simultaneously, the GCS microservice receives the event, reallocates tasks, and updates other UAVs. 
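The event-based coordination described above can be sketched with a minimal publish/subscribe bus. This is an illustrative in-process sketch; a real deployment would use a message broker (e.g. MQTT or AMQP), and the event and field names here are assumptions:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: microservices register handlers
    for named events and react without direct dependencies on each other."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event, handler):
        self.handlers[event].append(handler)

    def publish(self, event, payload):
        for handler in self.handlers[event]:
            handler(payload)

log = []
bus = EventBus()
# Middleware service: switch the affected UAV to onboard decision-making.
bus.subscribe("uav_link_lost", lambda p: log.append(("onboard_mode", p["uav"])))
# GCS service: reallocate the subregion left unmonitored.
bus.subscribe("uav_link_lost", lambda p: log.append(("reallocate", p["area"])))

bus.publish("uav_link_lost", {"uav": "uav-3", "area": "sector-7"})
```

A single published event thus triggers both reactions independently, which is exactly what keeps the middleware and the GCS loosely coupled when an agent drops out or a new one joins.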
This architecture allows the system to scale dynamically, as new UAVs can be integrated seamlessly with real-time task reallocation, ensuring continuous and coordinated wildfire monitoring and detection. Additionally, the constellation of satellites collecting data from Low Earth Orbit (LEO), equipped with Inter-Satellite Links (ISLs), is emerging as a key solution for real-time fire data transmission. As mentioned above, a constellation comprising a significant number of satellites in orbit near the Earth’s surface facilitates the rapid collection and dissemination of data. Equipped with the appropriate sensors, the satellites are able to monitor and collect data on forests and large fires, offering important information. The integration of Inter-Satellite Links in the satellite constellation ensures continuous communication and data exchange between satellites, thus improving data transmission efficiency even in isolated areas. This technology will enable both the detection of fires through the coordination of systems and the rapid transmission of data to ground stations, ultimately helping to mitigate the devastating effects of fires. In a constellation with ISLs, the system works cumulatively: whichever satellite detects an anomaly, the other satellites of the system can acquire the information through the ISL without having direct access to the region, reducing the limitations caused by sensor FOV and maximising data return. Likewise, a high-risk area can be monitored by a passing satellite even with off-nadir pointing when nadir pointing is not available, and the generated data can be sent to the satellite closest to the ground station in order to transmit the results.
This multi-tiered communication architecture aims to leverage the strengths of each individual system and, as a result, generate optimal situational awareness information, enabling proactive measures to mitigate wildfire risks and safeguard communities and ecosystems.

6.1. Interoperability across Heterogeneous Communication Messages and Components

The system comprises various components, including drones, satellites, and IoT devices, each with different communication protocols and data formats. This heterogeneity can complicate data exchange and integration. Standardising communication protocols and adopting interoperable data formats can mitigate these issues. Middleware solutions, such as message brokers or protocol translation layers, can facilitate seamless communication between heterogeneous agents. For example, using a common data interchange format like JSON or XML can ensure that messages are correctly interpreted by all agents in the system. Previous work from [47] describes how JSON is entirely language-neutral. The advantages of standardisation relate to the avoidance of the needless overhead that comes with a high degree of flexibility, and to a lightweight framework. Because JSON is simple to interpret and does not require special handling, it may be utilised on heterogeneous compute nodes. In this way, it offers a productive approach to transferring different kinds of data and taking advantage of partial and dynamic reconfiguration in the field of distributed embedded systems. The end product is a highly adaptable and versatile communication framework with a uniform format; hence, in combination with HTTP POST and GET requests, the system shall facilitate seamless data exchange.
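A uniform JSON status message, as argued for above, might look as follows. The schema (field names and structure) is an illustrative assumption, not a published standard:

```python
import json

def make_status_message(agent_id, agent_type, lat, lon, battery_pct, alert=None):
    """Serialise a uniform agent status message to JSON. Any agent type
    ("uav", "iot", "satellite") uses the same schema, so the GCS can
    parse all inbound traffic with one code path."""
    msg = {
        "agent_id": agent_id,
        "agent_type": agent_type,
        "position": {"lat": lat, "lon": lon},
        "battery_pct": battery_pct,
        "alert": alert,  # e.g. {"kind": "fire", "confidence": 0.93}, or None
    }
    return json.dumps(msg)
```

Such a message body would typically travel as the payload of an HTTP POST to the GCS, with GET requests used to poll agent state in the opposite direction.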
In addition, the authors in [48] built upon this knowledge and presented a new interoperability technology based on a distributed programming language (and its execution platform) that combines behaviour description (not just data) with platform independence and self-description capabilities, which are displayed by current data description languages; full separation of data and metadata (optimizing message transactions); and native support for binary data (removing the need for encoding or compression). Authors in [49] addressed the challenge of ensuring interoperability in heterogeneous multi-agent systems within the Industrial Internet of Things. Recognising the diverse tasks, control systems, and vendor origins of software agents, they emphasised the importance of interoperability for system functionality. They applied the Levels of Conceptual Interoperability Model to understand and measure interoperability issues systematically. By closely examining FIPA-ACL, they identified supported interoperability levels and uncovered certain shortcomings. To address these issues, they proposed the concept of interoperability rules, which enhance MAS interoperability. They also explained how these rules could be used for verification during engineering at the unit level and discussed their potential application for runtime verification and self-assessment. Another good example of a model-driven approach is presented by [50], which aims to reduce development efforts to ensure the interoperability of complex software systems. They introduced lightweight interoperability models designed to monitor and test the execution of running software, enabling the quick identification and resolution of interoperability issues. Additionally, they presented a graphical model editor and testing tool, demonstrating how visual models enhance textual specifications. 
Using case studies from the FIWARE Future Internet Service domain, the authors showed that their software framework can effectively support non-expert developers in addressing interoperability challenges. In addition, such systems need to be cyber-secure. The authors in [51] designed a method that requests an OAuth access token from the authorisation server when a client wants to reuse an earlier authentication and authorisation, by combining JSON Web Token (JWT) with OAuth 2.0. Experimental assessment confirms the scheme’s practical efficiency and shows that it employs signatures, removes the need to maintain refresh tokens, eliminates secure-storage overhead, and avoids various security attacks, all of which are highly desirable in healthcare services that use IoT cloud platforms.

6.2. Bandwidth Limitations

One of the primary challenges in a multi-agent system is managing communication bandwidth. Given the high volume of data transmitted between drones, satellites, and the Ground Control Station (GCS), efficient bandwidth utilisation is crucial. Bandwidth limitations can lead to delays or loss of critical data, undermining the system’s effectiveness in early wildfire detection. To address this, adaptive bandwidth management techniques can be employed. These techniques prioritise data based on urgency and importance, ensuring that essential information is transmitted first. Furthermore, data compression algorithms can reduce the size of transmitted data, thereby conserving bandwidth. A wireless ad hoc network is made up of a group of mobile nodes without any fixed infrastructure. Nodes must cooperate in dynamic and dispersed contexts where there is no central authorisation facility. The plethora of devices connected to this system requires a network architecture in which direct communication between drones is possible while a connection with the GCS is also maintained. Ad hoc networks introduce fault tolerance by nature but also pose security risks.
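The urgency-based prioritisation described in this subsection can be sketched as a simple priority queue drained against a per-cycle bandwidth budget; the message classes, priority ordering, and sizes below are illustrative assumptions, not measured values.

```python
import heapq

# Lower number = more urgent; classes and sizes are hypothetical.
PRIORITY = {"FIRE_ALERT": 0, "TELEMETRY": 1, "VIDEO_CHUNK": 2}

def schedule(messages, budget_kb):
    """Return the message types sent this cycle, most urgent first, within budget."""
    queue = [(PRIORITY[m["type"]], i, m) for i, m in enumerate(messages)]
    heapq.heapify(queue)
    sent, used = [], 0
    while queue:
        _, _, msg = heapq.heappop(queue)
        if used + msg["kb"] <= budget_kb:
            sent.append(msg["type"])
            used += msg["kb"]
    return sent

msgs = [{"type": "VIDEO_CHUNK", "kb": 400},
        {"type": "FIRE_ALERT", "kb": 2},
        {"type": "TELEMETRY", "kb": 10}]
print(schedule(msgs, budget_kb=50))   # the fire alert and telemetry fit; the video chunk does not
```

Under a constrained link, the fire alert is always transmitted first, while bulk video is deferred until spare capacity exists; a production scheduler would additionally re-queue deferred traffic for later cycles.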
Authors in [52] propose a solution to avoid spoofing, wherein an intermediate node asks its subsequent hop to send a confirmation message to the source in order to reinforce the accuracy of the route discovery procedure. After receiving a confirmation message and route reply, the source applies its policy to assess a path’s legitimacy. As a result, this tactic deters malevolent nodes from intercepting packets. Another major concern is signal reception and routing reconfiguration under rapid changes to the network. Authors in [53] investigate the networking of unmanned aerial vehicles, or “flying ad hoc networks” (FANETs). The high-speed mobility, environmental factors, and topography of FANETs make current mobile ad hoc routing methods inappropriate. They suggest a mixed omnidirectional and directional transmission strategy with dynamic angle adjustment to overcome these challenges. Their scheme uses position and trajectory data to combine unicast and geocast routing. The resilience of their protocol is ensured by 3-D estimation of intermediate node positions and directed transmission toward the predicted site, which allows a greater transmission range and enables the tracking of a changing topology. While ad hoc networks are an effective, tested, and widespread solution, bandwidth limitations play a crucial role in the scalability of this system. Long-range communications will require additional hops, which further limits available bandwidth. The authors in [54] have analysed the required bandwidth for video streaming of raw and encoded footage as well as telemetry and C2-oriented messages. Starting with video streaming, they found that raw video streams are in the realm of hundreds of Mbps, while encoded 1080p 30 fps video requires about 6 Mbps. Lower-resolution video can be delivered at 1–2 Mbps, albeit at lower quality.
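A quick capacity check follows from these figures. The sketch below assumes the 6 Mbps encoded-video rate reported in [54], a hypothetical 54 Mbps shared channel, and the common first-order rule that each relay hop in a single-radio ad hoc chain divides usable throughput; the channel rate and hop model are assumptions, not measurements from the cited work.

```python
# Rough link-budget sketch for relayed video over an ad hoc chain.
LINK_MBPS = 54.0    # hypothetical shared-channel capacity
VIDEO_MBPS = 6.0    # encoded 1080p 30 fps rate reported in [54]

def max_video_streams(hops):
    # First-order approximation: each relay hop re-transmits on the
    # same shared channel, dividing usable end-to-end throughput.
    usable = LINK_MBPS / max(1, hops)
    return int(usable // VIDEO_MBPS)

for hops in (1, 2, 3):
    print(hops, "hop(s):", max_video_streams(hops), "concurrent 1080p streams")
```

Even this crude model shows why long-range, multi-hop links cap the number of simultaneous live feeds a swarm can deliver, motivating the prioritisation and compression techniques discussed above.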
The C2-MAVLink control stream for telemetry and control required 0.4 Mbps, and the C2-Futaba SBUS protocol would require only 0.1 Mbps. Efficient communication methods have been proposed by the authors in [55]. A two-tiered communication strategy is proposed to enhance multi-agent system architectures. This approach integrates the strengths of Agent Communication Languages (ACLs) with methods commonly used in the robotics community. Named “backchannels”, this extension has been implemented on the RETSINA MAS, which utilises the Knowledge Query and Manipulation Language (KQML). The authors detailed the backchannel extension and the necessary supporting network drivers and demonstrated significant analytical and experimental performance improvements achieved through backchannels. They also discussed the successes of this approach in a search-and-rescue robot system, showcasing its practical effectiveness.

6.3. Real-Time Communication and Coordination

Achieving real-time communication and coordination is paramount for the timely detection of and response to wildfires. Delays in data transmission or processing can lead to significant consequences. To enhance real-time capabilities, edge computing can be employed. By processing data locally on drones or near the data source, the system can reduce latency and make faster decisions. As previously mentioned in [27], the authors developed a real-time image classification and fire segmentation model optimized for efficient operation on UAVs, leveraging hardware acceleration with the Jetson Nano P3450 and the TensorRT deep-learning inference library. The research systematically explored the impact of activation functions, quantization techniques, and CUDA-accelerated optimizations on deep learning models for image classification, using a UAV-collected forest fire dataset.
Their findings indicated that FP16 quantization significantly enhanced throughput and reduced latency, offering valuable insights for optimizing efficiency and accuracy in image fire-segmentation scenarios, with promising applications in drone deployments. While our system has multiple ways to perform data acquisition, only the UAVs are highly adaptable and can directly respond to information others gather. Building upon this, efficient coordination for UAVs is crucial; authors in [56] proposed a collaborative planning and control algorithm to enhance cooperation among composite teams of autonomous robots in dynamic environments. Their framework consists of a high-level multiagent state-action-reward-time-state-action algorithm within a partially observable semi-Markov decision process, enabling perception agents to learn surveillance in environments with unknown dynamic targets. Additionally, a low-level coordinated control and planning module ensures probabilistically guaranteed support for action agents. Authors in [57] proposed a predictive framework for multi-UAV cooperation in wildfire monitoring. Their approach allows UAVs to infer latent fire propagation dynamics, enabling time-extended coordination in safety-critical conditions with probabilistic performance guarantees. The framework includes novel analytical temporal and tracking-error bounds to optimize resource distribution and ensure comprehensive fire area coverage based on estimated states. This method, validated through simulation and physical multi-robot testbed demonstrations, showed significantly reduced tracking errors—7.5 times and 9.0 times smaller than state-of-the-art model-based and reinforcement learning benchmarks, respectively. Continuing with wildfire monitoring, authors in [58] presented a distributed control design for a team of UAVs to collaboratively track dynamic environments, specifically focusing on wildfire spreading. 
Their system enables UAVs to follow the expanding border of a wildfire while maintaining coverage of the entire affected area. Lastly, authors in [59] take firefighter safety into account by proposing an approach to estimate latent fire propagation dynamics and parameters using an adaptive extended Kalman filter, a predictor, and the simplified FARSITE wildfire propagation model. This approach prioritizes firefighter safety by providing real-time information about propagating firefronts. They adapted vehicle-routing literature to develop a distributed control system for track-based fire coverage. Additionally, they derived a mathematical observation model to map UAV sensor data from state space to observation space. These models were combined to create a dual-criteria objective function for controlling a fleet of UAVs specifically designed for wildfire monitoring. This objective function aims to minimize environmental uncertainty in local, human-centred areas and maximize coverage through ensemble-level formation control of the UAV network.

6.4. Dynamic and Adaptive Communication Mechanisms

The system must adapt to changing environmental conditions and operational requirements. A dynamic communication mechanism that can adjust to varying network conditions and operational contexts is essential. Machine learning algorithms can be leveraged to predict network congestion and dynamically adjust communication strategies accordingly. For instance, reinforcement learning can help optimize routing paths and bandwidth allocation based on real-time network performance data. Regarding reinforcement learning, which the system can leverage to enhance decision-making on the GCS and the UAVs, several studies have explored solutions to communication challenges in multi-agent systems. For the efficient learned communication strategy proposed in [60], the authors developed a novel end-to-end heterogeneous graph-attention architecture for multi-agent reinforcement learning.
This architecture facilitates learning efficient, heterogeneous communication protocols among cooperating agents to accomplish shared tasks. They designed a differentiable encoder-decoder communication channel that enables agents to learn efficient binary representations of states, significantly improving cooperativity. Their binarized communication model achieved a 200× reduction in the number of communicated bits per round compared to baselines while also setting a new state of the art in team performance. Additionally, they introduced the Multi-Agent Heterogeneous Actor-Critic (MAHAC) framework, which learns class-wise cooperation policies in composite robot teams, showing superior performance over a centralized critic with fewer model parameters. Empirical evidence demonstrated that their HetNet architecture is robust to varying bandwidth limitations and team compositions, achieving performance improvements between 8.1% and 434.7% over baselines across different domains. Moreover, the authors in [61] proposed a novel multi-agent reinforcement learning algorithm called Multi-Agent Graph-attention Communication (MAGIC). This algorithm features a graph-attention communication protocol that includes a Scheduler to determine when to communicate and whom to address messages to, and a Message Processor using Graph Attention Networks (GATs) with dynamic graphs to handle communication signals. MAGIC demonstrated 27.4% more efficient communication on average compared to baselines, showed robustness to stochasticity, and scaled to larger state-action spaces. The effectiveness of MAGIC was further demonstrated on a physical multi-robot testbed. The ad hoc network we propose for parts of the system can be further enhanced by the work presented in [62]. The authors introduced a fault-tolerant ad hoc on-demand routing protocol (FT-AORP) designed to exploit the characteristics of Mobile Ad-hoc Network (MANET) nodes for reliable path determination in data transmission.
FT-AORP leverages node characteristics to identify robust paths, transmitting duplicates of the original data packets over two discovered paths to maximize fault tolerance. The performance of the proposed protocol was evaluated using the OMNeT++ network simulator across varying simulation parameters: the number of network nodes, node speed, and data packet sending rate. Results from extensive simulation experiments demonstrated that FT-AORP significantly enhanced packet delivery ratios, reduced end-to-end delays, and maintained higher residual energy levels along transmission paths compared to baseline routing protocols. Lastly, the authors in [63] proposed adopting Named Data Networking (NDN) for communication specifically in the context of disaster relief. They introduced a proactive routing protocol that also supports reactive routing. In this approach, each node broadcasts its existence to neighbors and synchronizes the Network Information Base to update its Forwarding Information Base (FIB). A universal entry set in the FIB can probe potential paths, maintaining multiple next-hops for each prefix to support multipath forwarding. By duplicating or splitting the queue of Interests at a node, the protocol can utilize multipath routes to enhance transmission reliability and efficiency.
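The gain from FT-AORP-style duplication over two discovered paths can be seen with a short probability sketch; the per-link success rates and hop counts below are illustrative assumptions, not results from [62].

```python
# Why duplicating a packet over two disjoint paths raises delivery ratio:
# the packet is lost only if BOTH copies are lost.

def path_success(link_success, hops):
    """End-to-end success of one path with independent per-link losses."""
    return link_success ** hops

def duplicated_success(p1, p2):
    """Delivery probability when copies traverse two disjoint paths."""
    return 1 - (1 - p1) * (1 - p2)

p_a = path_success(0.9, 3)   # primary 3-hop path, 90% per-link success
p_b = path_success(0.9, 4)   # longer 4-hop backup path
print(f"single path: {p_a:.3f}, duplicated: {duplicated_success(p_a, p_b):.3f}")
```

The duplicated delivery probability exceeds either single path on its own, at the cost of the extra transmissions that FT-AORP accepts in exchange for fault tolerance.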

7. Conclusions

This paper presented an autonomous and scalable monitoring system architecture for the early detection and spread estimation of wildfires by leveraging low-cost UAVs, satellite data, and ground sensors. Existing approaches may have demonstrated efficacy in certain contexts but often lack integration and coordination, resulting in fragmented wildfire management strategies. Through this multi-domain approach and by leveraging the principles of an SoS architecture, we have identified each domain’s strengths and weaknesses, allowing the system to make use of the best assets each platform has to offer while mitigating their weaknesses through intelligent resource usage and data fusion. Starting from the GCS, we have identified ways to alleviate operator cognitive load based on the literature. In addition, we have identified optimal communication approaches to enable scalability. The IoT sensors play a vital role in the system, acting as low-maintenance, low-cost assets able to cover large areas once carefully placed. UAVs, through high levels of autonomy and the principles of swarming, will provide on-demand area coverage to identify wildfires as early as possible and real-time feedback as a wildfire progresses. Lastly, the constellation of LEO satellites will enable communication beyond the visual line of sight, and through their finer ground sampling distance and hyperspectral sensors, they can assess vegetation health, supporting proactive wildfire management strategies and post-fire rehabilitation efforts. As the world confronts an unprecedented wildfire crisis, it is crucial to leverage new technologies that aid in the early detection and, as a result, suppression of wildfires. Our future work involves turning this architecture into an operational system and deploying it alongside existing solutions used by governmental bodies to enhance the early detection of fires.

Acknowledgments
We would like to acknowledge and thank our supervisor and professor Vaios Lappas for his guidance and advice. We would also like to thank and express our gratitude to the two anonymous reviewers who gave us constructive feedback throughout the reviewing process. Last but not least, we would like to thank April for the editorial assistance provided.

Author Contributions

Conceptualization, V.L., D.M., S.P. and P.K.; Methodology, S.P. and P.K.; Software, D.M.; Validation, D.M. and S.P.; Writing—Review & Editing, V.L., D.M., S.P. and P.K.

Ethics Statement

Not applicable.

Informed Consent Statement

Not applicable.

Funding
This research received no external funding.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References
MacCarthy J. The latest data confirms: forest fires are getting worse. World Resources Institute. Available online: (accessed on 15 March 2024).
Neary DG, Ryan KC, DeBano LF. Wildland Fire in Ecosystems: Effects of Fire on Soils and Water; General Technical Report RMRS-GTR-42-vol. 4; Ogden, UT: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: Fort Collins, CO, USA, 2005.
Yao H, Rongjun Q, Xiaoyu C. Unmanned Aerial Vehicle for Remote Sensing Applications—A Review. Remote Sens. 2019, 11, 1443. [Google Scholar]
Candrone. Drones for Wildfire Management. Available online: (accessed on 15 March 2024).
Tzoumas G, Lenka P, Lucio S, Charles S, Thomas R, Sabine H. Wildfire Detection in Large-Scale Environments Using Force-Based Control for Swarms of Uavs. Swarm Intell. 2022, 17, 89–115. [Google Scholar]
Radhika R, William WE, Fatemeh A, Ramon MR-O, Frank P, Scott CB. Survey of Inter-Satellite Communication for Small Satellite Systems: Physical Layer to Network Layer View. IEEE Commun. Surv. Tutor. 2016, 18, 2442–2473. [Google Scholar]
Agency ES. Worldview-3. Available online: (accessed on 7 April 2024).
OroraTech’s Global Wildfire Warning | InCubed. Available online: (accessed on 17 March 2024).
Gorbett GE, Brian JM, Wood CB, Dembsey NA. Use of Damage in Fire Investigation: A Review of Fire Patterns Analysis, Research and Future Direction. Fire Sci. Rev. 2015, 4, doi:10.1186/s40038-015-0008-4.
Filipponi F. Exploitation of Sentinel-2 Time Series to Map Burned Areas at the National Level: A Case Study on the 2017 Italy Wildfires. Remote Sens. 2019, 11, 622. [Google Scholar]
Afghah F, Abolfazl R, Jacob C, Jonathan A. Wildfire Monitoring in Remote Areas Using Autonomous Unmanned Aerial Vehicles. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Paris, France, 29 April–2 May 2019.
Merino L, Fernando C, Martínez-de-Dios J R, Iván M, Aníbal O. An Unmanned Aircraft System for Automatic Forest Fire Monitoring and Measurement. J. Intell. Robot. Syst. 2011, 65, 533–548. [Google Scholar]
Seraj E, Andrew S, Matthew G. Multi-Uav Planning for Cooperative Wildfire Coverage and Tracking with Quality-of-Service Guarantees. Auton. Agents Multi-Agent Syst. 2022, 36, doi:10.1007/s10458-022-09566-6.
Huy XP, Hung ML, David F-S, Matthew D. A Distributed Control Framework for a Team of Unmanned Aerial Vehicles for Dynamic Wildfire Tracking. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017.
Nguyen AQ, Nguyen HT, Tran VC, Huy XP, Pestana J. A Visual Real-Time Fire Detection Using Single Shot MultiBox Detector for UAV-Based Fire Surveillance. In Proceedings of the 2020 IEEE Eighth International Conference on Communications and Electronics (ICCE), Phu Quoc Island, Vietnam, 13–15 January 2021.
Dubey V, Prashant K, Naveen C. Forest Fire Detection System Using IOT and Artificial Neural Network. ICICC 2018, 323–337.
Fire Ignition Detection. Available online: (accessed on 20 March 2024).
Govil K, Morgan LW, Ball JT, Pennypacker CR. Preliminary Results from a Wildfire Detection System Using Deep Learning on Remote Camera Images.  Remote Sens. 2020, 12, 166. [Google Scholar]
Silvanet—AI Wildfire Detection: Dryad Networks. Dryad. Available online: (accessed on 11 April 2024).
Watteyne T. BurnMonitor: An Early Wildfire Detection IoT Solution. Inria. Available online: (accessed on 11 April 2024).
Abdelmalek B, Hafed Z, Amine MT, Kechida A. A review on early wildfire detection from unmanned aerial vehicles using deep learning-based computer vision algorithms.  Signal Process 2021, 190, 108309. [Google Scholar]
Sahin Y, Ince T. Early Forest Fire Detection Using Radio-Acoustic Sounding System. Sensors 2009, 9, 1485–1498. [Google Scholar]
ByoungChul K, JunOh P, Nam J. Spatiotemporal bag-of-features for early wildfire smoke detection. Image Vis. Comput. 2013, 31, 786–795. [Google Scholar]
Labati RD, Genovese A, Piuri V, Scotti F. Wildfire Smoke Detection Using Computational Intelligence Techniques Enhanced With Synthetic Smoke Plume Generation. IEEE Trans. Syst. Man Cybern Syst. 2013, 43, 1003–1012. [Google Scholar]
Findlay M, Peaslee D, Stetter JR, Scott W, Andrew S. Distributed Sensors for Wildfire Early Warnings. J. Electrochem. Soc. 2022, 169, 020553. [Google Scholar]
Shamsoshoara A, Afghah F, Razi A, Zheng L, Fulé PZ, Blasch E. Aerial Imagery Pile Burn Detection Using Deep Learning: The Flame Dataset.  Comput. Netw. 2021, 193, 108001. [Google Scholar]
Briley A, Afghah F. Hardware Acceleration for Real-Time Wildfire Detection Onboard Drone Networks. arXiv preprint arXiv:2401.08105, 2024.
Ahmed MS, Mahmood SA. An Edge Computing Environment for Early Wildfire Detection.  Ann. Emerg. Technol. Comput. 2022, 6, 56–68. [Google Scholar]
Sidhant G, Shagill MD, Arwinder D, Harpreet V, Ashima S. A Yolo Based Technique for Early Forest Fire Detection. Int. J. Innov. Technol. Explor. Eng. 2020, 9, 1357–1362. [Google Scholar]
Seon HO, Sang WG, Soon KJ, Geon-Woo K. Early Wildfire Detection Using Convolutional Neural Network. Commun. Comput. Inf. Sci. 2020, 18–30, doi:10.1007/978-981-15-4818-5_2.
Kaitlin H, Alejandro S. Value and benefits of model-based systems engineering (MBSE): Evidence from the literature. Syst. Eng. 2021, 24, 51–66. [Google Scholar]
Spiller D, Ansalone L, Amici S, Piscini A, Mathieu PP. Analysis and Detection of Wildfires by Using Prisma Hyperspectral Imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 215–222, doi:10.5194/isprs-archives-xliii-b3-2021-215-2021.
Thangavel K, Spiller D, Sabatini R, Marzocca P, Esposito M. Near real-time wildfire management using Distributed Satellite System. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar]
Bürkle A, Segor F, Kollmann M, Schönbein R. Universal ground control station for heterogeneous sensors. J. Adv. Telecomm. 2011, 3, 152–161. [Google Scholar]
Gregorio MD, Romano M, Sebillo M, Vitiello G, Vozella A. Improving Human Ground Control Performance in Unmanned Aerial Systems. Future Int. 2021, 13, 188. [Google Scholar]
Maza I, Caballero F, Molina R, Peña N, Ollero A. Multimodal Interface Technologies for UAV Ground Control Stations. J. Intell. Robot. Syst. 2010, 57, 371–391. [Google Scholar]
Yang J, You X, Wu G, Hassan MM, Almogren AS, Guna J. Application of reinforcement learning in UAV cluster task scheduling. Future Gener. Comput. Syst. 2019, 95, 140–148. [Google Scholar]
Gessesse AA, Melesse AM. Temporal Relationships Between Time Series CHIRPS-rainfall Estimation and eMODIS-NDVI Satellite Images in Amhara Region, Ethiopia; Elsevier: Amsterdam, the Netherlands; 2019; pp. 81–92.
NCJRS Virtual Library. Colors of Smoke and Flame | Office of Justice Programs. Available online: (accessed on 12 April 2024).
Joseph R, Santosh D, Ross G, Ali F. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
Shaik RU, Laneve G, Fusilli L. An Automatic Procedure for Forest Fire Fuel Mapping Using Hyperspectral (Prisma) Imagery: A Semi-Supervised Classification Approach. Remote Sens. 2022, 14, 1264. [Google Scholar]
Thangavel K, Spiller D, Sabatini R, Amici S, Sasidharan ST, Fayek H, et al. Autonomous Satellite Wildfire Detection Using Hyperspectral Imagery and Neural Networks: A Case Study on Australian Wildfire. Remote Sens. 2023, 15, 720. [Google Scholar]
Use Cases. NEPHOS EKT, 2024. Available online: (accessed on 17 March 2024).
Atik ME, Duran Z, Özgünlük R. Comparison of yolo versions for object detection from aerial images. Int. J. Environ. Geoinform. 2022, 9, 87–93. [Google Scholar]
Terven J, Córdova-Esparza DM, Romero-González JA. A comprehensive review of YOLO architectures in Computer Vision: From YOLOV1 to Yolov8 and Yolo-Nas. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar]
Pei S, Lu J, Wang Q, Zhang Y, Liang K, Kan X. An Efficient Forest Fire Detection Algorithm Using Improved Yolov5. Forests 2023, 14, 2440. [Google Scholar]
Delgado J. Service interoperability in the internet of things. Studies Comput. Intell. 2013, 51–87.
Wehner P, Piberger C, Gohringer D. Using JSON to manage communication between services in the Internet of Things. In Proceedings of the 2014 9th International Symposium on Reconfigurable and Communication-Centric Systems-on-Chip (ReCoSoC), Montpellier, France, 26–28 May 2014. pp. 1–4.
Wassermann E, Fay A. Interoperability rules for heterogenous multi-agent systems: Levels of conceptual interoperability model applied for multi-agent systems. In 2017 IEEE 15th International Conference on Industrial Informatics (INDIN), Emden, Germany, 24–26 July 2017. pp. 89–95.
Grace P, Pickering B, Surridge M. Model-driven interoperability: engineering heterogeneous IoT systems. Ann. Telecommun. 2016, 71, 141–150. [Google Scholar]
Solapurkar P. Building secure healthcare services using OAuth 2.0 and JSON web token in IOT cloud scenario. In Proceedings of the 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I), Greater Noida, India, 14–17 December 2016. pp. 99–104.
Lee S, Han B, Shin M. Robust routing in wireless ad hoc networks. In Proceedings of the International Conference on Parallel Processing Workshops, Vancouver, BC, Canada, 21 August 2002, pp. 73–78, doi: 10.1109/ICPPW.2002.1039714.
Gankhuyag G, Shrestha AP, Yoo SJ. Robust and reliable predictive routing strategy for flying Ad-Hoc networks. IEEE Access 2017, 5, 643–654. [Google Scholar]
Poleo KK, Crowther WJ, Barnes M. Estimating the impact of drone-based inspection on the Levelised Cost of electricity for offshore wind farms. Results Eng. 2021, 9, 100201. [Google Scholar]
Berna-Koes M, Nourbakhsh I, Sycara K. Communication efficiency in multi-agent systems. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004. Volume 3, pp. 2129–2134.
Seraj E, Letian C, Gombolay MC. A hierarchical coordination framework for joint perception-action tasks in composite robot teams. IEEE Trans. Robot. 2021, 38, 139–158. [Google Scholar]
Seraj E, Silva A, Gombolay M. Multi-UAV planning for cooperative wildfire coverage and tracking with quality-of-service guarantees. Auton. Agents Multi-Agent Syst. 2022, 36, 39. [Google Scholar]
Pham HX, La HM, Feil-Seifer D, Deans MC. A distributed control framework of multiple unmanned aerial vehicles for dynamic wildfire tracking. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 1537–1548. [Google Scholar]
Seraj E, Gombolay M. Coordinated control of uavs for human-centered active sensing of wildfires. In Proceedings of the American Control Conference (ACC), Denver, CO, USA, 1–3 July 2020.
Seraj E, Wang Z, Paleja R, Martin D, Sklar M, Patel A, et al. Learning Efficient Diverse Communication for Cooperative Heterogeneous Teaming. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, New Zealand, 9–13 May 2022.
Niu Y, Paleja R, Gombolay M. Multi-Agent GraphAttention Communication and Teaming. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, Virtual Event, UK, 3–7 May 2021. pp. 964–973.
Hoang DNM, Rhee JM, Park SY. Fault-Tolerant ad hoc On-Demand routing protocol for mobile ad hoc networks. IEEE Access 2022, 10, 111337–111350. [Google Scholar]
Jin Y, Tan X, Feng W, Lv J, Tuerxun A, Wang K. MANET for Disaster Relief based on NDN. In Proceedings of the 2018  1st IEEE International Conference on Hot Information-Centric Networking (HotICN), Shenzhen, China, 15–17 August 2018. pp. 147–153.