1. Introduction
Beyond routine civilian uses, a geographic area can benefit significantly from UAVs outfitted with optical and thermal sensors, advanced image processing, and computer vision algorithms designed to detect suspicious activities or mitigate fire spread [1]. At an approximate flight altitude of 2 km, fire and human detection algorithms rely primarily on point detection [2]. Thermal images play a crucial role in enhancing accuracy. Factors such as motion patterns, size, and the shape of shadows are considered in human identification. For fire detection, point detectors are complemented with color-based descriptors applied to thermal and optical images [3]. Human detection, a more intricate process, demands sophisticated algorithms because of the challenges posed by high flight altitudes [4]. In images captured at significant heights, people may appear as small dots spanning just a few pixels, necessitating research focused on point detectors for human identification. Shadows, motion detection, and object tracking are used to ascertain whether the identified areas of interest represent human presence [5].
The objective of this paper is to design a quadcopter equipped with an infrared camera and a Passive Infrared (PIR) sensor to enhance fire and temperature detection capabilities. The study investigates the integration of these sensing technologies into a UAV platform and evaluates their effectiveness in early fire detection and temperature monitoring. The primary focus is on developing a robust and efficient system that can autonomously navigate environments, identify potential fire hazards, and provide real-time data by integrating infrared imagery and PIR sensor readings. Additionally, the research contributes insights into the practical applications of this technology, addressing challenges and proposing improvements for future implementations in fire safety and environmental monitoring.
While the current work aligns with these themes, its primary innovation lies in its applicability to high altitudes and vertically captured (nadir) images. This distinguishes it from similar studies that predominantly consider low altitudes and varied shooting angles [6]. A noteworthy aspect of this endeavor is optimizing algorithm speed to enable real-time use during drone flights.
2. Related Literature
2.1. Need for Fire Detection
Human observers, employing their visual capabilities, remain the primary detectors of forest fires. Both traditional fire towers and airborne fire detection methods rely on human vision. The effectiveness of unaided human vision is constrained by factors such as observer understanding, sensitivity, attention, target characteristics, and environmental factors like haze and occlusion [7]. In addition to direct observation, human observers can augment their capabilities using various auxiliary tools and instruments. These tools enhance observer understanding and sensitivity, e.g., in low light or insensitive spectral regions, or provide alternative perspectives, such as viewing remote camera images. Despite advancements in technology, human involvement in fire detection applications remains crucial. Recent reviews of Video Fire Detection (VFD) technology have concluded that a completely reliable VFD system is achievable only with human intervention [8]. Even so, automated analysis and semi-automatic detection technologies contribute to identifying potential fires, managing chaotic situations, and sustaining vigilance. While human operators remain key, computerized systems highlight potential fire incidents, aid chaos management, and maintain overall alertness [9].
Daytime aerial remote sensing, primarily centered on smoke detection, poses a unique challenge for human observers. The ability of smoke to rise above the tree canopy offers a distinctive signal for long-distance detection; on clear days this can extend up to 20–25 km, enabling the identification of even small smoke sources. Initial analyses underscore the pivotal role of the angular size of the smoke, with aircraft altitude providing detection advantages [10]. Altitude influences the effective size of the smoke plume: an increase in height leads to an increase in effective size. These analyses also consider environmental factors such as distance, altitude, topography, haze, background, wind, clouds, the angular position of the sun, and the time of day [11]. The characteristics of the smoke column, including movement, size, shape, and density, are assessed, along with observer factors such as search procedures, experience, attention, and vision. Real-time smoke detection from drones or recorded video is a viable option for daytime detection. However, video smoke detection faces challenges due to apparently slow motion over long distances, variations in smoke and lighting characteristics, and inconsistent image quality [12]. Automated wildfire smoke detection systems therefore rely on a combination of smoke-related characteristics, encompassing color, spatial and temporal correlation, and slow motion.
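As a hedged illustration of how these cues can compose (not an implementation from any of the cited systems), the sketch below combines a grayish, low-saturation color mask with a slow inter-frame motion test to flag candidate smoke regions; all thresholds are assumed example values.

```python
# Illustrative smoke-candidate test: grayish color plus slow apparent motion.
import cv2
import numpy as np

def has_smoke_candidate(prev_bgr, curr_bgr, motion_hi=8, min_pixels=500):
    """Return True if the current frame contains a plausible smoke region."""
    hsv = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)
    # Smoke tends to be low-saturation and mid-to-high brightness (grayish).
    color_mask = cv2.inRange(hsv, (0, 0, 100), (180, 60, 255))
    # Smoke drifts slowly: small but non-zero frame-to-frame differences.
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)
    motion_mask = ((diff > 2) & (diff < motion_hi)).astype(np.uint8) * 255
    candidate = cv2.bitwise_and(color_mask, motion_mask)
    return cv2.countNonZero(candidate) > min_pixels
```

A deployed detector would add the spatial and temporal correlation checks noted above; this sketch shows only how the color and slow-motion cues combine.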
Fire locations can be monitored from mountaintop towers equipped with movable cameras and zoom functions. While these technologies have potential for drone-based smoke detection, comprehensive implementation has yet to be achieved. Drone-based sensors often rely on visual flame detection or thermal imaging cameras for mapping the surrounding area [13]. Nighttime optical or video wildfire detection proves challenging because the canopy obstructs smoke visibility. Detection methods typically involve bright-object identification, often utilizing low-light visible or Near Infrared (NIR) cameras to aid human observers in detecting and locating points of interest. The main challenge in night vision inspection is the low light level, which compromises safety and hinders object detection and general observation [14].
For decades, military night operations have hinged on the reliable use of Night Vision Goggles (NVGs) to extend and fortify operational capabilities. Image-intensified night vision equipment, a technology that amplifies ambient visible light and NIR illumination, offers a substantial advantage over unaided vision for nocturnal search tasks. The heightened level of detail and clarity this equipment provides enhances flight safety and enables visual tasks that would otherwise be impossible. Moreover, NVGs can potentially revolutionize the early detection of wildfires from the air [15]. This capability, in turn, has the potential to enhance firefighting efficiency while significantly reducing associated costs. In aerial firefighting operations, NVGs can provide crucial visibility during nighttime conditions, enabling swift and precise responses to emerging wildfire threats [16].
2.2. Detection Using UAV
Drones are an important tool in risk assessment and rescue operations and are now essential for rapid on-site data collection to aid disaster management, including autonomous mapping, monitoring, and the deployment of flying robots. A frequently reported use case in drone-based disaster management is post-earthquake building damage assessment, where on-site data collected by drones can be processed into information such as the extent, amount, rate, and type of damage, helping rescue teams identify safe paths and anticipate secondary hazards [17]. A model of the site can be reconstructed using drone-based maps or a Light Detection and Ranging (LIDAR) point cloud, which experts can use to identify intact, partially damaged, and high-risk structures. This recognition can be further automated by unsupervised classification or change-detection techniques when pre- and post-event images are available. The highly accurate drone-derived surface model data can now be readily used for disaster monitoring and analysis at a significantly reduced cost [18,19]. Examples include dynamic landslide monitoring and the detection of changes in coastal facilities to help assess vulnerability to natural disasters such as tornadoes. The drone-derived surface model, or Digital Elevation Model (DEM), obtained from the drone platform can reach centimeter-level accuracy and be either georeferenced or co-registered, combined with image-based displacement analysis and interferometric synthetic aperture radar displacement analysis [20].
2.3. Remote and Thermal Sensing
A thermal imaging system is an optical instrument that generates a two-dimensional representation of the surrounding environment based on the intensity of incoming thermal radiation. These sensors convert digitally encoded pixel arrays into images according to the camera's perspective projection, which depends on the lens's focal length. The digital image generated by the camera sensor is characterized by its focal length, spatial resolution, temporal resolution, and dynamic range [21]. The spatial resolution is essentially related to the size of the focal-plane array and defines the number of pixels in the image and the corresponding aspect ratio. The temporal resolution is associated with the device's operating frequency, specifically the frame rate at which the camera generates image data. The dynamic range corresponds to the range of intensities represented [22]. In addition, the generated image depends on automatic gain, a fundamental aspect of thermal image coding. Most cameras use Automatic Gain Control (AGC), which automatically adjusts the signal gain to maintain a consistent output level, especially where the input signal strength may vary. Since fire is a source of extreme heat that emits strong infrared radiation, thermal imaging sensors offer significant benefits for fire-control applications. However, a data-driven algorithm must be employed to realize an autonomous system for early detection and monitoring that does not depend on a human in the loop [23]. A data-driven approach is crucial to understanding the behavior of these sensors in the event of a fire. This section initially presents experimental examples of fire detection, for which a situational-awareness feature engineering method was devised for use at fire scenes.
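As a minimal sketch of the AGC idea (an assumed percentile-stretch variant, not any specific camera's implementation), the following remaps raw thermal counts, e.g., 14-bit, to 8-bit display values so that scene contrast stays usable as input levels drift:

```python
# Percentile-based automatic gain control for a raw thermal frame.
import numpy as np

def agc_8bit(raw_frame, lo_pct=1.0, hi_pct=99.0):
    """Map a raw thermal frame (any integer depth) to uint8 for display."""
    lo, hi = np.percentile(raw_frame, [lo_pct, hi_pct])
    scaled = (raw_frame.astype(np.float32) - lo) / max(hi - lo, 1.0)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)
```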
2.4. Multispectral Imaging Using Sensor
Sensors with many narrow spectral bands have been shown to characterize vegetation types, health conditions, and functions [24]. Compared to multispectral imagery, hyperspectral data is reported to perform better in estimating the chlorophyll content of vegetation. It can also be used to calculate narrow-band stress indices to estimate canopy temperature, carotenoids, fluorescence, plant diseases, crop growth stage, plant and soil conditions, net photosynthesis, crop moisture, and other vegetation parameters [25]. Precision agriculture applications require multispectral data with high spatial resolution, but such data cannot currently be obtained from satellite sensors. Multispectral sensors designed for standard aircraft are often expensive, and these aircraft demand substantial infrastructure, maintenance, and human resources. The UAV is a flexible and rapidly evolving remote sensing platform [26,27]. In terms of temporal flexibility and the ability to fly at very low altitudes, UAVs provide a promising alternative to standard aircraft, yielding images of very high spatial resolution. The very high spatial resolution achievable from small, low-cost drones and the availability of suitable compact sensors have stimulated much remote sensing research and the development of new systems [28,29,30]. Recently, multispectral sensors have been developed for small drones. However, when the total system requirements are considered, multispectral imaging systems are often too expensive and complicated for small companies or individual farmers with no experience in aviation and remote sensing. In addition to the sensors, payload requirements generally include accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) instruments and onboard computers for efficient data collection. Combined with other sensors, such as standard visible imaging sensors (RGB color cameras), these payloads require larger and more expensive drone rigs, increasing system costs. Furthermore, such multispectral sensors often collect data in push-broom mode, one image line at a time [31]. The image geometry is therefore more affected by the rotation of the drone than that of a frame camera, which acquires a two-dimensional picture at each exposure [32]. Consequently, for hyperspectral sensors, post-acquisition data processing and the cost of correcting this progressive geometric distortion are crucial considerations for a UAV-based monitoring system [33,34,35,36].
3. Photographic Operations
3.1. Photographic Methodologies
Unlike other aerial photographic and satellite image analysis tasks, this imagery does not allow features to be characterized directly through visual inspection. To help understand the features present in the image, the remote sensing data must first be classified, followed by various data enhancement methods. Such classification is an involved process requiring careful validation of the training samples used with the chosen classification algorithm, and the approaches fall into two groups:
• Supervised classification techniques
• Unsupervised classification techniques
Supervised classification uses training samples. Training regions are established in the field with real-world (ground truth) data, reflecting what is observed there. The spectral signatures of the training regions are used to search the remaining pixels of the image for signatures that can be distinctly identified; this use of labeled examples in image processing is called supervised classification. In this process, expert knowledge is particularly crucial, because the selection of training images and biased decisions can significantly affect classification accuracy. Maximum Likelihood Estimation (MLE) and Convolutional Neural Networks (CNNs) are standard methods [37,38]. MLE estimates the probability that a pixel belongs to a class, i.e., a feature, and assigns it to its most probable class. To ascertain the most probable class, state-of-the-art CNN-based methods also consider spatial proximity and the entire spectral continuum.
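A compact sketch of the MLE approach, under the usual assumption of Gaussian class distributions (the training data, band count, and class names here are placeholders): each class is modeled from its training pixels, and every image pixel is assigned to the class with the highest log-likelihood.

```python
# Gaussian maximum-likelihood classification of multiband pixels.
import numpy as np
from scipy.stats import multivariate_normal

def mle_classify(pixels, training):
    """pixels: (N, bands) array; training: {class_name: (M, bands) array}."""
    models = {c: multivariate_normal(samples.mean(axis=0),
                                     np.cov(samples, rowvar=False))
              for c, samples in training.items()}
    names = list(models)
    loglik = np.stack([models[c].logpdf(pixels) for c in names], axis=1)
    return np.array(names)[np.argmax(loglik, axis=1)]
```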
In unsupervised classification, no prior information is necessary to organize the attributes of the image. The typical pixel values, i.e., the pixel intensities, are observed, and a model for determining the number of classes in the image is then specified. The higher the significance threshold, the more groups there are. However, the same class can be expressed in multiple categories within a specific limit, illustrating how diversity is represented within the class. Ground truth validation is performed after the clusters are created to define the class to which each image pixel belongs. Hence, unsupervised classification does not require specific prior information about the groups.
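A minimal unsupervised-classification sketch using k-means, one common clustering choice (the cluster count k is a user-chosen assumption, and cluster labels acquire meaning only after the ground-truth validation described above):

```python
# k-means clustering of pixels purely by their spectral values.
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_classify(image, k=5):
    """image: (H, W, bands) array -> (H, W) array of cluster labels."""
    h, w, bands = image.shape
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        image.reshape(-1, bands))
    return labels.reshape(h, w)
```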
3.2. Photographic Basics
Whether employing traditional film or contemporary digital cameras, photography shares core principles across its various aspects. These fundamentals apply regardless of prior knowledge of the subject being photographed and include focus, exposure settings, and filters [39].
3.2.1. Focus
Focusing on a camera involves optimizing three critical parameters: the focal length of the camera lens, the distance between the lens and the object to be captured, and the separation between the lens and the image plane. Proper focus ensures the subject is sharp and well-defined in the resulting photograph.
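These three quantities are tied together by the standard thin-lens relation: a subject at distance $d_o$ from a lens of focal length $f$ forms a sharp image at a lens-to-image-plane separation $d_i$ satisfying

\[ \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}. \]

For a distant subject ($d_o \gg f$), the image plane sits essentially at the focal length, which is why aerial cameras are commonly fixed at infinity focus.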
3.2.2. Exposure Settings
Exposure encompasses the overall process of capturing an image, involving the amount of light that reaches the camera's image sensor or film. It is typically controlled by adjusting two primary factors: the aperture (which regulates the amount of light entering the camera) and the shutter speed (which determines how long the camera's shutter remains open).
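The aperture–shutter trade-off is conveniently summarized by the standard exposure value,

\[ \mathrm{EV} = \log_2 \frac{N^2}{t}, \]

where $N$ is the f-number and $t$ the shutter time in seconds. Combinations with equal EV admit the same amount of light, so doubling the exposure time while increasing $N$ by a factor of $\sqrt{2}$ leaves the exposure unchanged.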
3.2.3. Filters
Filters are physical or digital accessories used in photography to modify or control the characteristics of light entering the camera. While filters influence exposure, they are not a subset of the exposure controls. Filters can be employed to manage specific aspects of light, such as reducing glare, adjusting color temperature, or enhancing contrast, and they provide additional creative control over the visual outcome of the photograph.
3.3. Aerial Thermal Photography
For airborne photography [40], the camera selected for the current paper has a high resolution of 7.2 MP and features autofocus. Particularly noteworthy is its ability to provide effective night vision support in the event of a fire during evening hours. The camera establishes a wireless Wi-Fi connection to a designated smartphone, which serves as an efficient data relay. If a fire is reported at night, an Arduino board and a temperature sensor operate collaboratively: the Arduino activates an alarm if the detected temperature is consistent with fire conditions.
3.4. Infrared and Thermal Imaging
Film-based, digital, and camcorder systems are simple infrared-capable sensors that can capture imagery across three primary frequency bands. These instruments can be designed to collect information from various spectral groups, extending the scope of temperature detection (0.3 accuracy). This range encompasses the Ultraviolet (UV), visible, NIR, mid-infrared, and thermal infrared. However, interpreting thermal images requires careful consideration: thermal images can be visually analyzed, radiometrically calibrated, or processed digitally. In essence, infrared sensors operating in the mid-IR to thermal-IR range, whether on space platforms or airborne systems, follow similar principles [41].
3.4.1. Across Track Scanning
An across-track (whisk-broom) scanner utilizes a rotating or oscillating mirror. It scans the area along scan lines that are at right angles to the flight line, allowing the scanner to repeatedly measure the energy from one side of the aircraft to the other. Data are collected within an arc beneath the aircraft, typically between 90° and 120° wide. As the aircraft progresses, numerous successive scan lines are obtained, resulting in adjacent or closely spaced narrow observation strips and forming a two-dimensional image of rows (scan lines) and columns. The incoming energy is then separated into several spectral components that are detected independently. Such a system records both thermal and non-thermal wavelengths, with a dichroic prism separating the two components. The non-thermal portion is directed from the dichroic prism through a prism (or diffraction grating), dividing the energy into a spectrum of ultraviolet, visible, and near-infrared wavelengths. Simultaneously, the dichroic prism disperses the thermal part of the incoming signal into its constituent wavelengths. By placing an array of electro-optical detectors at precise geometric positions behind the prism and the grating, the incoming beam is effectively "pulled apart" into various spectral bands, each measured independently by a detector designed to have its peak spectral sensitivity in that specific band.
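As an illustrative geometry check (not a figure from the cited source), the ground swath $W$ covered by such a scanner follows from the flying height $H$ and the total scan angle $\theta$:

\[ W = 2H \tan\!\left(\frac{\theta}{2}\right), \]

so at $H = 2$ km with $\theta = 90^\circ$, the swath is about 4 km wide.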
3.4.2. Along Track Scanning
Along-track scanners likewise record multispectral image data over an area beneath an aircraft, generating a two-dimensional image from sequential scan lines oriented at right angles to the flight direction. However, along-track and across-track systems differ in how each scan line is recorded. In an along-track system, there is no scanning mirror; instead, a linear array of detectors is employed. These linear arrays typically consist of multiple Charge-Coupled Devices (CCDs) arranged end to end, with each detector element capturing the energy within a specific portion of the scan line. A single detector's Instantaneous Field of View (IFOV) determines the minimum resolution cell projected onto the ground: the Ground Sampling Distance (GSD) in the across-track direction is set by the detector IFOV, while in the along-track direction the GSD is set by the sampling interval used for the analog-to-digital signal conversion.
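As a worked illustration with assumed numbers: for a small IFOV $\beta$ (in radians) viewed at nadir from height $H$, the across-track ground sampling distance is approximately

\[ \mathrm{GSD} \approx H\beta, \]

so a detector with $\beta = 1$ mrad flown at 2000 m resolves ground cells of roughly 2 m.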
3.4.3. Thermal Imaging
A thermal imager is a specialized multispectral scanner designed with detectors that exclusively sense within the thermal part of the spectrum. These systems are not concerned with reflected radiation; therefore, they can operate during both day and night. Since our eyes are not sensitive to thermal radiation, there is no
natural way to perceive a thermal image visually. Regarding operational range, thermal imaging systems are typically limited to being usable in either (or both) the 3–5 µm or 8–14 µm range of frequencies. Quantum or photon detectors are commonly employed for this purpose. These detectors operate based on the principle of direct interaction between incident radiation photons and charge carriers’ energy levels within the detector material. These detectors must be cooled to very low temperatures to achieve optimal sensitivity and reduce thermal emissions. Often, the detector is enclosed in a “Dewar” containing Nitrogen at 77 K.
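The 8–14 µm band matches the physics of terrestrial scenes: by Wien's displacement law, a surface near ambient temperature ($T \approx 300$ K) radiates most strongly at

\[ \lambda_{\max} = \frac{2898\ \mu\mathrm{m\,K}}{T} \approx 9.7\ \mu\mathrm{m}, \]

which lies squarely within that operating range.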
3.4.4. PIR Sensor
The PIR device features two sensing elements, each crafted from a material sensitive to IR radiation. The lens used in this device has minimal thickness, enabling both elements to see effectively over a significant distance, essentially defining the sensor's sensitivity range. In its passive state, each element consistently perceives a steady amount of IR radiation, typically from the surrounding environment, walls, or the outdoors. When a warm-bodied entity, such as a human or animal, moves into the field of view of one half of the PIR device, it triggers a positive differential change between the two halves; the reverse occurs as the warm body exits the detection area, producing a negative differential change. These change pulses are the distinctive signals that the device recognizes.
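A schematic sketch of this differential logic (hypothetical voltage readings and threshold, not the sensor's actual interface) makes the entry/exit pulse behavior concrete:

```python
# Differential PIR logic: motion appears as a swing between the two elements.
def pir_event(element_a, element_b, threshold=0.2):
    """Classify one pair of element readings (volts) as enter/exit/none."""
    differential = element_a - element_b
    if differential > threshold:
        return "enter"  # warm body entering the first element's view
    if differential < -threshold:
        return "exit"   # warm body leaving toward the second element
    return None         # steady background: both elements see the same IR
```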
3.4.5. Radiometric Calibration of Thermal Images vs Temperature Mapping
Several applications of thermal remote sensing require only the relative variations in radiance among surface features, in which case there is no need to measure surface temperatures precisely. In many situations, however, quantitative temperature measurements must be derived from the data. This necessitates a calibration procedure in which the electronic signals measured by the sensor's detectors are converted into temperature units. There are multiple approaches to calibrating thermal sensors, each with its own level of accuracy and robustness. The choice in any scenario depends on the available data acquisition and measurement instrumentation and the requirements of the specific application. This discussion provides a general overview of the two most commonly used calibration methods: internal blackbody source referencing, and correlation of airborne measurements with surface measurements (an air-to-surface relationship). A significant distinction between the two is that the former does not account for atmospheric effects. Schott presents the different calibration methodologies available.
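A minimal sketch of the internal-blackbody approach, assuming two onboard references at known temperatures that bracket each scan line and a linear detector response (the example temperatures and counts are placeholders, and, as noted above, this mapping ignores atmospheric effects):

```python
# Two-point blackbody calibration: map scene counts linearly between the
# counts recorded over the cold and hot onboard references.
def dn_to_temperature(dn_scene, dn_cold, dn_hot, t_cold=15.0, t_hot=45.0):
    """Linearly interpolate scene digital numbers to temperature (deg C)."""
    gain = (t_hot - t_cold) / float(dn_hot - dn_cold)
    return t_cold + gain * (dn_scene - dn_cold)

# Example: dn_to_temperature(2600, dn_cold=2048, dn_hot=3072) -> ~31.2 deg C
```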
4. The Quadcopter Design
4.1. Drone Equipment
The quadcopter is equipped with sensors, a microprocessor (Raspberry Pi), and an ArduPilot Mega (APM) flight controller. Five sensors are used in the system. The first is a temperature sensor used to measure the temperature in the monitored forest area. It is a non-contact infrared sensor whose principle is similar to a traditional thermal imager, with a resolution of 2 × 2 pixels and a field of view of 5 degrees. The other sensors, a barometer, GPS, IMU, and compass, are integrated into the APM; the GPS and compass serve as the navigation system. The barometer measures air pressure and is used as a reference to maintain the drone's altitude. The IMU consists of accelerometer and gyroscope sensors that estimate the vehicle's attitude. The Raspberry Pi 3, a compact processor, processes the temperature sensor and GPS data. The vehicle's position is communicated to the ground station over a 433 MHz telemetry data link, and the data are sent to a web server for online access in real time. Data transmission uses the Transmission Control Protocol (TCP). The website displays real-time information, including the number of hotspots, their coordinates, and the temperature of each hotspot. The website is integrated with Google Maps, making it easier for users to locate hotspots.
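A sketch of the Raspberry Pi side of this telemetry path (host, port, and field names are assumptions for illustration): each detected hotspot is packaged as JSON and pushed to the web server over TCP.

```python
# Send one hotspot record (coordinates + temperature) to the server via TCP.
import json
import socket

def send_hotspot(host, port, lat, lon, temp_c):
    """Transmit a single JSON-encoded hotspot record over a TCP connection."""
    record = json.dumps({"lat": lat, "lon": lon, "temp_c": temp_c}) + "\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(record.encode("utf-8"))

# Example (hypothetical endpoint):
# send_hotspot("fire-server.local", 9000, -6.2001, 106.8166, 57.3)
```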
4.2. Drone Flight Control
The following control inputs allow the drone to move as the operator intends. For this task, the drone must perform all essential maneuvers: moving left and right, changing altitude, rotating about its axis, and stabilizing itself in unsteady conditions [42].
4.2.1. Roll
The drone’s lateral movement is finely tuned through a sophisticated control system. By modulating the power distribution among its engines and introducing subtle pressure differentials in the propellers, the drone achieves precise adjustments in its trajectory. This allows the drone to roll gracefully about its horizontal axis without altering the vertical position. The delicate interplay of forces and adjustments in propeller performance enables the operator to command the drone with finesse, imparting a subtle inclination to either side. This level of control enhances the drone’s maneuverability and provides a responsive and versatile platform for various applications, from aerial photography to surveillance.
4.2.2. Pitch
The drone's pitch is precisely orchestrated to tilt the airframe forward or backward about its lateral axis, translating the craft horizontally according to operational requirements. Achieving this involves strategically redistributing engine thrust between the front and rear rotors, selectively amplifying the propulsive force on one pair to induce forward or backward movement. This dynamic adjustment allows the drone to advance or retreat in response to user commands, providing a seamless and responsive means of repositioning. The interplay of power distribution within the engines ensures reliable control and a versatile platform for various applications, from capturing aerial perspectives to adapting to environmental conditions.
4.2.3. Yaw
The drone’s rotational agility is harnessed to execute controlled clockwise or anticlockwise pivots around its central axis without altering its altitude. This precision is achieved by manipulating the joystick on the controller, introducing a seamless integration of user input and drone response. When the joystick is rotated clockwise or anticlockwise, it dynamically adjusts the orientation of the drone’s upper section, inducing a controlled spin along its vertical axis. This intuitive control mechanism allows for precise aerial maneuvers. It ensures a user-friendly experience, making it accessible for various applications, from cinematography to search and rescue operations.
4.2.4. Thrust
The drone's propulsion is finely tuned through a control mechanism that regulates the power generated by its engines, adjusting the speed of the propellers to achieve a harmonized collective thrust. This control is exercised through the throttle stick on the drone controller: pushing the stick forward increases propeller speed, generating more thrust so the craft climbs, while pulling it back slows the propellers to reduce thrust and descend. This intuitive and responsive control ensures dynamic and precise maneuverability, allowing the operator to seamlessly adapt the drone's vertical speed to the demands of the environment or the specific task at hand.
4.2.5. Trim
Trim settings are controls designed to restore balance when the drone deviates from stable flight. A specific channel on the remote control is dedicated exclusively to trim adjustments, allowing operators to finely calibrate the force exerted by the engines and propellers. These adjustments become especially instrumental when the drone encounters instability during flight: by using the trim controls, operators can subtly and effectively counteract imbalances, ensuring the drone maintains a stable flight path. This meticulous approach to trim enhances the drone's ability to navigate challenging conditions and provides a user-friendly means of optimizing performance for a more controlled and reliable aerial experience.
4.2.6. Vertical Take-Off and Landing
Vertical Take Off and Landing (VTOL) encapsulates the drone’s remarkable capability to ascend from and descend to the ground with precision. This is accomplished by manipulating the throttle lever to its maximum position and directing multiple engines to exert maximum force. The coordinated effort of these engines propels the UAV skyward, effecting a seamless and controlled ascent. This sophisticated VTOL mechanism facilitates the drone’s departure and arrival and ensures a smooth transition between hovering and upward movement. By strategically managing the power output of its engines, the drone achieves a dynamic and reliable vertical flight, showcasing its versatility across various operational scenarios.
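How the four commands above combine can be made concrete with a standard X-configuration motor mixer, shown here as a hedged illustration (sign conventions vary, and the APM flight controller used in this work applies its own internal mixing):

```python
# Standard X-quad mixer: combine thrust/roll/pitch/yaw into 4 motor outputs.
def mix_x_quad(thrust, roll, pitch, yaw):
    """Map commands (thrust 0..1; roll/pitch/yaw -1..1) to motor outputs.

    Motor order: front-left, front-right, rear-left, rear-right.
    """
    m_fl = thrust + roll + pitch - yaw
    m_fr = thrust - roll + pitch + yaw
    m_rl = thrust + roll - pitch + yaw
    m_rr = thrust - roll - pitch - yaw
    # Clamp each command to the valid actuator range.
    return [min(max(m, 0.0), 1.0) for m in (m_fl, m_fr, m_rl, m_rr)]
```

Trim, in this picture, is simply a small constant bias added to one of the four command channels before mixing.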
4.3. Drone Modeling
CATIA software plays a pivotal role in crafting the intricacies of the 3D model design. The drone's dimensions in Figure 1 and the 3D CAD model views in Figure 2a–c are the foundations for in-depth stress and load-bearing capacity analysis of the real drone shown in Figure 3. The stress analysis was conducted in ANSYS, software known for advanced capabilities that streamline the design process. The seamless compatibility between CATIA designs and ANSYS enhances the efficiency of stress and load assessments, allowing a comprehensive evaluation of the modeled object's structural integrity and performance characteristics. This integrated approach underscores the synergy between cutting-edge design and robust engineering analysis, ensuring a holistic and informed development process.
Figure 1. Dimensions of the drone.
Figure 2. Views of the drone. (a) Top View; (b) Side View; (c) 3D Projection View.
Figure 3. The real quadcopter model.
4.4. Drone Stress Analysis
ANSYS software is employed to analyze the stress and load-bearing capacity of the drone. As listed in Table 1, the materials used in the quadcopter underwent analysis, and their mechanical properties were studied and imported as engineering data. After adjustments to the CAD model, the 3D structure was imported for analysis to ensure compatibility with ANSYS, and the materials were accurately assigned to each component to achieve precise body dimensions for the study. The loads acting on the structure were incorporated by applying forces to the UAV structural body, and gravitational forces acting on the body were also considered. The weights on each component and axis were carefully recorded, culminating in a comprehensive final report. Component weights were incorporated as forces acting on the structure to simulate real-world conditions while considering gravitational loads. Throughout the analysis, attention was devoted to assessing stress levels on each element and node. This study was instrumental in understanding the structural behavior, ensuring the UAV can reliably withstand diverse loading conditions. The insights derived played a pivotal role in refining and optimizing the drone's design, resulting in a robust, high-performance UAV tailored to its mission. Figure 4a,b illustrates the drone's structure before and after deformation, and Figure 4c shows the generated mesh, which was chosen to simplify the analysis by breaking the structure into elements of 50 mm.
Testing covered all essential maneuvers and capabilities of the UAV, such as rolling, pitching, and yawing. The operation of the GPS system was also verified. The ANSYS predictions were compared with actual payload measurements to establish the load-bearing limit of the UAV. Finally, the applications were tested during the development and debugging period to uncover bugs and general coding errors.
■ Scenario 1: The thrust must equal the weight for steady, level flight (hover);
■ Scenario 2: Thrust exceeds weight when climbing; e.g., each motor provides 20 N (80 N in total) while the weight is 50 N. Thrust acts upward and weight acts downward, so the flight is stable;
■ Scenario 3: Scenarios 1 and 2 combined form the resulting analysis of the drone (see the worked check below).
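As a worked check of Scenario 2 with its stated figures: four motors at 20 N each give a total thrust of $T = 4 \times 20 = 80$ N against a weight of $W = 50$ N, so the craft climbs with net acceleration

\[ a = \frac{T - W}{W/g} = \frac{80 - 50}{50/9.81} \approx 5.9\ \mathrm{m/s^2}. \]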
After observing the total deformations in the various scenarios, it becomes evident that the stress distribution within the drone's structure is not uniform. The resultant stress distribution patterns were analyzed by subjecting the structure to the primary force of 50 N on all propellers, applied in the reversed direction to maintain static equilibrium. By mitigating stress concentrations, the drone's components are less prone to localized areas of high stress, which is critical to the longevity and reliability of the system during real-world operations. In essence, these findings contribute valuable insights to the design optimization process. The knowledge gained from these stress analyses aids in fine-tuning the drone's structure, ultimately leading to a more robust and efficient system. This optimization further aligns with the core objective of making quadcopter operation seamless and dependable for end users. Modeling is frequently used to compare devices or maximize their efficiency; hence, research on validating stress modeling in ANSYS continues [43].
Figure 4. CAD modeling and stress analysis of the drone: (a) undeformed; (b) induced stresses; (c) deformed.
Table 1. Material Selection for Drone Components.
4.5. Drone Technical Specification
The quadcopter comprises the following components:
4.5.1. RC Transmitter and Receiver
The Radio Control (RC) transmitter and receiver form a crucial link in the operation of drones. This system uses radio signals to transmit and receive commands, enabling control of the drone's various functions. As the centerpiece of any fully operational drone, these components coordinate its movements and controls: different channels within the system are allocated to specific maneuvers, including rolling, pitching, yawing, and trimming. The RC transmitter and receiver are pivotal in ensuring precise and responsive control over the drone's actions.
4.5.2. Battery
The 6200 mAh 4S1P 14.8 V Lithium-Polymer (LiPo) battery is a lightweight powerhouse, weighing less than equivalently specified batteries of other brands. Offering a stable voltage and a 6200 mAh capacity, it ensures extended and reliable drone operations, and a dedicated charger included in the package charges it efficiently. The battery is integrated into the drone through a purpose-built battery module, which interfaces with the flight controller. This robust setup ensures an extended flight time and a consistent, dependable power supply, enhancing the drone's overall performance during flight operations.
4.5.3. Power Distribution Board
The Power Distribution Board (PDB) plays a pivotal role in the drone's functionality by providing regulated voltage to each Electronic Speed Controller (ESC). It is a crucial safety feature, offering voltage protection if a motor overheats, and it is equipped with surge protection to prevent overvoltage, safeguarding the drone from potential fire hazards. It also ensures a consistent and ample power supply for the flight controller, optimizing the drone's performance. Moreover, the PDB alerts the user when the drone's battery is running low; this timely warning enables the pilot to initiate a safe landing promptly, preventing potential damage to the drone and its surroundings. The PDB is thus a multifaceted component contributing to the safety, efficiency, and overall reliability of the drone's operation.
4.5.4. Arduino Nano
The Arduino Nano is one of the most economical and compact microcontrollers, offering easy programming and usage. Abundant free online user guides contribute to its accessibility, making it an ideal choice for beginners and seasoned enthusiasts alike. This versatile microcontroller integrates with the LM35 temperature sensor to detect fire-relevant temperatures accurately; connecting this sensor adds a sophisticated layer of temperature monitoring to the project, enhancing its capabilities. Powering the Arduino Nano is simplified through a buck-boost converter designed to operate efficiently even at low input voltages. This ensures reliable and stable performance, making the Arduino Nano a go-to choice for a wide range of applications where cost-effectiveness, compactness, and user-friendliness are paramount.
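The alarm logic is sketched below in Python for clarity; the Nano itself would run an equivalent Arduino (C++) sketch. The LM35 outputs 10 mV per °C, while the 5 V reference, 10-bit ADC scale, and 60 °C alarm threshold are assumed example values.

```python
# LM35 conversion and threshold alarm (illustrative values).
def lm35_celsius(adc_counts, vref=5.0, adc_max=1023):
    """Convert a 10-bit ADC reading of the LM35 output to degrees Celsius."""
    volts = adc_counts * vref / adc_max
    return volts * 100.0  # 10 mV per degree C -> 100 degrees C per volt

def fire_alarm(adc_counts, threshold_c=60.0):
    """Return True when the sensed temperature suggests fire conditions."""
    return lm35_celsius(adc_counts) >= threshold_c
```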
4.5.5. Camera
The camera is a 16 MP powerhouse equipped with a wide-range infrared lens, high-speed video recording capabilities, and the unique ability to capture images while syncing with a mobile phone. This camera delivers top-notch picture quality and boasts exceptional features while maintaining a lightweight profile, weighing a mere 72 g. This makes it an ideal companion for travel and outdoor activities, eliminating the need for cumbersome cameras. The technical specifications of the camera include:
• Model: MI 720;
• High resolution for crisp imaging;
• Plug-and-play functionality for user convenience;
• Built-in rechargeable battery for on-the-go usage;
• Lightweight design at only 72 g;
• Excellent night vision resolution;
• Supports built-in memory card for expanded storage;
• Wireless video transmission capability;
• Easily mountable for versatile use;
• CMOS (Complementary Metal-Oxide-Semiconductor) sensor;
• Infrared capability for enhanced imaging in varied conditions;
• Dimensions: 60 mm × 42 mm × 21.2 mm.
In summary, this camera meets high-resolution imaging needs. It excels in portability, ease of use, and diverse application scenarios, making it an optimal choice for capturing moments on the go.
5. Discussion
The current paper introduces a novel technology utilizing standard color cameras mounted on consumer drones. Globally, the recovery of survivors in disaster-stricken areas is a critical, high-priority undertaking. This study turns to drones capable of identifying the presence or absence of life signs from humans and human-shaped objects in various positions on the ground. The technology showcased here demonstrates, for the first time, that standard color cameras on consumer drones can accurately detect life signs at distances of 4 to 8 m with 100% precision under test conditions. The observed efficiency of the proposed system in identifying life signs from subjects in diverse positions highlights its potential for future deployment in search and rescue operations. Subsequent work will refine the system for practical scenarios and increase automation to enhance operational usability. Developing a software simulator of a breathing human in a simulated complex environment is essential for achieving a reliable and thoroughly tested capability, given the many variables and scenarios involved.
6. Conclusions
A quadcopter emerges as a highly effective tool for forest fire detection, covering a radius of 10–15 feet through aerial surveillance. Augmented by a PIR sensor, it identifies the fire's presence and detects nearby warm bodies such as humans and animals. The quadcopter seamlessly captures infrared images and videos that are directly accessible from a connected smartphone. The drone and camera are equipped with wireless connectivity, underscoring the importance of well-established ground operations and communication for efficient ground-to-air control. Once the drone lands, Pixhawk software facilitates the compilation of comprehensive data. This collected information serves as a valuable asset for environmental monitoring and preserving human life, showcasing the potential of advanced technology in safeguarding our surroundings. In addition, there is a vision to expand this initial effort through various future plans. These include integrating additional sensors for a more thorough analysis of fire-related incidents, developing autonomous navigation and decision-making algorithms, and exploring communication and coordination among multiple drones. The research suggests investigating advancements in energy efficiency, real-time data analysis, seamless integration with emergency response systems, and experimental data evaluation for enhanced system performance. Furthermore, applications beyond fire detection, such as environmental monitoring and search and rescue missions, emphasize the technology's adaptability to broader challenges. Attention is also directed toward regulatory compliance and ethical considerations, ensuring the responsible deployment of autonomous drones for enhanced safety and environmental monitoring. These plans aim to contribute to the continuous evolution and practical application of the designed quadcopter system in diverse and dynamic scenarios, supported by robust experimental evaluations of collected data.
Ethics Statement
Not Applicable.
Informed Consent Statement
Not Applicable.
Funding
The authors declare that this research did not receive any financial support.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in the paper.
References
1. Agüera-Vega JR, Carvajal-Ramírez F, Marques da Silva F, Martínez-Carricondo P, Serrano J, Moral FJ. Evaluation of fire severity indices based on pre- and post-fire multispectral imagery sensed from UAV. Remote Sens. 2019, 11, 993.
2. Yang T, Li D, Bai Y, Zhang F, Li S, Wang M, et al. Multiple-object-tracking algorithm based on dense trajectory voting in aerial videos. Remote Sens. 2019, 11, 2278.
3. Yao H, Qin R, Chen X. Unmanned aerial vehicle for remote sensing applications: A review. Remote Sens. 2019, 11, 1443.
4. Campbell JB, Wynne RH, Thomas VA. Introduction to Remote Sensing, 6th ed.; Guilford Press: New York, NY, USA, 2011.
5. Tariq R, Rahim M, Aslam N, Bawany N, Faseeha U. DronAID: A smart human detection drone for rescue. In Proceedings of the 2018 15th International Conference on Smart Cities: Improving Quality of Life Using ICT & IoT (HONET-ICT), Islamabad, Pakistan, 8–10 October 2018; pp. 33–37.
6. Nabeel MM, Al-Shammari S. Fire detection using unmanned aerial vehicle. Al-Iraqia J. Sci. Eng. Res. 2023, 2, 47–56.
7. Boesch R. Thermal remote sensing with UAV-based workflows. In Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics, Bonn, Germany, 4–7 September 2017; pp. 41–46.
8. Baena S, Moat J, Whaley O, Boyd DS. Identifying species from the air: UAVs and the very high-resolution challenge for plant conservation. PLoS ONE 2017, 12, e0188714.
9. Schedl DC, Kurmi I, Bimber O. Search and rescue with airborne optical sectioning. Nat. Mach. Intell. 2020, 2, 783–790.
10. Jónsson SB. RGB and multispectral UAV image classification of agricultural fields using a machine learning algorithm. Master's Thesis, Lund University, Lund, Sweden, June 2018.
11. Taha B, Shoufan A. Machine learning-based drone detection and classification: State-of-the-art in research. IEEE Access 2019, 7, 138669–138682.
12. Shin J, Seo W, Kim T, Park J, Woo C. Using UAV multispectral images for classification of forest burn severity: A case study of the 2019 Gangneung forest fire. Forests 2019, 10, 1025.
13. Auccahuasi W, Bernardo M, Núñez EO, Sernaque F, Castro P, Raymundo L. Analysis of chromatic characteristics in satellite images for the classification of vegetation covers and deforested areas. In Proceedings of the 2018 2nd International Conference on Video and Image Processing, Hong Kong, China, 29 December 2018; pp. 134–139.
14. Moafa A. Drones detection using smart sensors. Master's Thesis, Embry-Riddle Aeronautical University, Daytona Beach, FL, USA, April 2020.
15. Huang Y, Thomson SL, Stephan YM. Multispectral imaging systems for airborne remote sensing to support agricultural production management. Int. J. Agric. Biol. Eng. 2010, 3, 50–62.
16. Dash JP, Watt MS, Pearse GD, Heaphy M, Dungey HS. Assessing very high-resolution UAV imagery for monitoring forest health during a simulated disease outbreak. ISPRS J. Photogramm. Remote Sens. 2017, 131, 1–14.
17. Tauro F, Petroselli A, Arcangeletti E. Assessment of drone-based surface flow observations. Hydrol. Processes 2015, 30, 1114–1130.
18. Puri V, Nayyar A, Linesh R. Agriculture drones: A modern breakthrough in precision agriculture. J. Stat. Manag. Syst. 2017, 20, 507–518.
19. Giebel G, Paulsen SU, Bange J, la Cour-Harbo ARJ, Mayer S, van der Kroonenberg A, et al. Autonomous Aerial Sensors for Wind Power Meteorology: A Pre-Project; Aalborg University: Aalborg Øst, Denmark, 2012.
20. Hentschke M, Pignaton de Freitas E, Hennig CH, Girardi da Veiga IC. Evaluation of altitude sensors for a crop spraying drone. Drones 2018, 2, 25.
21. Kays R, Sheppard J, Mclean K, Welch C, Paunescu C, Wang V, et al. Hot monkey, cold reality: Surveying rainforest canopy mammals using drone-mounted thermal infrared sensors. Int. J. Remote Sens. 2019, 40, 407–419.
22. Chen Z, Wang X, Liang R. RGB-NIR multispectral camera. Opt. Express 2014, 22, 4985–4994.
23. Brauers J, Aach T. A Color Filter Array Based Multispectral Camera; Workshop Farbbildverarbeitung: Ilmenau, Germany, 2006.
24. Tahar KN, Ahmad A, Akib WAAWM, Mohd WMNW. Aerial mapping using autonomous fixed-wing unmanned aerial vehicle. In Proceedings of the 2012 IEEE 8th International Colloquium on Signal Processing and its Applications, Malacca, Malaysia, 23–25 March 2012; pp. 164–168.
25. Riley S. Capturing and analyzing multispectral UAV imagery to delineate submerged aquatic vegetation on a small urban stream. Master's Thesis, Syracuse University, Syracuse, NY, USA, June 2019.
26. Assmann JJ, Kerby JT, Cunliffe AM, Myers-Smith IH. Vegetation monitoring using multispectral sensors: Best practices and lessons learned from high latitudes. J. Unmanned Veh. Syst. 2018, 7, 54–75.
30. Harvey MC, Rowland JV, Luketina KM. Drone with thermal infrared camera provides high-resolution georeferenced imagery of the Waikite geothermal area, New Zealand. J. Volcanol. Geotherm. Res. 2016, 325, 61–69.
31. He X, Liu Y, Kumar A, Arman B, Paul E, Fatima S, et al. A single sensor-based multispectral imaging camera using a narrow spectral band color mosaic integrated on the monochrome CMOS image sensor. APL Photonics 2020, 5, 046104.
32. Farlik J, Kratky M, Casar J, Stary V. Multispectral detection of commercial unmanned aerial vehicles. Sensors 2019, 19, 1517.
33. Xiang TZ, Xia GS, Zhang L. Mini-unmanned aerial vehicle-based remote sensing: Techniques, applications, and prospects. IEEE Geosci. Remote Sens. Mag. 2019, 7, 29–63.
34. Maddikunta PKR, Hakak S, Alazab M, Bhattacharya S, Gadekallu TR, Khan WZ, et al. Unmanned aerial vehicles in smart agriculture: Applications, requirements, and challenges. IEEE Sensors J. 2021, 21, 17608–17619.
35. Fernando HCTE, De Silva ATA, De Zoysa MDC, Dilshan KADC, Munasinghe SR. Modelling, simulation, and implementation of a quadrotor UAV. In Proceedings of the 2013 IEEE 8th International Conference on Industrial and Information Systems, Peradeniya, Sri Lanka, 17–20 December 2013; pp. 207–212.
36. Schmidt MD. Simulation and control of a quadrotor unmanned aerial vehicle. Master's Thesis, University of Kentucky, Lexington, KY, USA, April 2018.
37. Brauers J, Aach T. Geometric calibration of lens and filter distortions for multispectral filter-wheel cameras. IEEE Trans. Image Process. 2010, 20, 496–505.
38. Myung IJ. Tutorial on maximum likelihood estimation. J. Math. Psychol. 2003, 47, 90–100.
39. Zhang X, Zhang X, Wang W. Convolutional neural network. In Intelligent Information Processing with Matlab; Springer: Singapore, 2023.
40. Hassan H, Rahman SA. Integration of aerial photography, airborne LiDAR, and airborne IFSAR for mapping in Malaysia. IOP Conf. Ser. Earth Environ. Sci. 2021, 767, 012020.
41. Präkel D. Basic Photography 01: Composition, 2nd ed.; AVA Publishing: Lausanne, Switzerland, 2012.
42. Aidil M, Panjaitan SN, Yacoub RR. Design and development of flight controller for quadcopter drone control. Telecommun. Comput. Electr. Eng. J. 2024, 1, 279–291.
43. Arora S, Ntantis EL. Customization and payload integration of hexacopter for enhanced grocery delivery. Multidiscip. Sci. J. 2024, 6, 2024126.