Vol. 2 No. 1 (2026)

Published June 1, 2026. Pages: 1-93

Articles in This Issue

Abstract

Keeping IoT networks running reliably in harsh environments remains a difficult problem. Sensors wear out, communication links are unreliable, and maintenance quickly becomes expensive, making traditional monitoring approaches fragile and slow to react. This work presents a self-adaptive, AI-driven Digital Twin framework that continuously tracks the real state of an IoT network and flags failures before they occur. The system mirrors the physical network in real time by combining edge-level data preprocessing, physics-aware Digital Twin simulations, and deep learning models for anomaly detection and remaining-useful-life estimation. To test the idea, we simulated a network of 50 IoT nodes operating under realistic harsh conditions, including thermal stress, high humidity, and signal interference. The proposed framework reached 91% prediction accuracy, detected problems 27 seconds earlier on average, and improved overall network reliability from 84% to 96% compared with standard threshold-based monitoring. The takeaway is straightforward: pairing AI analytics with Digital Twin technology enables proactive and resilient IoT operation in environments where conventional monitoring quickly breaks down. This work lays a practical foundation for deploying AI-enhanced Digital Twins in real-world, next-generation IoT systems where reliability is critical.
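As an intuition for why twin-based monitoring can flag problems earlier than fixed thresholds, the sketch below (our illustration, not the paper's models) scores each sensor reading against a rolling baseline: a drifting or spiking node is flagged as soon as it deviates from its own recent behavior, rather than only once a hard limit is finally crossed.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, z_limit=3.0):
    """Flag readings that deviate strongly from a rolling baseline.

    Returns the indices of flagged samples. A fixed-threshold monitor
    fires only when a hard limit is crossed; a rolling z-score reacts
    as soon as a node departs from its own recent behavior.
    """
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > z_limit:
                flagged.append(i)
        history.append(x)
    return flagged
```

On a stream of readings hovering around 20 with a single spike to 35, only the spike's index is returned; a perfectly constant stream produces no flags.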

Abstract

In the era of digital transformation, oil companies must manage huge volumes of staff data from every part of the company, from management, maintenance, engineering, and geology to front-line drilling teams. This research presents a comprehensive study of big-data accuracy and classification improvement in K-means Clustering Learning (KCL) management for 20,000 employees of an oil company. The data were auto-generated according to global standards and technical specifications; roughly 90% of the human-resources data tables were prepared in advance, and the test kernel used in this research is based on these data. The study focuses on key practical problems such as raising data quality and classifying employees according to various factors, including practical experience, education level, technical expertise, competence achieved in performance evaluations (which may change over time), and safety-training hours. Our methodology incorporates advanced preprocessing techniques, feature engineering, and hyperparameter optimization to achieve better classification accuracy. The experimental results show that the optimized KNN algorithm achieves 94.2% accuracy for employee classification, a significant improvement over the traditional method. This research offers practical lessons for oil companies employing machine learning techniques in human-resources management and improving operational efficiency.
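A minimal sketch of the k-nearest-neighbors classification step (our illustration; the paper's actual features, preprocessing, and tuned k are not reproduced here). Features are assumed to be pre-scaled to [0, 1], e.g. years of experience and training hours each divided by their maximum:

```python
from math import dist
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points. `train` is a list of (feature_vector, label)
    pairs; features are assumed pre-scaled to comparable ranges."""
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Scaling matters here: without it, a feature measured in large units (e.g. training hours) would dominate the Euclidean distance and drown out the others, which is one reason the preprocessing stage affects the final accuracy.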

Review Article
Towards Intelligent and Connected Urban Mobility: 5G and the Internet of Vehicles
Abstract

Internet of Things (IoT) technologies, particularly the Internet of Vehicles (IoV), have transformed transportation, enabling safer, more efficient, and more intelligent mobility solutions. As mobile data and devices increase, cellular networks can support vehicular communication features for both safety and non-safety purposes. This paper examines the integration of IoV with 5G communication technology in a smart city under varying numbers of vehicles. 5G efficiently supports vehicle-to-internet communications, with network slicing offering a practical solution for IoV services. The paper describes the components of an IoV system built on 5G, places 5G-enabled IoV within a smart-city framework for industrial development, and presents simulation results for the proposed IoV-5G system. The results show that 5G-IoV outperforms IoV and LTE in every measured parameter, delivering up to 32% greater channel gain rate, about 65–70% lower network latency, and roughly 20–25% higher network transfer rate. The study examines and summarizes the performance of our simulation platform, implemented with SUMO and Simu5G in the OMNeT++ simulation framework.

Original Article
AI-Driven Threat Intelligence for IoT Networks: Leveraging Machine Learning for Enhanced Intrusion Detection
Abstract

As Internet of Things (IoT) devices continue to spread, they also create many new entry points for cyberattacks. Traditional security methods struggle to keep up, which makes smarter and more adaptive defenses necessary. This paper introduces an Artificial Intelligence (AI)–driven threat intelligence framework designed to improve intrusion detection in diverse IoT networks. The framework combines Machine Learning (ML) and Deep Learning (DL) models to detect malicious activity more accurately across different types of network traffic. To evaluate the approach, three widely used benchmark datasets—UNSW-NB15, CIC-IDS2017, and IoT-Botnet—were used. Experimental results show that the proposed hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model performs very well. It achieved 97% accuracy, a 0.95 F1-score, and a 0.98 Receiver Operating Characteristic – Area Under the Curve (ROC-AUC) on the UNSW-NB15 dataset, outperforming traditional ML models such as Random Forest, which reached 94% accuracy. While DL models provided better detection performance and stronger generalization, ML models proved to be much faster, with nearly three times lower inference latency—about 3 milliseconds per network flow. This makes them more suitable for real-time deployment at the IoT edge, where computing resources are limited. Overall, the proposed hybrid approach strikes a practical balance between detection accuracy and processing speed, offering a scalable and robust foundation for AI-based IoT threat intelligence in real-world environments.
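To give intuition for why convolutional and recurrent stages complement each other on flow data, here is a deliberately simplified pure-Python analogue (not the paper's CNN-LSTM; the kernel and decay values are assumptions): a 1-D convolution picks out local packet-size irregularities, and an exponential moving average stands in for the temporal state an LSTM would carry across the sequence.

```python
def conv1d(seq, kernel):
    """Valid 1-D convolution (cross-correlation): the local feature
    extractor a CNN layer slides over each flow window."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def ema(seq, decay=0.9):
    """Exponential moving average: a crude stand-in for the temporal
    state an LSTM accumulates across a packet sequence."""
    state = 0.0
    for x in seq:
        state = decay * state + (1 - decay) * x
    return state

def flow_score(packet_sizes, kernel=(-1.0, 2.0, -1.0)):
    """Score a flow: an edge-detecting convolution over packet sizes,
    then temporal pooling. Larger scores indicate burstier traffic."""
    return ema([abs(v) for v in conv1d(packet_sizes, kernel)])
```

A steady flow of identical packet sizes scores zero, while a flow with sudden large packets scores high, mirroring how the hybrid model separates benign from bursty, attack-like traffic.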

Abstract

Microwave photonic filters (MPFs) have been proposed as a solution for high-speed, tunable, wideband radio-frequency (RF) signal processing, offering unique advantages over their all-electronic counterparts in bandwidth, tunability, and insensitivity to electromagnetic interference. This article reviews MPF design technologies and applications, covering thermal, electrical, and optical tuning techniques as well as newer methods based on stimulated Brillouin scattering, optical frequency combs, and micro-ring resonators. The survey focuses on programmable optical processors for arbitrary filter synthesis, including liquid-crystal-on-silicon designs, arrayed waveguide gratings, and cascaded resonator designs. Performance metrics are critically examined, including bandwidth, selectivity, out-of-band rejection, tuning range, and energy efficiency, along with practical factors such as environmental stability and fabrication complexity. Recent advances in reconfiguration aided by artificial intelligence and machine learning are presented, and their significance for optimizing adaptive and predictive filters is highlighted. The paper also discusses current constraints, such as integration, power consumption, and environmental sensitivity, and outlines directions toward compact, low-power, ultrafast, and highly flexible MPFs for next-generation RF communication, radio-over-fiber, and cognitive radio systems. The survey is intended as a reference for researchers and engineers seeking to advance the development, testing, and real-world application of state-of-the-art microwave photonic filtering technologies.

Abstract

This paper introduces a new class of adaptive WCCI-based non-inverting step-down/step-up converter that integrates an active Ripple Suppression Engine (RSE) and a dynamic mode-transition controller to simultaneously enhance efficiency and minimize ripple across buck, boost, and buck–boost operating modes. Unlike conventional WCCI ZVT-based step-down converters, which operate in a single region and rely primarily on passive filtering, the proposed topology employs active current injection and ripple-sensing compensation to reshape the inductor-current waveform and attenuate switching-related conduction losses. With the aid of a dual-threshold window comparator and FSM-based logic, the converter achieves highly stable mode transitions free from ringing, overshoot, or mode oscillation. Simulation results validate the superior performance of the proposed architecture, demonstrating more than a 70% reduction in inductor-current ripple and nearly an 80% decrease in output-voltage ripple compared with existing work. The converter also exhibits substantially improved transient behavior, achieving faster settling times and significantly lower voltage undershoot during load-step events, all while utilizing smaller passive components. Furthermore, the proposed scheme maintains high efficiency throughout the full 2.5 V–8 V input range, offering a robust and adaptable alternative to traditional WCCI-based implementations. These findings confirm the suitability of the proposed converter for compact, high-performance power-management applications.
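The dual-threshold (hysteretic) mode selection can be illustrated with a small state function (our sketch; the band width and mode names are assumptions, not taken from the paper). Inside a window around vin ≈ vout the controller simply holds its current mode, which is what prevents mode oscillation at the buck/boost boundary:

```python
def next_mode(mode, vin, vout, band=0.15):
    """Hysteretic mode selection for a non-inverting buck-boost.

    Above the upper threshold the converter steps down (buck); below
    the lower threshold it steps up (boost). Inside the dead band
    around vin ~ vout the current mode is held, so small input ripple
    cannot toggle the mode back and forth (no chattering)."""
    if vin > vout * (1 + band):
        return "buck"
    if vin < vout * (1 - band):
        return "boost"
    return mode  # inside the window: hold the present mode
```

For example, with vout = 5 V the thresholds sit at 5.75 V and 4.25 V; an input ripple of a few hundred millivolts around 5 V never changes the mode, whereas a single comparator at vin = vout would toggle on every ripple cycle.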

Original Article
Robust Feedback Linearization Based PSO Algorithm for Overhead Crane System
Abstract

Overhead crane systems are found in many industrial environments; however, controlling their motion in the presence of nonlinear and underactuated dynamics remains a significant challenge. To address this, a nonlinear control method is proposed to enable the trolley to track the desired trajectory while quickly eliminating payload swing. First, feedback linearization is applied to the crane dynamics. Next, an energy-based compensation is implemented to ensure the boundedness of the system trajectories. Then, Particle Swarm Optimization (PSO) is used to optimally tune the controller parameters. The optimization relies on a multi-objective cost function formulated to simultaneously minimize steady-state error and overshoot while improving robustness against model uncertainties and external disturbances. Finally, the robustness and validity of the proposed control method are demonstrated through simulation of an underactuated crane system in several cases, including reference tracking and robustness against system uncertainty and external disturbances. Simulation results show that the proposed method achieves the shortest rise and settling times among the compared controllers, with zero steady-state error.
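A minimal PSO in the spirit of the tuning step (our sketch: the inertia and acceleration coefficients, and the toy quadratic cost standing in for the paper's multi-objective crane cost, are all assumptions). Each particle is a candidate parameter vector; it is pulled toward its own best position and the swarm's best:

```python
import random

def pso(cost, dim, bounds, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm minimizing `cost` over the box
    [lo, hi]^dim. Returns (best position, best cost)."""
    random.seed(seed)
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the paper's setting, `cost` would run a closed-loop crane simulation for the candidate gains and return a weighted sum of steady-state error, overshoot, and a robustness penalty.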

Review Article
A Narrative Review of AI and IoT-Based Systems for Child Fall Detection and Health Monitoring
Abstract

This narrative study provides an analytical and critical review of recent advances (2019–2026) in the integration of Internet of Things (IoT) and Artificial Intelligence (AI) systems for fall detection and child health monitoring. Unlike prior studies, which concentrated on elderly care and monitoring, this study examines child-specific monitoring environments, including wearable, vision-based, and hybrid systems. It investigates emerging trends such as the combination of deep learning and interpretable AI with multimodal sensor input and edge or fog computing. Data scarcity, real-world deployment limits, privacy concerns, and age-related changes are among the key challenges addressed. The paper identifies important research gaps and proposes future paths toward sustainable, secure, and accessible intelligent child monitoring systems.

Abstract

Transient analysis of electrical circuits is important for determining how circuits respond dynamically to sudden changes such as switching operations. The parallel RLC circuit is crucial in both academic and engineering settings, underpinning the design of devices such as oscillators and filters and the safe dissipation of energy that might otherwise harm people or electronics. This paper examines the application of Second Derivative General Linear Methods (SGLMs) to the governing second-order ordinary differential equation of a damped parallel RLC circuit. By incorporating second-derivative information, SGLMs offer superior stability and accuracy for both non-stiff and stiff systems. This study compares them with traditional methods under overdamped, critically damped, and underdamped conditions. The results have wide implications for the design and optimization of electrical systems, providing a robust framework for accurate, efficient transient analysis. SGLMs achieved significantly lower absolute errors than the classical methods: the maximum error across all simulations was more than 10 times smaller than that of RK4 (fourth-order Runge-Kutta), and the Euler method exhibited even greater deviations. SGLMs remained stable even at larger step sizes (up to h = 0.1), where the other methods either became unstable or lost accuracy.
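For reference, the RK4 baseline the paper compares against can be reproduced in a few lines: classical fourth-order Runge-Kutta applied to the parallel-RLC equation v'' + v'/(RC) + v/(LC) = 0, checked against the analytic underdamped solution. The component values below are illustrative choices, not the paper's:

```python
from math import exp, cos, sin, sqrt

# Illustrative (assumed) component values giving an underdamped circuit.
R, L, C = 2.0, 1.0, 0.25
alpha = 1.0 / (2 * R * C)            # neper frequency, here 1.0
omega0 = 1.0 / sqrt(L * C)           # resonant frequency, here 2.0
omega_d = sqrt(omega0**2 - alpha**2) # damped natural frequency

def exact(t, v0=1.0):
    """Analytic underdamped response with v(0) = v0, v'(0) = 0."""
    return exp(-alpha * t) * (v0 * cos(omega_d * t)
                              + alpha * v0 / omega_d * sin(omega_d * t))

def rk4(h, t_end, v0=1.0):
    """Classical RK4 on the first-order system (v, w = v')."""
    def f(v, w):
        return w, -2 * alpha * w - omega0**2 * v
    v, w, t = v0, 0.0, 0.0
    while t < t_end - 1e-12:
        k1v, k1w = f(v, w)
        k2v, k2w = f(v + h/2*k1v, w + h/2*k1w)
        k3v, k3w = f(v + h/2*k2v, w + h/2*k2w)
        k4v, k4w = f(v + h*k3v, w + h*k3w)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        w += h/6 * (k1w + 2*k2w + 2*k3w + k4w)
        t += h
    return v
```

Measuring |rk4(h, t) - exact(t)| over a sweep of step sizes gives the RK4 error curve against which an SGLM implementation would be compared; the overdamped and critically damped cases follow by swapping in the corresponding analytic solutions.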