Harsh industrial environments such as oilfields present unique challenges to electronic systems, including extreme temperatures, limited connectivity, power constraints, and operational unpredictability. Traditional Internet of Things (IoT) deployments often fail to adapt in real time, exposing systems to risks such as data loss, late anomaly detection, or critical failure. This paper proposes a lightweight, Artificial Intelligence (AI)-driven eSystem architecture tailored for such conditions, integrating edge intelligence, secure communication, and self-adaptive mechanisms. We demonstrate the framework's viability through a simulated case study of real-time sensor data from pipeline infrastructure, applying a Long Short-Term Memory (LSTM)-based anomaly detection model deployed at the edge. Results show significant improvements in detection latency, bandwidth efficiency, and system resilience. The framework offers a modular blueprint for deploying AI-enhanced eSystems across energy, mining, and remote critical infrastructure domains.
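The edge anomaly-detection step can be illustrated with a minimal sketch of prediction-error thresholding: a forecaster predicts the next sensor reading, and readings whose prediction error exceeds a multiple of the recent error spread are flagged. This is a hypothetical illustration, not the paper's actual model; for brevity the LSTM forecaster is stubbed with a moving average, and the function name and parameters are assumptions.

```python
import numpy as np

def detect_anomalies(readings, window=10, k=3.0):
    """Flag readings whose deviation from a moving-average forecast
    exceeds k standard deviations of the recent prediction errors.
    (The moving average stands in for the abstract's LSTM forecaster.)"""
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(len(readings), dtype=bool)
    errors = []
    for t in range(window, len(readings)):
        forecast = readings[t - window:t].mean()  # stub for LSTM prediction
        err = abs(readings[t] - forecast)
        if len(errors) >= window:
            mu, sigma = np.mean(errors), np.std(errors)
            if err > mu + k * sigma + 1e-9:  # small epsilon avoids zero-threshold noise
                flags[t] = True
        errors.append(err)
    return flags

# Example: steady pipeline pressure with one injected spike
data = [50.0] * 40 + [120.0] + [50.0] * 10
print(detect_anomalies(data).nonzero()[0])  # -> [40]
```

Running the threshold check at the edge, as the abstract describes, means only flagged readings (rather than the full sensor stream) need to traverse the constrained uplink, which is where the bandwidth savings come from.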
This research introduces a deep learning-based framework for anomaly detection in wireless communication networks using Channel State Information (CSI), a fine-grained physical-layer signal that captures wireless channel dynamics. Traditional detection methods often fall short in identifying subtle or evolving threats, whereas CSI provides a rich, underutilized source for context-aware monitoring. Inspired by its use in human activity recognition, we apply and compare deep learning architectures such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformers to learn normal network behavior and detect anomalies, including spoofing, jamming, rogue access points, environmental disruptions, and Quality of Service (QoS) degradation. The system supports supervised, semi-supervised, and unsupervised settings, accommodating scenarios with limited labeled data. CSI data is collected using tools such as the Intel 5300 NIC and Nexmon CSI under both controlled and realistic conditions. We benchmark our models against traditional techniques (e.g., Isolation Forests, Support Vector Machines (SVMs), Principal Component Analysis (PCA)), evaluating accuracy, false positives, latency, and robustness. To enhance transparency, we employ interpretability methods such as Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed Stochastic Neighbor Embedding (t-SNE). Experimental results show that deep learning models outperform classical baselines by up to 30% in detection accuracy. The Transformer architecture achieved 96.2% accuracy with a false positive rate of 3.9%, while the CNN-LSTM hybrid achieved the best latency–performance tradeoff (5.1 ms inference). Compared to Isolation Forest and One-Class SVM, our framework reduced false positives by 10–14%.
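Before raw CSI can feed the CNN/LSTM/Transformer models described above, the complex channel matrix is typically reduced to amplitude features and normalized per subcarrier. The sketch below shows this common preprocessing step under assumed conventions (a packets × subcarriers array shape and the function name are illustrative, not taken from the paper):

```python
import numpy as np

def preprocess_csi(csi, eps=1e-9):
    """Convert raw complex CSI (packets x subcarriers, hypothetical shape)
    into per-subcarrier z-scored amplitude features, a common first step
    before windowing the stream for deep anomaly-detection models."""
    amp = np.abs(csi)                       # amplitude of each complex CSI entry
    mu = amp.mean(axis=0, keepdims=True)    # per-subcarrier mean
    sigma = amp.std(axis=0, keepdims=True)  # per-subcarrier std
    return (amp - mu) / (sigma + eps)       # z-score per subcarrier

# Example: 4 packets x 3 subcarriers of synthetic complex CSI
rng = np.random.default_rng(0)
csi = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
features = preprocess_csi(csi)
print(features.shape)  # -> (4, 3)
```

Normalizing per subcarrier removes the static amplitude offsets introduced by hardware and multipath, so the downstream model sees deviations from the learned channel baseline rather than absolute gain levels.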