In the era of digital transformation, oil companies must manage huge volumes of staff data drawn from every part of the company, from management, maintenance, engineering, and geology to front-line drilling teams. This research presents a comprehensive study of big data accuracy and classification improvement using K-means clustering learning (KCL) for workforce management of 20,000 employees in an oil company. The data were auto-generated according to global standards and technical specifications; roughly 90% of the human resources data tables were pre-populated from files, and the test kernel used in this research is based on this data. The study focuses on key workforce problems such as raising data quality and classifying employees according to various factors, including practical experience, education level, technical expertise, competence achieved in performance evaluations (which may change over time), and safety training hours. Our methodology incorporates advanced preprocessing techniques, feature engineering, and hyperparameter optimization to improve classification accuracy. Experimental results show that the optimized KNN algorithm achieves 94.2 percent accuracy for employee classification, a significant improvement over traditional methods. This research offers practical lessons for oil companies applying machine learning techniques to human resources management and improving operational efficiency.
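The classification step described in this abstract can be illustrated with a minimal k-nearest-neighbors sketch. The features, labels, and toy records below are invented for illustration only; they are not the paper's 20,000-employee dataset, its preprocessing pipeline, or its optimized hyperparameters.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy employee records: (years_experience, safety_training_hours), normalized.
train = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
labels = ["senior", "senior", "junior", "junior"]

print(knn_predict(train, labels, (0.85, 0.75)))  # -> senior
```

In practice the features would first be scaled (KNN is distance-based, so unscaled features such as raw training hours would dominate), and k would be tuned via cross-validation, as the abstract's mention of hyperparameter optimization suggests.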
High utility pattern mining (HUPM) is a key area of data mining concerned with identifying patterns of high utility in transactional databases. Temporal factors such as periodicity and recency, along with dynamic variations in profit, have recently been incorporated into pattern mining; however, no method so far unifies these dimensions in a common framework. To this end, this paper proposes the DTU-Miner algorithm, which integrates temporal constraints and dynamic profit updates to overcome these limitations. Through advanced data structures such as the UPR-List and P-set and several novel pruning strategies, DTU-Miner improves on prior approaches in runtime, memory usage, and pattern quality. Results on benchmark datasets show that DTU-Miner outperforms the state-of-the-art algorithms CPR-Miner and iEFIM-Closed, demonstrating its effectiveness on both dense and sparse datasets with dynamic attributes.
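For readers unfamiliar with utility mining, a brute-force sketch of the core notion may help: an itemset's utility is the quantity times the unit profit, summed over the transactions that contain the whole itemset. The tiny database and threshold below are illustrative assumptions; DTU-Miner's actual data structures (UPR-List, P-set) and pruning strategies avoid this exponential enumeration entirely.

```python
from itertools import combinations

def high_utility_itemsets(transactions, profits, min_util):
    """Brute-force HUPM: return itemsets whose total utility meets min_util."""
    items = sorted(profits)
    results = {}
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            util = sum(
                sum(t[i] * profits[i] for i in itemset)
                for t in transactions
                if all(i in t for i in itemset)  # itemset must appear whole
            )
            if util >= min_util:
                results[itemset] = util
    return results

# Toy database: each transaction maps item -> purchased quantity.
transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"b": 2, "c": 1}]
profits = {"a": 5, "b": 3, "c": 2}  # unit profit per item

print(high_utility_itemsets(transactions, profits, min_util=12))
# -> {('a',): 15, ('a', 'b'): 13}
```

Note that utility, unlike support, is not anti-monotone (a superset can have higher utility than its subsets), which is why HUPM algorithms need upper-bound-based pruning rather than Apriori-style pruning.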
The rapid development of the Internet of Things (IoT) has drawn significant attention from both industry and academia, driven by the integration of cloud computing, big data analytics, machine learning, and cyber-physical systems in manufacturing. Programmable Logic Controllers (PLCs), long central to industrial control systems, have evolved from basic feedback control devices to advanced components capable of networking and data exchange through IoT technologies. The Industrial Internet of Things (IIoT) refers to intelligent automation systems that continuously monitor critical parameters and respond to changes in real time. The integration of IoT with PLCs is transforming industrial automation by enabling remote real-time monitoring, data-driven decision-making, and predictive maintenance through advanced analytics. IIoT technologies enhance manufacturing performance and offer strategic value across sectors. Understanding their impact involves examining current research, including technology assessments and application-based case studies. This study provides an overview of PLC systems evolving into IIoT frameworks, with a focus on implementing proportional-integral (PI) control using the Siemens S7-300. Designed for precise and consistent temperature regulation, this approach enhances process efficiency and product quality, making it highly suitable for industrial and manufacturing environments.
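The proportional-integral control loop described above can be sketched as a discrete-time simulation. The gains, first-order process model, and setpoint below are illustrative assumptions, not the paper's tuned Siemens S7-300 implementation.

```python
def pi_step(setpoint, measured, state, kp=2.0, ki=0.5, dt=1.0):
    """One update of a discrete PI controller: u = Kp*e + Ki * integral(e)."""
    error = setpoint - measured
    state["integral"] += error * dt
    return kp * error + ki * state["integral"]

# Simulate a crude first-order heating process toward a 60 degC setpoint.
temp, state = 20.0, {"integral": 0.0}
for _ in range(200):
    u = pi_step(60.0, temp, state)
    temp += 0.05 * (u - 0.2 * (temp - 20.0))  # heater input minus ambient loss

print(round(temp, 1))
```

The integral term is what removes the steady-state error a proportional-only controller would leave; on real PLC hardware the same update runs inside the scan cycle, with anti-windup clamping on the integral state.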
Smart city applications demand lightweight, efficient, and dependable communication protocols to support resource-limited Internet of Things (IoT) devices. This work performs an extensive empirical study of the three most prominent IoT standards: Message Queuing Telemetry Transport (MQTT), the Constrained Application Protocol (CoAP), and Hypertext Transfer Protocol (HTTP), emulating real-world smart city use cases on a Raspberry Pi based testbed. The protocols are analyzed on four primary metrics: latency, message overhead, delivery rate, and energy consumption. ANOVA and Tukey's HSD tests are used to establish the statistical significance of the experimental data. The results indicate that CoAP (under QoS-1 reliability) shows the lowest latency and energy consumption, while MQTT, owing to its support for Quality of Service (QoS) levels, is the most reliable. HTTP generally ranks at the bottom on all metrics, mainly because of its verbosity and synchronous nature. The paper also proposes a decision flowchart to help developers choose a suitable protocol according to application requirements. Beyond the raw measurements, the study offers practical guidance for protocol selection, identifies encryption overhead (over 75%) as a significant cost, and highlights multi-hop network scalability and adaptive switching mechanisms as open problems. These findings can serve as a basis for designing secure, efficient, and scalable communication protocols in urban IoT settings.
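A decision flowchart of the kind the paper proposes might reduce, in code, to a few ordered rules. The criteria and their ordering below are assumptions loosely based on the reported findings (CoAP lowest in latency and energy, MQTT most reliable, HTTP most interoperable but heaviest); they are not the paper's actual flowchart.

```python
def suggest_protocol(latency_critical, lossy_link, battery_powered, web_integration):
    """Hypothetical protocol-selection rules for smart city IoT nodes."""
    if web_integration:
        return "HTTP"   # REST/browser interoperability outweighs its overhead
    if lossy_link:
        return "MQTT"   # broker-managed QoS retransmission for reliability
    if latency_critical or battery_powered:
        return "CoAP"   # UDP-based: lowest latency and energy in the study
    return "MQTT"       # reasonable default for pub/sub telemetry

print(suggest_protocol(latency_critical=True, lossy_link=False,
                       battery_powered=True, web_integration=False))  # -> CoAP
```

An adaptive switching mechanism, one of the open problems the study identifies, would re-evaluate such rules at runtime as link quality and battery state change.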
This research introduces a deep learning-based framework for anomaly detection in wireless communication networks using Channel State Information (CSI)—a fine-grained physical-layer signal that captures wireless channel dynamics. Traditional detection methods often fall short in identifying subtle or evolving threats, whereas CSI provides a rich, underutilized source for context-aware monitoring. Inspired by its use in human activity recognition, we apply and compare deep learning architectures such as Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), and Transformers to learn normal network behavior and detect anomalies, including spoofing, jamming, rogue access points, environmental disruptions, and Quality of Service (QoS) degradation. The system supports supervised, semi-supervised, and unsupervised settings, accommodating scenarios with limited labeled data. CSI data is collected using tools like the Intel 5300 NIC and Nexmon CSI under both controlled and realistic conditions. We benchmark our models against traditional techniques (e.g., Isolation Forests, Support Vector Machines (SVMs), Principal Component Analysis (PCA)), evaluating accuracy, false positives, latency, and robustness. To enhance transparency, we employ interpretability methods such as Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed Stochastic Neighbor Embedding (t-SNE). Experimental results show that deep learning models outperform classical baselines by up to 30% in detection accuracy. The Transformer architecture achieved 96.2% accuracy with a false positive rate of 3.9%, while the CNN-LSTM hybrid achieved the best latency–performance tradeoff (5.1 ms inference). Compared to Isolation Forest and One-Class SVM, our framework reduced false positives by over 10–14%.
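As a point of contrast with the deep models benchmarked above, a purely statistical baseline for flagging CSI anomalies can be sketched in a few lines: flag any sample that deviates from its trailing window by several standard deviations. The trace, window size, and threshold below are synthetic assumptions for illustration, not the paper's data or method; the CNN/LSTM/Transformer models exist precisely because such thresholding misses subtle, context-dependent anomalies.

```python
import statistics

def csi_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = statistics.mean(hist), statistics.stdev(hist)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Synthetic CSI amplitude trace: a steady channel with one jamming-like spike.
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.02, 5.0, 1.0]
print(csi_anomalies(trace))  # -> [11]
```

Such a detector catches large amplitude spikes (e.g., jamming) but not attacks that preserve signal statistics, such as spoofing, which is where learned temporal models have the advantage.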