Vol. 1 No. 2 (2025)

Published December 1, 2025
Download Full Issue (PDF)

Articles in This Issue

Original Article
A Hybrid Intrusion Detection Framework for Cyber-Physical Security in Smart Home/Smart City IoT Systems
PDF Full Text
Abstract

The rapid expansion of smart home and smart city technologies has introduced a complex array of interconnected Internet of Things (IoT) devices, exposing both cyber and physical infrastructures to a growing spectrum of security threats. Traditional cybersecurity models are insufficient to address the dynamic and distributed nature of modern cyber-physical environments, particularly in emerging economies where standardized security frameworks are often lacking. This research proposes a unified, hybrid cyber-physical security framework tailored for smart home and smart city IoT systems. Leveraging publicly available datasets such as UNSW-NB15, TON_IoT, and CICIDS2019, we simulate various attack vectors and evaluate a multi-layered intrusion detection system (IDS) that combines both signature-based and anomaly-based machine learning models. The proposed framework is validated using simulated network topologies built with NS-3 and Cooja, focusing on performance metrics including detection accuracy, false-positive rate, and computational overhead. Results demonstrate that our hybrid approach achieves over 95% accuracy in detecting complex multi-stage attacks, while maintaining scalability and adaptability across different IoT environments. The findings contribute to the development of more secure, resilient, and context-aware smart infrastructure systems, offering a practical foundation for real-world deployment in smart cities and connected home ecosystems, especially within developing regions such as Iraq.
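To make the hybrid design concrete, the sketch below chains a signature pass over known attack patterns with an anomaly pass using an Isolation Forest trained on benign flows. This is an illustrative reading of the two-stage approach the abstract describes, not the authors' implementation; the rules, feature names, and thresholds are invented for the example.

```python
# Minimal sketch of a two-stage hybrid IDS: a signature pass first,
# then an anomaly pass on flows the signatures did not flag.
# Rules, feature names, and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

SIGNATURES = [
    ("syn_flood", lambda f: f["syn_count"] > 100 and f["ack_count"] == 0),
    ("port_scan", lambda f: f["unique_dst_ports"] > 50),
]

def signature_pass(flow):
    """Return the name of the first matching signature, or None."""
    for name, rule in SIGNATURES:
        if rule(flow):
            return name
    return None

class HybridIDS:
    def __init__(self, feature_keys):
        self.feature_keys = feature_keys
        self.model = IsolationForest(contamination=0.01, random_state=0)

    def fit(self, benign_flows):
        # The anomaly stage is trained on benign traffic only.
        X = np.array([[f[k] for k in self.feature_keys] for f in benign_flows])
        self.model.fit(X)

    def classify(self, flow):
        hit = signature_pass(flow)          # stage 1: known attack patterns
        if hit:
            return f"signature:{hit}"
        x = np.array([[flow[k] for k in self.feature_keys]])
        if self.model.predict(x)[0] == -1:  # stage 2: statistical outliers
            return "anomaly"
        return "benign"
```

In this arrangement the cheap signature stage handles known threats, while the learned stage catches the novel, multi-stage behavior that signatures alone miss.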

Original Article
Empirical Evaluation of MQTT, CoAP and HTTP for Smart City IoT Applications
PDF Full Text
Abstract

Smart city applications demand lightweight, efficient, and dependable communication protocols to support resource-limited Internet of Things (IoT) devices. This work presents an extensive empirical study of the three most prominent IoT protocols: Message Queuing Telemetry Transport (MQTT), the Constrained Application Protocol (CoAP), and the Hypertext Transfer Protocol (HTTP), emulating real-world smart city use cases on a Raspberry Pi-based testbed. The protocols are analyzed against four primary metrics: latency, message overhead, delivery rate, and energy consumption. ANOVA and Tukey's HSD tests are used to establish the statistical significance of the experimental data. The results indicate that CoAP (under QoS-1 reliability) exhibits the lowest latency and energy consumption, while MQTT, owing to its Quality of Service (QoS) support, is the most reliable. HTTP performs worst across all metrics, mainly because of its verbosity and synchronous nature. The paper also proposes a decision flowchart to help developers choose a suitable protocol for given application requirements. Beyond the measurements themselves, the study offers practical guidance for protocol selection, identifies encryption overhead (over 75%) as a significant cost, and highlights multi-hop network scalability and adaptive protocol-switching mechanisms as open problems. These findings can serve as a basis for designing secure, efficient, and scalable communication protocols in urban IoT settings.
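As an illustration of how such latency measurements are made in practice, the sketch below times MQTT round trips with the paho-mqtt client (1.x callback style) by publishing timestamped payloads to a topic the same client subscribes to. The broker address, topic, QoS level, and message count are placeholders, not details of the paper's testbed; CoAP and HTTP would be timed analogously with their own client libraries.

```python
# Hedged sketch: MQTT round-trip latency via a self-subscribed topic.
import time
import paho.mqtt.client as mqtt

BROKER, TOPIC, N = "localhost", "bench/latency", 100
latencies = []

def on_message(client, userdata, msg):
    sent = float(msg.payload.decode())
    latencies.append((time.time() - sent) * 1000.0)  # milliseconds

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC, qos=1)
client.loop_start()

for _ in range(N):
    client.publish(TOPIC, str(time.time()), qos=1)
    time.sleep(0.05)              # pace the messages

time.sleep(1.0)                   # drain in-flight messages
client.loop_stop()
print(f"mean RTT: {sum(latencies)/len(latencies):.2f} ms "
      f"over {len(latencies)} msgs")
```

A full reproduction would pair each sample with an energy reading and feed the per-protocol samples into an ANOVA (e.g., scipy.stats.f_oneway) as the abstract describes.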

Original Article
Enhancing User and Entity Behavior Analytics in SIEM Systems Using AI-Powered Anomaly Detection: A Data-Driven Simulation Approach
PDF Full Text
Abstract

The growing sophistication of cyber threats exposes the limits of signature-based detection in Security Information and Event Management (SIEM) systems. User and Entity Behavior Analytics (UEBA) advances SIEM by enabling behavior-based anomaly detection, yet legacy approaches struggle with high false positives and poor adaptability to evolving threats. This research proposes an AI-driven UEBA framework that combines deep learning for modeling user behavior with graph-based tools to map system relationships, enhancing anomaly detection in enterprise environments. Using datasets such as CERT Insider Threat, UNSW-NB15, and TON_IoT, we simulate diverse behaviors and evaluate performance. Our Transformer-GNN ensemble achieved an F1-score of 0.90, reduced false positives by 40%, and cut incident triage time by 78% compared to rule-based SIEM. To support real-world use, we provide an open-source pipeline integrating with SIEM platforms via Kafka, Elasticsearch, and a modular ML inference layer. This work bridges AI research and deployable cybersecurity practice, advancing the development of adaptive, intelligent, and robust UEBA systems.
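The ingestion side of the pipeline the abstract mentions (Kafka feeding a modular ML inference layer) can be pictured with the following minimal consumer-scorer loop. Topic names, event fields, the alert threshold, and the stand-in score() function are all assumptions for illustration; the paper's actual inference layer is a Transformer-GNN ensemble.

```python
# Hedged sketch: consume auth events from Kafka, score them with a
# pluggable model, and emit high-scoring events as alerts.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "auth-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)

def score(event):
    """Stand-in for the ML inference layer (the paper uses a
    Transformer-GNN ensemble); returns an anomaly score in [0, 1]."""
    return 0.95 if event.get("failed_logins", 0) > 10 else 0.05

for record in consumer:
    s = score(record.value)
    if s > 0.9:                           # illustrative triage threshold
        producer.send("ueba-alerts", {"user": record.value.get("user"),
                                      "score": s,
                                      "event": record.value})
```

Keeping the scorer behind a single function boundary is what makes the inference layer "modular": the Kafka plumbing stays fixed while models are swapped underneath.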

Original Article
Deep Learning for Enhanced Anomaly Detection in Wireless Communication Networks using Channel State Information (CSI)
PDF Full Text
Abstract

This research introduces a deep learning-based framework for anomaly detection in wireless communication networks using Channel State Information (CSI), a fine-grained physical-layer signal that captures wireless channel dynamics. Traditional detection methods often fall short in identifying subtle or evolving threats, whereas CSI provides a rich, underutilized source for context-aware monitoring. Inspired by its use in human activity recognition, we apply and compare deep learning architectures such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformers to learn normal network behavior and detect anomalies, including spoofing, jamming, rogue access points, environmental disruptions, and Quality of Service (QoS) degradation. The system supports supervised, semi-supervised, and unsupervised settings, accommodating scenarios with limited labeled data. CSI data is collected using tools like the Intel 5300 NIC and Nexmon CSI under both controlled and realistic conditions. We benchmark our models against traditional techniques (e.g., Isolation Forests, Support Vector Machines (SVMs), Principal Component Analysis (PCA)), evaluating accuracy, false positives, latency, and robustness. To enhance transparency, we employ interpretability methods such as Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed Stochastic Neighbor Embedding (t-SNE). Experimental results show that deep learning models outperform classical baselines by up to 30% in detection accuracy. The Transformer architecture achieved 96.2% accuracy with a false positive rate of 3.9%, while the CNN-LSTM hybrid achieved the best latency–performance tradeoff (5.1 ms inference). Compared to Isolation Forest and One-Class SVM, our framework reduced false positives by 10–14%.
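For readers who want a concrete picture of the CNN-LSTM hybrid the abstract benchmarks, here is a minimal PyTorch sketch: 1D convolutions extract per-timestep features across CSI subcarriers, and an LSTM models their temporal dynamics. Layer sizes, window length, and the 30-subcarrier input (the grouping reported by the Intel 5300 CSI tool) are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of a CNN-LSTM hybrid over windows of CSI amplitudes.
import torch
import torch.nn as nn

class CsiCnnLstm(nn.Module):
    def __init__(self, subcarriers=30, hidden=64, classes=2):
        super().__init__()
        self.conv = nn.Sequential(            # input: (batch, subcarriers, time)
            nn.Conv1d(subcarriers, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)  # normal vs. anomalous

    def forward(self, x):                     # x: (batch, subcarriers, time)
        h = self.conv(x).transpose(1, 2)      # -> (batch, time, 32)
        _, (h_n, _) = self.lstm(h)            # keep the final hidden state
        return self.head(h_n[-1])

# Shape check on random data: 8 windows, 30 subcarriers, 256 time samples.
logits = CsiCnnLstm()(torch.randn(8, 30, 256))
print(logits.shape)  # torch.Size([8, 2])
```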

Original Article
Numerical Investigation of Temperature Distribution and Thermal Resistance in a Heat Sink Using Varied Fin Radius and Length
PDF Full Text
Abstract

This research presents a numerical investigation of the thermal and hydraulic performance of a pin-fin heat sink, considering the effects of fin radius, fin length, and array arrangement. A 3D computational model was developed and solved in ANSYS Icepak for conjugate heat transfer and fluid flow under forced convection. The fin length was varied from 1 cm to 8 cm and the fin radius from 1 mm to 6 mm, and both inline and staggered array configurations were investigated to assess their effect on performance. The findings underscore an important trade-off between thermal resistance and pressure drop. As anticipated, the staggered configuration consistently increased the heat transfer coefficient because of better flow mixing and disruption of the thermal boundary layers; this improvement, however, came with a significantly higher pressure drop. The analysis showed that the best configuration is strongly influenced by fin length. With shorter fins of 1-3 cm, the staggered array reduced thermal resistance far more effectively than the inline array. With longer fins of 6-8 cm, the inline configuration frequently provided better overall performance, because its lower pressure drop allowed a higher mass flow rate than the longer staggered flow path. In addition, the fin radius exhibited a nonlinear relationship with performance: increasing the radius enlarged the heat-dissipation area but also increased flow obstruction, so for every combination of length and arrangement there existed a radius that maximized thermal performance. This work provides design rules for the thermal optimization of heat sink geometry, emphasizing the advantage of the staggered array for short fins and of the inline configuration for long fins when minimizing pressure drop is the main concern.
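The central metric and the geometric trade-off the abstract describes can be stated compactly. The definitions below follow standard heat sink practice with our own symbol choices; they are not notation taken from the paper.

```latex
% Thermal resistance of a heat sink: temperature rise of the hottest
% point above the inlet air, per watt of dissipated heat Q.
R_{th} = \frac{T_{\max} - T_{\mathrm{in}}}{Q} \qquad [\mathrm{K/W}]

% Exposed surface area of N cylindrical pin fins of radius r and
% length L (lateral surface plus tip). This grows with both r and L,
% even as a larger r increasingly obstructs the flow, which is why an
% optimal radius exists for each length and arrangement:
A = N \left( 2\pi r L + \pi r^{2} \right)
```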

Review Article
The Next Generation of Smart Healthcare: A Review of Emerging AI Technologies and Their Clinical Applications
PDF Full Text
Abstract

The integration of Deep Learning (DL) techniques with the Internet of Things (IoT) has emerged as a transformative paradigm in the advancement of smart healthcare systems. Numerous recent studies have investigated the convergence of these technologies, demonstrating their potential in improving healthcare delivery, patient monitoring, and clinical decision-making. The ongoing evolution of Industry 5.0 in parallel with the deployment of 5G communication networks has further facilitated the development of intelligent, cost-effective, and highly responsive sensors. These innovations enable continuous and real-time monitoring of patients’ health conditions, a capability that was not feasible within the constraints of traditional healthcare models. Smart health monitoring systems have thus introduced significant improvements in terms of speed, affordability, reliability, and accessibility of medical services, particularly in remote or underserved regions. Moreover, the application of Deep Learning and Machine Learning algorithms in health data analysis has played a pivotal role in achieving preventive healthcare, reducing mortality risks, and enabling personalized treatment strategies. Such methods have also enhanced the early detection of chronic diseases, which previously posed considerable diagnostic challenges. To further optimize scalability and cost-efficiency, cloud computing and distributed storage solutions have been incorporated, ensuring secure and real-time data availability. This review therefore provides a comprehensive perspective on smart healthcare innovations, emphasizing the role of intelligent systems, recent advancements, and persisting challenges in the domain of digital health monitoring.

Review Article
Advancements in Automated Cheating Detection Systems for Online and In-Person Examinations: A Comprehensive Review of Methods, Technologies, and Effectiveness
PDF Full Text
Abstract

The authenticity of tests as a measurement tool has received considerable attention within learning institutions due to the emergence of online classes and remote test administration. Supervision and invigilation methods do not always suffice to deter students from cheating, and Academic Cheating Detection Systems (ACDETS) have therefore been developed. This paper presents a critical analysis of current approaches for identifying cheating in online and face-to-face examination systems. Many approaches exist, including behavioral analysis, facial expression tracking, gesture recognition, voice analysis, and video monitoring. Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and YOLO models, for instance, have markedly improved both the accuracy and scalability of detecting suspicious behaviors. The paper further compares the merits and demerits of these methods and examines their applicability to real-time detection, large-scale examination settings, and varied testing conditions. The paper concludes with an evaluation of the practical applicability of the findings, their limitations, and prospects for further research on monitoring academic integrity.
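As a concrete, hypothetical illustration of the video-monitoring approaches the review surveys, the sketch below runs a pretrained YOLO detector over sampled exam-room frames and flags object classes that plausibly indicate a violation. The model file, class list, sampling rate, and threshold are invented for the example; a production ACDETS would be trained on exam-specific behaviors rather than generic COCO classes.

```python
# Hedged sketch: flag suspicious objects in sampled exam-room frames.
import cv2
from ultralytics import YOLO

SUSPICIOUS = {"cell phone", "book", "laptop"}  # COCO class names of interest
model = YOLO("yolov8n.pt")                     # pretrained COCO detector

cap = cv2.VideoCapture("exam_room.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:                    # sample roughly 1 frame/second
        for box in model(frame, verbose=False)[0].boxes:
            name = model.names[int(box.cls[0])]
            if name in SUSPICIOUS and float(box.conf[0]) > 0.5:
                print(f"frame {frame_idx}: possible violation ({name})")
    frame_idx += 1
cap.release()
```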

Case Study
Beamforming, Handover, and gNB Optimization for 5G/6G mmWave in Enterprise Networks: A ns-3 and NYUSIM-Based Study
PDF Full Text
Abstract

This paper presents a simulation-based framework to optimize 5G/6G mmWave network deployments in enterprise environments. Using ns-3 and NYUSIM, it evaluates next-generation Node B (gNB) placement, beamforming, and handover strategies across factory, office, and campus settings. Leveraging the inherent high bandwidth and low latency of mmWave technology, this study systematically addresses critical challenges such as severe signal attenuation, dynamic blockage, and efficient beam management in complex indoor and outdoor enterprise settings, including large-scale industrial complexes, multi-floor smart offices, and expansive university campuses. Utilizing established open-source network simulators, specifically ns-3, and integrating publicly available, industry-standard channel models such as 3GPP TR 38.901 and NYUSIM, the research proposes and rigorously evaluates novel deployment strategies, advanced beamforming techniques, and intelligent handover mechanisms. The anticipated outcomes include validated guidelines for optimal base station placement, robust performance benchmarks for key enterprise applications (e.g., Ultra-Reliable Low-Latency Communication (URLLC), enhanced Mobile Broadband (eMBB), massive Machine-Type Communication (mMTC)), and an extensible simulation framework. This work aims to provide critical, data-driven insights for telecommunication providers and network planners, enabling them to design and implement superior, reliable, and future-proof 5G/6G connectivity solutions, thereby accelerating digital transformation across various industrial and commercial sectors.
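As a first-order illustration of the placement analysis that precedes full ns-3/NYUSIM runs, the sketch below evaluates the 3GPP TR 38.901 indoor-office LOS path loss, PL = 32.4 + 17.3 log10(d_3D) + 20 log10(f_GHz) dB, over a grid of user positions for one candidate gNB location. The transmit power, beamforming gain, carrier frequency, and room dimensions are illustrative values, not parameters from the study.

```python
# Hedged sketch: worst-case received power for a candidate gNB placement
# under the 3GPP TR 38.901 InH-Office LOS path loss model.
import math

def inh_los_pathloss_db(d_3d_m, f_ghz):
    # TR 38.901 InH-Office LOS, valid for roughly 1 m <= d_3D <= 150 m.
    return 32.4 + 17.3 * math.log10(d_3d_m) + 20.0 * math.log10(f_ghz)

def rx_power_dbm(gnb, user, f_ghz=28.0, tx_dbm=23.0, beam_gain_db=20.0):
    d = math.dist(gnb, user)                  # 3D distance in metres
    return tx_dbm + beam_gain_db - inh_los_pathloss_db(max(d, 1.0), f_ghz)

# Coverage check: ceiling-mounted gNB in a 30 m x 20 m office,
# users sampled on a 5 m grid at 1.5 m height.
gnb = (15.0, 10.0, 3.0)
worst = min(
    rx_power_dbm(gnb, (x, y, 1.5))
    for x in range(0, 31, 5) for y in range(0, 21, 5)
)
print(f"worst-case received power: {worst:.1f} dBm")
```

Sweeping the candidate gNB coordinates and keeping the placement with the best worst-case value gives a quick pre-filter before the blockage- and mobility-aware simulations the paper performs.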