Smart city applications demand lightweight, efficient, and dependable communication protocols to support resource-constrained Internet of Things (IoT) devices. This work presents an extensive empirical study of three prominent IoT protocols: Message Queuing Telemetry Transport (MQTT), the Constrained Application Protocol (CoAP), and the Hypertext Transfer Protocol (HTTP), emulating real-world smart city use cases on a Raspberry Pi-based testbed. The protocols are compared on latency, message overhead, delivery rate, and energy consumption, and ANOVA and Tukey's HSD tests are used to establish the statistical significance of the experimental data. The results indicate that CoAP (under QoS-1 reliability) achieves the lowest latency and energy consumption, while MQTT, owing to its Quality of Service (QoS) support, is the most reliable. HTTP performs worst across all metrics, mainly because of its verbosity and synchronous request-response model. The paper also proposes a decision flowchart to help developers select a suitable protocol according to application requirements. Beyond the benchmark numbers, the study offers practical guidance for protocol selection, quantifies encryption overhead (over 75%), and identifies multi-hop network scalability and adaptive protocol-switching mechanisms as open problems. These findings can serve as a basis for designing secure, efficient, and scalable communication approaches for urban IoT settings.
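The decision flowchart itself is not reproduced here, but the reported findings suggest selection logic along the following lines. This is a minimal sketch only: the function name, input flags, and rule ordering are illustrative assumptions, not taken from the paper.

```python
def select_protocol(web_interop_required: bool,
                    reliability_critical: bool,
                    energy_constrained: bool) -> str:
    """Illustrative protocol-selection rules based on the study's findings."""
    if web_interop_required:
        # HTTP trailed on every measured metric, so it is chosen only when
        # direct interoperability with web infrastructure is mandatory.
        return "HTTP"
    if reliability_critical:
        # MQTT's QoS levels made it the most reliable protocol in the tests.
        return "MQTT"
    if energy_constrained:
        # CoAP (QoS-1 reliability) showed the lowest latency and energy use.
        return "CoAP"
    # Default to CoAP for general constrained-device deployments.
    return "CoAP"
```

Under these assumed rules, a battery-powered sensor without hard delivery guarantees would map to CoAP, while a delivery-critical actuator command channel would map to MQTT.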
This research introduces a deep learning-based framework for anomaly detection in wireless communication networks using Channel State Information (CSI)—a fine-grained physical-layer signal that captures wireless channel dynamics. Traditional detection methods often fall short in identifying subtle or evolving threats, whereas CSI provides a rich, underutilized source for context-aware monitoring. Inspired by its use in human activity recognition, we apply and compare deep learning architectures such as Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), and Transformers to learn normal network behavior and detect anomalies, including spoofing, jamming, rogue access points, environmental disruptions, and Quality of Service (QoS) degradation. The system supports supervised, semi-supervised, and unsupervised settings, accommodating scenarios with limited labeled data. CSI data are collected using tools such as the Intel 5300 NIC and Nexmon CSI under both controlled and realistic conditions. We benchmark our models against traditional techniques (e.g., Isolation Forests, Support Vector Machines (SVMs), Principal Component Analysis (PCA)), evaluating accuracy, false positives, latency, and robustness. To enhance transparency, we employ interpretability methods such as Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed Stochastic Neighbor Embedding (t-SNE). Experimental results show that the deep learning models outperform classical baselines by up to 30% in detection accuracy. The Transformer architecture achieved 96.2% accuracy with a false positive rate of 3.9%, while the CNN-LSTM hybrid achieved the best latency–performance tradeoff (5.1 ms inference). Compared to Isolation Forest and One-Class SVM, our framework reduced false positives by 10–14%.
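As a concrete, hedged illustration of one classical baseline named above (PCA reconstruction error), not the authors' deep models, an unsupervised anomaly scorer over CSI-like feature vectors could be sketched as follows. The class name, component count, and 3-sigma threshold are assumptions for the sketch.

```python
import numpy as np

class PCAAnomalyDetector:
    """Flag vectors whose PCA reconstruction error exceeds a learned threshold."""

    def __init__(self, n_components: int = 2):
        self.n_components = n_components

    def fit(self, X: np.ndarray) -> "PCAAnomalyDetector":
        # Center the training (assumed-normal) data and extract the top
        # principal directions via SVD.
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        # Calibrate a 3-sigma threshold on the training reconstruction errors.
        errors = self._errors(X)
        self.threshold_ = errors.mean() + 3.0 * errors.std()
        return self

    def _errors(self, X: np.ndarray) -> np.ndarray:
        xc = X - self.mean_
        # Project onto the principal subspace, then measure what is left over.
        projected = xc @ self.components_.T @ self.components_
        return np.linalg.norm(xc - projected, axis=1)

    def predict(self, X: np.ndarray) -> np.ndarray:
        # True marks a suspected anomaly (e.g., jamming or spoofing burst).
        return self._errors(X) > self.threshold_
```

Deep models such as the Transformer and CNN-LSTM hybrid described above replace this fixed linear projection with learned temporal features, which is where the reported accuracy gains over such baselines come from.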