Predictions about the evolution of the Internet of Things (IoT) over the coming years are optimistic: the number of interconnected devices will continue to grow exponentially, as will the amount of data that they report.
Part of this data will be generated by wireless sensor nodes organized in Wireless Sensor Networks (WSNs), which transmit their measurements to Gateways (GWs). However, wireless sensor nodes are primarily designed to be low-cost, which constrains their memory and energy supplies and prevents them from streaming measured data at high rates.
Meanwhile, modern uses of WSNs rely on the knowledge acquired by sensor nodes to trigger reactions in other systems, and sensed data has become critical to avoiding economic losses and even the loss of lives. Therefore, it is important to optimize data transmissions in WSNs to support not only a higher number of wireless sensor nodes but also a greater diversity of applications.
Solutions based on data aggregation and data compression have reduced the total number of transmissions, but they have not solved the problem of transmitting measurements that convey no new knowledge to the WSNs' managers. These solutions do not exploit the fact that, fortunately, WSNs are asymmetric: unlike ordinary wireless sensor nodes, GWs have an Internet connection and no critical computational, power, or communication limitations. Hence, GWs can run algorithms and process amounts of data that wireless sensor nodes cannot, which permits them to predict the data that the nodes will measure.
This thesis extends a paradigm that exploits this asymmetry to the utmost: data that can be predicted does not have to be transmitted.
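This paradigm is commonly realized as a dual prediction scheme: the sensor node and the GW run identical predictors, and the node transmits only when its real measurement deviates from the shared prediction by more than a tolerated error. The sketch below is a minimal illustration of that idea; the class names, the `EPSILON` threshold, and the moving-average predictor are assumptions for illustration, not the algorithms used in the thesis.

```python
from collections import deque

EPSILON = 0.5  # maximum tolerated prediction error (application-defined)

class DualPredictor:
    """Identical predictor run independently on the sensor node and the GW."""
    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def predict(self):
        # Moving average over the last `window` accepted values.
        return sum(self.history) / len(self.history) if self.history else 0.0

    def update(self, value):
        self.history.append(value)

def sensor_step(predictor, measurement):
    """Return the value to transmit, or None if the GW can predict it."""
    predicted = predictor.predict()
    if abs(measurement - predicted) > EPSILON:
        predictor.update(measurement)   # real value accepted on both sides
        return measurement              # transmit: prediction failed
    predictor.update(predicted)         # GW will assume its own prediction
    return None                         # transmission suppressed
```

When `sensor_step` returns `None`, the GW simply records its own prediction and feeds it back into its predictor, keeping both histories in sync; every suppressed transmission still yields a value within `EPSILON` of the true measurement.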
First, we design a self-managing WSN architecture that adopts standardized communication to integrate WSNs into data analysis services in the cloud. To evaluate our idea in experiments, we implement the Data Analytics for Sensors Dashboard (DAS-Dashboard), which controls and optimizes a WSN via the Internet using specialized cloud services. Our experimental results show that the interconnection of remote components does not introduce significant overhead and that the architecture is feasible in practice.

Then, relying on this architecture, we design a mechanism to adjust the sensor nodes' sampling intervals according to the changes observed in the environment. The novelty of this mechanism lies in its use of a Reinforcement Learning (RL) technique called Q-Learning. Simulation and experimental results show that this mechanism provides the necessary means to build a smart WSN capable of self-optimization.

As hardware evolves, new wireless sensor nodes gain extended memory and computing capabilities, and more sophisticated prediction algorithms can run on the nodes themselves. In response, we analyze the benefits of incorporating current state-of-the-art prediction algorithms into WSNs. The results are promising: our simulations show that it is possible to eliminate WSN transmissions without reducing the quality of the measurements provided in several sensor network applications.
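The sampling-interval mechanism above can be sketched with a tabular Q-Learning loop: states pair the current interval with an observed activity level, and actions shorten, keep, or lengthen the interval. The state discretization, reward shape, and hyperparameters below are illustrative assumptions, not the design evaluated in the thesis.

```python
import random

ACTIONS = (-1, 0, 1)             # shorten, keep, or lengthen the interval
INTERVALS = [1, 5, 10, 30, 60]   # candidate sampling intervals (seconds)
ALPHA, GAMMA, EXPLORATION = 0.1, 0.9, 0.1

Q = {}  # Q[(state, action)] -> value; state = (interval index, activity)

def greedy_action(state):
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def choose_action(state):
    """Epsilon-greedy policy used by the mechanism at run time."""
    if random.random() < EXPLORATION:
        return random.choice(ACTIONS)
    return greedy_action(state)

def update(state, action, reward, next_state):
    """One Q-Learning update step."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def reward(activity, interval_idx):
    # Illustrative reward: prefer long intervals when the environment is
    # quiet (activity == 0) and short intervals when it changes quickly.
    if activity == 0:
        return interval_idx
    return (len(INTERVALS) - 1 - interval_idx) * activity

def train(steps=5000, activity=0, seed=0):
    """Train off-policy with a random behaviour policy; Q-Learning still
    learns the values of the greedy policy."""
    random.seed(seed)
    idx = 2
    for _ in range(steps):
        state = (idx, activity)
        a = random.choice(ACTIONS)
        idx = min(max(idx + a, 0), len(INTERVALS) - 1)
        update(state, a, reward(activity, idx), (idx, activity))
```

After training in a quiet environment, the learned values favor lengthening the interval, which is the self-optimizing behavior the mechanism targets: fewer samples when nothing changes, more when activity rises.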
For future generations of WSNs, we design a theoretical model that characterizes the number of transmissions in WSNs and can provide reliable estimates of the efficiency of prediction-based data reduction methods. The model will support the growth of WSNs both in the number of sensor nodes per network and in the quality of the information processed by their GWs. The prediction-based strategies investigated in this thesis can impact the present and the future of the IoT. Current WSNs can be optimized to avoid unnecessary transmissions with the help of the cloud. Moreover, coming generations of WSNs can rely on our transmission model to adopt prediction algorithms and maintain strict control over the quality of the reported data without being penalized by a higher number of sensor nodes, thereby contributing to the IoT's growth.
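To give a feel for what such a transmission model estimates, here is a deliberately simple back-of-the-envelope sketch, not the model developed in the thesis: if every node independently suppresses a transmission with probability equal to its predictor's hit rate (the fraction of measurements falling within the accepted error bound), the expected number of transmissions scales linearly with the number of nodes and rounds.

```python
def expected_transmissions(n_nodes, rounds, hit_rate):
    """Expected transmissions for `n_nodes` over `rounds` collection
    rounds, assuming each node independently suppresses a transmission
    with probability `hit_rate` (an illustrative independence model)."""
    assert 0.0 <= hit_rate <= 1.0
    return rounds * n_nodes * (1.0 - hit_rate)
```

For example, 100 nodes reporting over 1000 rounds with a 75% predictor hit rate would be expected to send 25,000 messages instead of 100,000, a fourfold reduction; a realistic model must additionally capture correlations between nodes and time-varying hit rates.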
Link to the thesis: http://gmdias.com/MartinsDias_Thesis.pdf