IoT-Based Low-Cost Posture and Bluetooth Controlled Robot for Disabled and Virus-Affected People

Abstract—IoT-based robots can help people to a great extent. This work presents a low-cost posture-recognizing robot that can detect posture signs from a disabled or virus-affected person and move accordingly. The robot captures images with the Raspberry Pi camera and processes them to identify the posture with our designed algorithm. In addition, it can take instructions via Bluetooth from smartphone apps. The robot can move 360 degrees depending on the posture or Bluetooth input. This system can assist disabled people who can move only a few body parts. Moreover, it can assist virus-affected persons, as they can instruct the robot without touching it. Finally, the robot can collect data from a distant place and send it to a cloud server without spreading the virus.


I. INTRODUCTION
The Fourth Industrial Revolution (Industry 4.0), a term coined by Klaus Schwab, refers to the automation of industrial and manufacturing activities using emerging technologies. Robotics and the Internet of Things (IoT) are key elements among these modern technologies. In recent times, robotics has become part and parcel of modern science. Robots work in many sectors, including dangerous, monotonous and restricted areas where there is a huge risk to human lives; they are a gift of modern artificial intelligence. Different types of robots have been implemented for different tasks. IoT is an ecosystem in which things have a unique identifier, are embedded with sensors and actuators, and can send or receive data over the internet. Individually or fused together, IoT and robotics are being used pervasively in many sectors, for example, smart homes, agriculture (paddy cultivation [1]), smart campuses, smart cities, environmental weather monitoring, industry automation (poultry farm automation [2]), hotel management [3], health care (patient management [4] as well as taking care of patients by providing nursing facilities [5]), remote sensing and monitoring, and virus-affected and disabled people management [6]. However, the interaction between humans and robots is still a challenging task. Apart from verbal communication, humans use non-verbal signs like postures and gestures to communicate with each other. Non-verbal communication is an instinctive way of communicating, done by facial expression, gestures/postures, gait, or head nods. It is noteworthy that there is a slight difference between gestures and postures: for example, a hand gesture refers to a dynamic state of hand movements, whereas a hand posture refers to a static state of the hand.
In this work, we propose an IoT-based system to command a robot via Bluetooth through a mobile phone and via hand postures, which is especially beneficial for physically challenged as well as virus-affected people. In addition, the system measures the body temperature of the person through a contact-less IR temperature sensor, and another sensor measures the temperature and humidity of the room. The collected data are stored on a secure ThingSpeak cloud server. Here, hand posture is used as an input to the robot, and according to the input the robot is designed to move with 360 degrees of movement: left, right, forward, backward, or stop. The robot captures the hand posture with a Pi camera, converts the posture into commands and acts accordingly.
The objectives of this work are as follows: 1) People can call the robot with hand postures or Bluetooth commands, giving the robot 360 degrees of movement. Because the robot is controlled in two ways, namely hand postures captured by a Pi camera and mobile Bluetooth, the system is reliable: if one type of control fails, the other may still work. 2) The robot may work in virus-affected areas with virus-affected people without harming healthy people, eventually helping to stop community transmission. 3) The robot can collect data and send them to a cloud server for future monitoring.
The rest of the paper is organized as follows: Section II discusses the related works, Section III demonstrates the system model, algorithms, circuit diagrams and flowcharts of the work, Section IV presents the experimental results, comparisons of our work with other works, and limitations, and finally Section V provides the concluding remarks and future scope of this work.

II. LITERATURE REVIEW
Researchers have explored several approaches to detect hand postures. In the literature, recognition systems are designed based on numerous methods such as deep learning, 3D models, depth, skeleton, motion, appearance and color [7]. Boyali et al. [8] proposed a recognition system for six different hand gestures and postures (1-Fist, 2-Hand Relax, 3-Fingers Spread, 4-Wave In, 5-Wave Out, 6-Double Tap) with an accuracy of 97%, using the electromyography (EMG) signal acquired by an eight-sensor Myo armband attached to a person's arm. Sparse subspace clustering (SSC) and collaborative representation based classification (CRC) are used to train and recognize patterns. Nguyen et al. [9] demonstrated a hand posture recognition framework consisting of hand detection with the traditional Viola-Jones detector using Haar-like features and cascaded AdaBoost, low-level feature extraction on the hand region through image conversion and pixel-based extraction, a combination of three kernel descriptors (KDES), namely gradient, pixel value and texture KDES, for hand representation on the HSV, RGB and Lab color channels, and finally a multi-class support vector machine (SVM) for classification. They reported an average accuracy of 97.3% on the NUS-2 dataset and 85% on their own dataset within 1 m to 3 m distances. IoT-based systems and robots are helping mankind in remote monitoring [10], remote sensing [11], disabled patient management [12] and virus-affected people management [13], at all times [14] and in a secure way [15]. Embedded systems and IoT are helping hospitals [16], agricultural systems [17], energy generation [18], electronic voting [19] and biometric security systems [20]. Akhund et al. [6] reported a 97.9% success rate in recognizing hand gestures of disabled as well as virus-affected people to operate a robot.
They developed a robotic agent consisting of an MPU6050 accelerometer-gyroscope sensor, an Arduino Nano, a 433 MHz radio wave receiver and an L293D motor driver IC; the sensor is responsible for tracking the movement of the hand. Alam [24] implemented two traditional machine learning algorithms, namely SVM with a directed acyclic graph and K-means clustering, for the classification of hand postures received through a Kinect v2. Four gestures (forward, right, left and stop) are used to operate a Bioloid Premium robot consisting of a 74LS241N IC, an Arduino Mega and a Kinect v2. They tested hand gesture recognition from distances of 2, 3 and 4 meters with body slopes of 45, 0 and −45 degrees, and reported 95.15% accuracy in 10 ms using SVM and 77.42% accuracy in 4.45 ms using K-means clustering. Meghana et al. [25] designed a system to move a robot forward, backward, right and left by identifying hand gestures as well as voice, especially for people who are unable to see or hear; the hardware used in the system comprises an MPU6050, an L293D driver, an LCD, an HC-05 Bluetooth module and an Arduino Uno. Abed et al. [26] experimented with a vision-based hand gesture recognition system that identifies five different hand gestures to move a mobile robot forward, backward, left and right, and to stop. A Raspberry Pi camera module is used to track hand gestures, and the hardware that processes and issues commands to the robot comprises a Raspberry Pi 3 Model B, an L298N motor driver, a power supply, a camera board, a 5-inch 800x480 resistive HD touch screen and a Rover 5 two-wheel drive platform. They used the OpenCV library in Python to perform the task, and the system showed 98% recognition accuracy. Su designed a recognition system for 10 types of hand gestures based on depth vision and surface EMG signals.
Here, a Leap Motion controller collects depth vision data, which is finally labeled by hierarchical K-means clustering. In addition, a Myo armband is used to receive and transmit EMG signals. Preprocessing is done with a band-stop filter and a band-pass filter, and after feature extraction a multi-class SVM is used to classify the signal. Adithya et al. [27] employed a deep CNN with the rectified linear unit (ReLU) activation function to automatically recognize hand gestures. They trained and tested the model on two datasets, namely the National University of Singapore (NUS) dataset and the American fingerspelling dataset. The model showed 94.7 ± 0.80% accuracy, 94.96 ± 1.20% precision, 94.85 ± 1.30% recall and 94.26 ± 1.70% F1-score on the NUS dataset, and 99.96 ± 0.04% accuracy, precision, recall and F1-score on the American fingerspelling dataset. Chansri et al. [28] presented a skin color technique using an RGB camera along with a Raspberry Pi to recognize hand gestures; they experimented on an American Sign Language dataset with 12 gestures and achieved 90.83% accuracy. Mondal et al. [29] developed a temperature detector module for measuring the body temperature of Covid-19 affected patients with 98% accuracy. The hardware consists of a LoLin NodeMCU V3 with an ESP8266 module, a DS18B20 temperature sensor probe, a passive buzzer, an LED and a flat vibrator motor, and the programming is done in the Arduino IDE. The system raises an alarm with the buzzer when the temperature exceeds 100.4°F (38°C), and the data are stored on the ThingSpeak cloud server. IoT-based systems are also helping people build cheap irrigation systems [30], seamless microservice execution [31], big data processing [32] and blockchain technology [33], and IoT is helping lung cancer diagnosis [34]. Fog computing plays a vital role in augmenting resource utilization [35].
Cloud computing helps to analyze people's sentiment [36], and dynamic task scheduling algorithms can make it more efficient [37]. Therefore, we should make proper use of IoT and robotics in medical science.

III. METHODOLOGY

A. Requirements
To implement the prototype we used a Raspberry Pi 3 B+, a 5 MP Pi camera and the Python 3 programming language for the image processing part. The robot motion control part consists of a robotic chassis, a breadboard, DC gear motors, an L298N motor driver, a battery, wires and a battery charger. For the Bluetooth control and data-sending part, we used an Arduino Uno microcontroller, an HC-05 Bluetooth module, a DHT11 sensor, an MLX90614 contact-less IR temperature sensor, a NodeMCU ESP8266, Wi-Fi, the ThingSpeak cloud server and the C++ programming language.

B. System Model
The diagram of the system methodology for the robot movement is depicted in Figure 1. For recognizing the posture with image processing we used a Raspberry Pi and a Pi camera. The robot can also be controlled via Bluetooth. The Bluetooth module could have been connected to the Raspberry Pi as well, but we used an Arduino Uno to make the system more durable and simple: if the Raspberry Pi fails to detect a posture at any time, the Arduino can still operate the robot by following instructions from the mobile phone.

1) Posture Detection Part:
Posture detection is performed with image processing in Python on the Raspberry Pi 3 B+. The PyCharm Community IDE was installed on the Raspberry Pi. First, two libraries, NumPy and OpenCV, must be installed; before installing them we upgraded pip from version 2 to version 3. Video capture then runs inside a while (cap.isOpened()) loop. Each frame is cropped to the correct size and converted to grayscale, and a window is created to map the picture. Our image processing algorithm then searches the window for motion; if motion is found, the motion pixels are detected. We also plot the detected motion in the window map to simulate the detection on a computer screen before deploying it on the Raspberry Pi and the robot. The motion data and the detected posture are then converted into a movement command that gives the robot 360 degrees of movement. The command is written to the GPIO ports of the Raspberry Pi, which signal the motor driver, and the motor driver drives the two motors of the robot. The circuit diagram of posture detection and robot control with the Raspberry Pi is shown in Figure 3.
2) Bluetooth Sensing and Controlling Part: Movement based on Bluetooth commands from a mobile phone is the other part of this project. We used the Arduino IDE to write the C++ program for the Arduino Uno microcontroller. The Arduino receives the signal sent from a mobile phone through the HC-05 Bluetooth module. Android or iOS phones can send data via any Bluetooth data-sender app (we used the Bluetooth RC Controller app, available in the App Store and Play Store) to the HC-05 after pairing. The Arduino reads that data over serial communication (Tx and Rx). After the data is received, the conditions for 360-degree movement are evaluated, and the signals for forward, backward, left, right and stop are sent to the L298N motor driver, which drives the two DC motors of the robot by following the commands.
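The posture-to-movement step described above can be sketched in Python as follows. This is a minimal illustration, not the deployed code: the finger-count-to-command table and the GPIO pin numbers (LEFT_FWD, RIGHT_FWD, etc.) are assumptions for illustration only. On the robot, the finger count would come from the OpenCV pipeline and the levels would be written to the pins with RPi.GPIO.

```python
# Illustrative sketch of mapping a detected posture to L298N driver signals.
# Pin numbers and the finger-count-to-command table are assumptions.

# Hypothetical BCM pin numbers for the two L298N input pairs.
LEFT_FWD, LEFT_BWD = 17, 27
RIGHT_FWD, RIGHT_BWD = 22, 23

# Assumed mapping from number of raised fingers to a movement command.
COMMANDS = {
    1: "forward",
    2: "backward",
    3: "left",
    4: "right",
    5: "stop",
}

# GPIO levels (pin -> high/low) for each movement command of a
# two-motor differential drive.
PIN_STATES = {
    "forward":  {LEFT_FWD: 1, LEFT_BWD: 0, RIGHT_FWD: 1, RIGHT_BWD: 0},
    "backward": {LEFT_FWD: 0, LEFT_BWD: 1, RIGHT_FWD: 0, RIGHT_BWD: 1},
    "left":     {LEFT_FWD: 0, LEFT_BWD: 1, RIGHT_FWD: 1, RIGHT_BWD: 0},
    "right":    {LEFT_FWD: 1, LEFT_BWD: 0, RIGHT_FWD: 0, RIGHT_BWD: 1},
    "stop":     {LEFT_FWD: 0, LEFT_BWD: 0, RIGHT_FWD: 0, RIGHT_BWD: 0},
}

def command_for_fingers(finger_count):
    """Map a detected finger count (1-5) to a movement command.
    Anything outside the table defaults to a safe stop."""
    return COMMANDS.get(finger_count, "stop")

def pin_levels(command):
    """Return the GPIO levels to write for a movement command.
    On the robot these would be written with RPi.GPIO.output()."""
    return PIN_STATES[command]
```

Under this assumed table, command_for_fingers(3) yields "left", and pin_levels("left") gives the four driver-input levels to write; an unrecognized count falls back to "stop" so the robot never moves on a misdetection.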
The algorithm for movement control with Bluetooth is presented in Algorithm 2.
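As a rough Python analogue of the Bluetooth movement control (the robot itself runs this logic in C++ on the Arduino Uno, reading bytes from the HC-05 over serial): the single-letter protocol below ('F', 'B', 'L', 'R', 'S') is an assumption based on common Bluetooth RC controller apps, not a confirmed detail of the app we used.

```python
# Sketch of the Bluetooth command dispatch. The character protocol is an
# assumption; on the robot, each byte arrives over the HC-05 serial link.

BLUETOOTH_COMMANDS = {
    "F": "forward",
    "B": "backward",
    "L": "left",
    "R": "right",
    "S": "stop",
}

def handle_bluetooth_byte(byte, current_command="stop"):
    """Translate one received character into a movement command.
    Unknown bytes leave the current command unchanged, so line noise
    does not make the robot jerk."""
    return BLUETOOTH_COMMANDS.get(byte.upper(), current_command)
```

The resulting command string would then be turned into motor-driver signals exactly as in the posture path, so both control modes share one movement stage.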
The algorithm for data sensing and sending with the NodeMCU is presented in Algorithm 3.
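The data-sending step runs in C++ on the NodeMCU; a hedged Python sketch of the same idea is shown below. The update endpoint is ThingSpeak's documented REST API, but the write API key is a placeholder and the assignment of sensors to field1-field3 is an assumption for illustration.

```python
# Sketch of pushing sensed data to a ThingSpeak channel via its REST
# update API. The API key is a placeholder; the field mapping is assumed.
from urllib.parse import urlencode
# from urllib.request import urlopen  # uncomment on a connected device

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR_WRITE_API_KEY"  # placeholder, not a real key

def build_update_url(body_temp_c, room_temp_c, humidity_pct):
    """Build the ThingSpeak update request for three sensor readings."""
    params = urlencode({
        "api_key": WRITE_API_KEY,
        "field1": body_temp_c,   # MLX90614 contact-less body temperature
        "field2": room_temp_c,   # DHT11 room temperature
        "field3": humidity_pct,  # DHT11 relative humidity
    })
    return f"{THINGSPEAK_URL}?{params}"

def send_update(body_temp_c, room_temp_c, humidity_pct):
    """On the device this would issue the HTTP request, e.g.
    urlopen(build_update_url(...), timeout=10); here we only
    return the URL so the sketch stays hardware-free."""
    return build_update_url(body_temp_c, room_temp_c, humidity_pct)
```

ThingSpeak accepts one such update per channel every few seconds on the free tier, which is ample for periodic temperature and humidity logging.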
IV. RESULTS
Our system can detect two fingers successfully. Figure 8 shows the system detecting three fingers successfully, and Figure 9 shows it detecting four fingers successfully. Finally, Figure 10 shows that our system can accurately detect five fingers from a human hand.
The system shows the collected data in the ThingSpeak server (location: https://thingspeak.com/channels/739817). Figure 11 shows the collected data visualized in the cloud server.

A. Prototype Output
The final view of the prototype robot is shown in Figures 12, 13 and 14.

B. Obtained Features and Results
We tested our system for posture detection, Bluetooth control and data transmission to the cloud server, and obtained the desired results. The features obtained from the prototype are as follows: 1) The robot is able to recognize the posture and fingers of a patient and move towards the patient, with a 95% success rate in 500 tests. 2) The robot can move 360 degrees by following posture instructions, with a 95% success rate in 500 tests. 3) The robot also works using the Bluetooth sensor, with a 98% success rate in 500 tests. 4) It can also move on instructions from mobile phones via Bluetooth, with a 97% success rate in 500 tests. 5) The robot successfully collected temperature and humidity data from people and the environment and showed them on an OLED display, with a 96% success rate in 500 tests. 6) The robot successfully sent the collected data to the cloud database, with a 94% success rate in 500 tests. 7) The system is cost-effective: the prototype costs less than 100 USD, whereas some existing systems may cost more than 500 USD without having all of the features obtained in this work.

C. Limitations
The system also has some limitations, which can be mitigated in future development: 1) The system is not waterproof; water may damage it fully or partially. 2) To keep the system low-cost we used inexpensive sensors, which may sometimes produce incorrect results. 3) The robot should be maintained carefully; otherwise it can break.

D. Discussion
The main objective of this paper is to operate a robot using hand gestures or a mobile phone through Bluetooth, and then to collect data and store them on a cloud server. Hand gestures are captured through an embedded Pi camera, processed to be recognized correctly, and responded to accordingly. The whole process takes 1500 milliseconds on average with 97% accuracy. Alam et al. [21] designed a hand gesture-controlled robot using an MPU6050 gyroscope module and an Arduino Nano with an accuracy rate of 93.8%. Maharani et al. [24] performed 1080 tests in total with 6 different people using four gestures (forward, right, left, and stop), from three distances (2 m, 3 m, 4 m), and at three slope positions (45°, 0°, −45°). According to them, SVM showed superior results to K-means clustering, with 95.15% recognition accuracy in 10 ms. Hand gestures have also been used to control home appliances [22] with an accuracy rate of 87% at a 3-meter distance; that work utilized a Kinect sensor to capture three hand states (open, close, and lasso), with processing performed on a Raspberry Pi. Meanwhile, Xing et al. [38] achieved 83.23% accuracy by applying a slightly modified CNN to surface electromyographic signals for hand gesture recognition. Hence, it is evident that our proposed work provides superior results compared to the other existing methods, as summarized in Table I.
Our robot also contains a non-contact MLX90614 temperature sensor which can measure a Covid-19 affected patient's body temperature at a distance and send it to the cloud without any device being worn on the body, in contrast to the contact-based work of [29].

TABLE I: Recognition accuracy comparison
Nazzi et al. [38]: 95.01%
Maharani et al. [24]: 95.15%
Our proposed model: 97%

The above discussion shows the advantages of this work.

V. CONCLUSION
In the modern era, robots are engaged in many sectors. This work implemented an IoT-based posture-recognizing remote sensing robot. Hospital patients can call the robot with a hand posture or with a smartphone via Bluetooth; the robot then goes to the patient by following the posture or Bluetooth command, collects data with its sensors and sends them to a cloud database. Therefore, any disabled or virus-affected person can control it and be monitored remotely with this system without affecting any healthy people. We achieved around a 95-97% success rate across all the features. In future work, machine learning features can be added to this robot to predict patient and environmental conditions.