Internet of ‘Thing’
Blog by Geoff Routledge
In the IoT space we often see the generic IoT architecture described as multiple layers mapping the function of various components all the way from sensors and actuators, through the various connectivity and network layers and up into the cloud where use can be made of the data. Decisions can be propagated back down the stack to effect actions on the “Things”.
Various interested parties interpret this slightly differently, and their interpretation will often be influenced by their position in the market; it is natural that telecoms providers would see it from the connectivity angle first and that sensor manufacturers would concentrate on the embedded hardware. Of course, the fact is that the IoT phenomenon relies on all of these areas. Advances in all areas of technology and communications, plus the plummeting price of this technology, have made it not just possible but inevitable that practically every electronic device will eventually be connected in some way. You know a technology is insidious when we no longer bat an eyelid at the idea that a lightbulb contains a web server or that a key fob connects to the Internet.
Anyway, returning to the stack, a very simplified version of it might be seen as follows:
There’s a wealth of information on the IoT reference model in various places but to summarise, we visualise the system as being composed of a number of layers.
At the lowest level we have the physical devices and controller devices (where the thing being controlled does not connect directly). These are the “things”. Things may be as simple as a single sensor or could be as complex as a set-top box or a vehicle.
The next layer is connectivity. There are a number of protocols that come into play here and precise technology will depend on a number of factors including availability of wired connections, availability of wireless networks and power requirements. Although Ethernet and Wi-Fi are often used, other protocols such as Bluetooth, LoRa or Zigbee may also come into play depending on the circumstances and requirements of the device. Other variations may also include the use of mesh networks where the data may flow through one or more devices on its way up the stack.
The next layer is the edge or fog computing layer. It is here that data is collected from sensors and preliminary processing is applied before the data is forwarded upwards. At this layer, we might perform functions such as error checking, format conversion and encryption in order to sanitise and protect the raw data received from the devices.
Layer 4 deals with accumulation and storage. Typically, we’ll see a change here from data in motion, i.e. travelling upward from devices, to data at rest, i.e. stored and made available to other processes.
Layer 5 – data abstraction – deals with the presentation of the collected data to functions that might make use of it. This layer can also aggregate data from multiple sources and present a common interface.
The application layer is where we make use of the collected data in applications. These might take many forms; they could run as a traditional application or they might be web-based or run on a device such as a smartphone or tablet. Applications might also send information back down the stack in the opposite direction. For instance, a heating control might send instructions to heat different zones or change the temperature.
The final layer, collaboration and processes, is more people-focused. It is here that people and processes can make use of the services provided and effect change. It is at this layer that we might also see collaboration between people, business processes and the “things” through the applications provided in the lower layers.
The above is far from comprehensive but is a summarisation of the generally accepted layered model of the IoT stack.
I recently bought a classic car. This may seem like a bit of a non-sequitur but bear with me. Many years ago, when I bought my first car, I got myself a Ford Granada MK2, 2.3GL. I had a lot of fun with this but of course always wanted the better specced model. So now, many years later and after a considerable amount of arm-twisting within the family, I am now the proud owner of a 1983 Ford Granada 2.8 Ghia X and have simultaneously lost all remaining garage space.
Having bought the car, I immediately became concerned for the car’s wellbeing and started to worry about things such as temperature, humidity and battery voltage. Being of a somewhat geeky nature, I considered the options and concluded that what the car needed was a set of sensors connected through Wi-Fi to a server application making the information available over the Internet – an Internet Thing. I hear that other people in the classic car world conclude that a car cover is a good idea but I guess we all think differently.
Thinking this through, it became apparent that what was needed was the following:
- A set of sensors;
- A means of polling the sensors and aggregating data;
- A means of transmitting the data to a server, in this case over Wi-Fi (since the garage isn’t yet wired for Ethernet);
- Something to collect the data;
- Something to serve and present the data; and
- Client(s) to view the data.
“Design and build” would be too strong a phrase for this; “cobbled together”, “miraculously somehow working” or “an abomination” would better describe the process through which it came into being. Still, this was an experiment put together in bits of spare time, and it still yielded some interesting results.
A bit of Googling revealed that a popular, inexpensive temperature and humidity sensor is the DHT22. It uses a proprietary single-wire protocol to transmit its data to a microcontroller and thus only requires VCC, GND and data connections. The data line is bidirectional: the microcontroller initiates a simple handshake and then switches the line to an input to allow the sensor to transmit its data.
The sensor reports temperature and relative humidity as two 16-bit numbers, followed by an 8-bit checksum.
Gathering the Data
There are of course many ways and many pieces of hardware that could gather the data. However, a perennially popular option is the Arduino range of boards. These have the advantage of a second on-board microcontroller handling the connection to a USB port, which means that the main microcontroller, by virtue of a bootloader, can be programmed directly from a laptop with a suitable IDE simply by plugging into a USB port. Even better, during development the same USB port can be used as a serial connection, allowing commands to be sent to the application on the board and debug information to be sent back over the same link. The board is programmed in a dialect of C++ and a number of IDEs and frameworks are available, including the Arduino IDE, Visual Studio (with plugins), Eclipse (with plugins), PlatformIO and Atom.
In our application, the board has a very simple task:
- Initialise the sensors;
- Poll the sensors periodically;
- Listen for requests for data and respond with the latest readings.
Initialisation is a fairly simple affair; the sensors each need to be modelled in the code and an object created representing them. This object can then manage the state of the sensor through a state machine which also allows the sensor to be queried concurrently with other tasks.
Polling can be done on a timed basis. Alternatively, a poll can be initiated through a command.
Requests and responses are carried out over a serial connection. With this board, there is an inbuilt UART and good library support for serial, so it becomes fairly straightforward to listen for commands over the serial connection and reply with the response over the same link. Another possibility would be the Serial Peripheral Interface (SPI), a protocol specifically designed for inter-chip communication, although this is not necessary here.
Transmitting the Data
Of course, reading the sensors is only the start of the story; we now need to gather that data somewhere and do something with it. Enter the ESP8266 chip in the form of an ESP01 module. The ESP8266 is a very inexpensive, very clever Wi-Fi transceiver chip. It has all the RF elements on board and features a moderately powerful processor core to drive things. The ESP01 module takes the chip, adds an antenna and flash memory, and provides a ready-made solution for connecting to Wi-Fi. The module can be treated as a modem and controlled through its serial interface using the AT command set. Alternatively, it can be programmed directly using a framework built from the Arduino code with additions for the Wi-Fi functionality, with the advantage that all the IDEs mentioned above are available for development.
My initial take on this was to have the code on the Arduino module drive everything and simply use the ESP01 as a modem over the serial connection. However, as it turns out, the ESP01 actually has a more powerful processor and more memory and so the eventual solution turned this on its head and had the Arduino module act as a slave to the ESP01.
Aggregating the Data
As with most IoT-type solutions, a means is needed to aggregate and store the information. Frequently, this function resides in the cloud and there’s nothing that would prevent that approach from working here. However, since I already have a suitable server on the home network, this was used instead. The aggregation function is pretty basic and uses a Java application with an embedded Derby database for the storage. The interface from the sensors is through HTTP POST requests, with this element handled by an embedded Jetty server. Obviously, this would not scale out to production use, but the use of embedded components makes deployment of this server a very simple affair.
Presenting the Data
The intention was to have the information readily available on a mobile device. One fairly straightforward way to achieve this is to use HTML5/CSS to create a web interface, which can work equally well on desktop and mobile devices. Later versions could use a responsive approach to render the display differently for these devices, but in the initial incarnation the approach is simpler.
Mapping on to the Model
Having worked through these parts, we now see that these naturally fall into the classic model of IoT architectures. The diagram below shows the rough mapping. Some elements span the layers as their functions overlap. For instance, the ESP01 module handles wireless connectivity to the hub but also has logic that can collect the data from the sensors and package this up into a summary presented as a JSON message. Similarly, the server hosts the application and database thus managing the data accumulation and abstraction parts; another part of the same application presents an HTML user interface (although, of course, this could easily be separated into a separate application).
Where to Next?
As it stands, for the prototype, there is nothing in the way of provisioning; the Wi-Fi credentials are hard-coded into the ESP modules. This would obviously be unsatisfactory for anything other than a prototype.
One possible approach to addressing this would be to take advantage of the fact that the ESP01 module can act as an access point. In this mode, a device such as a laptop could connect and receive an IP address from the Wi-Fi module using DHCP. Having done this, the ESP01 module could present a basic web page for configuring connection details following a reboot. Some password management and a default password would add some very basic, although likely inadequate, security.
Another possibility for provisioning is to use the Wi-Fi Protected Setup protocol offered by some routers. The libraries for the ESP8266 support this mode of operation and thus it should be possible to implement a scheme whereby the sensor can be paired with a Wi-Fi hub.
Both these approaches look promising and I’ll report back if I get a chance to try them out in a later blog post.
In the prototype, there is no security. Going forward, it is essential that some sort of security be added. We increasingly hear of instances where IoT devices are compromised and then form an attack vector into other systems or are used as part of a DDoS attack.
A minimum approach would be to encrypt communications using a shared secret key. An extension of this would be to implement full TLS with certificates although this can form quite an overhead for the often limited resources of IoT devices.
One problem with the current setup is that it is somewhat power-hungry for an IoT device. The two supply voltages (5 V and 3.3 V) mean that power is wasted across multiple regulators, and the two modules and always-on design mean the unit is constantly drawing current. Additionally, the radio on the ESP chip draws a fairly hefty couple of hundred milliamps when running, so battery operation would be fairly short-lived.
One possible solution for a future version would be to use a different incarnation of the ESP module that brings more of the ESP chip’s IO lines out. This way, the ESP module could do everything, including interfacing to the sensors. Power could also be reduced by using the ESP’s sleep feature allowing the chip to be put to sleep and awoken sometime later by its own timer. This way, the module could run off batteries and still have a reasonable operational life sending data at intervals of around 5 minutes.
One interesting observation is how closely this small-scale project mapped onto the generic layered model for IoT. This was not a conscious decision; it’s just that the most logical way of achieving the objective happened to be a fairly close match to the model.
Another thing that this exercise made clear is the number of elements required for a typical IoT system, ranging all the way from the hardware through to servers, the cloud and mobile devices for eventual data display and control. The disciplines required to implement a solution successfully are many more than in a typical IT deployment. In a commercial environment, this would translate to having many teams, each concentrating on a different set of layers, which places a strong emphasis on coordinating those teams during integration of the project.
Want to know more?
Call +44 (0)2392 987897 or send an email to arrange an informal discussion.
BCi Digital is a highly experienced and flexible systems integrator with worldwide deployment experience across a number of key technologies and operators. The company is owned and managed by a dedicated core team of industry specialists drawn from the broadcast, broadband and telecommunications industries.
Through a thorough programme of industry research, BCi Digital has an in-depth understanding of new and emerging technology, and its expertise can be applied to deliver innovative solutions to a client’s requirements.