Over the last few years, we’ve really enjoyed working with a number of Internet of Things (IoT) partners, providing a seasoned software product development team that works alongside the hardware and mechatronic engineers building a physical product. The combination has been a long-term partnership, where we each have a clear focus and can work together to bring a complete solution to market. Our motto when building these partnerships is, “Plan big, start small.”
To do this we use an uncomplicated, scalable system architecture that allows for easy growth but won’t break the bank in those critical first phases. In this blog post, I’ll explain how we typically approach an IoT opportunity, using the example of a fictional company, “CafeNXT”.
They have had the great idea to build a fully automated barista in a box, called CoffeeNXT, complete with friendly banter and coffee card functionality.
While CafeNXT are predicting their combination of automation and design will take the market by storm, for now they’re just dog-fooding in their own building.
The things in IoT come in all shapes and sizes, from small CO2 sensors to screens and robotic coffee machines.
We’ve found that in business applications the system is likely to have multiple things, all working together to provide a richer set of data and interactions to the end user; this is made easier if you have control of the fit-out or installation of the things. In these cases you can have things that are collections of other things, which are aggregated in the back-end or gathered within another centralised device.
CafeNXT have decided that they can leverage the CoffeeNXT by expanding the offering with a screen and a building health sensor. These things would be separate from the CoffeeNXT, but would ideally form a “virtual cafe” that they can monitor as a whole.
Things come in three flavours based on their principal purpose (a quick code sketch of this taxonomy follows the list):
1) things that gather data, such as the building health sensor
2) things that display data, such as the screen
3) things that are designed to have more complex interactions, such as the CoffeeNXT.
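As a rough sketch (not a prescribed schema), here is how these three flavours of thing could be modelled, and how separate things can be grouped into CafeNXT’s “virtual cafe”; the class names and thing IDs are purely illustrative.

```python
# Illustrative model of the three flavours of thing and a "virtual cafe" grouping.
from dataclasses import dataclass
from enum import Enum, auto

class ThingFlavour(Enum):
    SENSOR = auto()       # gathers data, e.g. the building health sensor
    DISPLAY = auto()      # displays data, e.g. the screen
    INTERACTIVE = auto()  # richer interactions, e.g. the CoffeeNXT

@dataclass
class Thing:
    thing_id: str
    flavour: ThingFlavour

# CafeNXT's "virtual cafe": separate things monitored as a whole.
virtual_cafe = [
    Thing("building-sensor-01", ThingFlavour.SENSOR),
    Thing("kitchen-screen-01", ThingFlavour.DISPLAY),
    Thing("coffeenxt-01", ThingFlavour.INTERACTIVE),
]
```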
All these things have to be able to communicate back to base, whether to send the data they have gathered, to learn what to display, or to report how they have interacted.
Two important parts of this communication are being able to configure and provision a device, and being able to get remote diagnostics for debugging. Once you have deployed a fleet of things, you don’t want to have to physically visit each device to roll out configuration changes or to increase your observability of what’s going on inside a thing.
Once you have the things, they need to be able to communicate with the larger system. We call this block the “thing API”; its purpose is to regulate the communication between the things and the larger system.
Just as a simple programmable board like a Raspberry Pi is a good choice when you are prototyping a thing, we have found that a simple HTTP-based API is best when starting off. HTTP is a well understood and widely implemented protocol that will transit most networks with relative ease, and its request/response model makes it easy to move data between the thing and the API.
If you find after the initial development that there is a need for a different protocol, it is easier to move from HTTP to another protocol than from one niche protocol to another.
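To make the starting point concrete, here is a minimal sketch of such an HTTP check-in endpoint in Python, using Flask for brevity. The route shape, the in-memory DESIRED_CONFIG store and the send_diagnostics flag are assumptions made for this example, not a prescribed design, but they show how a single request/response round trip can carry telemetry up and carry configuration and diagnostics requests back down.

```python
# A minimal sketch of a thing API check-in endpoint (illustrative only).
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical per-thing configuration that the rules engine would normally manage.
DESIRED_CONFIG = {
    "building-sensor-01": {"config_version": 3, "sample_interval_s": 60},
}

@app.route("/things/<thing_id>/checkin", methods=["POST"])
def checkin(thing_id):
    reading = request.get_json(force=True)  # e.g. {"co2_ppm": 612, "config_version": 2}
    # In a fuller system this would be handed straight to the rules engine.
    print(f"{datetime.now(timezone.utc).isoformat()} {thing_id}: {reading}")

    config = DESIRED_CONFIG.get(thing_id, {"config_version": 1})
    return jsonify({
        "config": config,  # the thing applies this if its version is out of date
        # Ask for extra diagnostics when the thing isn't on the expected config.
        "send_diagnostics": reading.get("config_version") != config["config_version"],
    })

if __name__ == "__main__":
    app.run(port=8080)
```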
To enable continuous testing we suggest creating test things: programs that can replicate standard, known patterns of connecting to the thing API, so that you can automate the states a thing will get into without, for example, having to drop the CO2 in the room to dangerous levels.
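For example, a test thing for the building health sensor could be a small script that replays a known CO2 ramp against the thing API sketched above; the endpoint URL, the ramp values and the one-second interval are all assumptions made for the example.

```python
# A sketch of a "test thing": replay a known CO2 ramp against the thing API above.
import time

import requests

API = "http://localhost:8080/things/building-sensor-01/checkin"

def replay_co2_ramp():
    # Ramp from normal office levels up past a hypothetical alerting threshold,
    # so automated tests can exercise that state without changing the real air.
    for co2_ppm in range(600, 2001, 200):
        response = requests.post(API, json={"co2_ppm": co2_ppm, "config_version": 3})
        print(co2_ppm, response.json())
        time.sleep(1)  # far faster than a real sensor would report

if __name__ == "__main__":
    replay_co2_ramp()
```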
From the thing API we flow to the “rules engine”. The rules engine responds to the data from the things and tells the API what configuration and controls to send back. Implementing a rules engine allows for expansion later, as it lets you slot in new interactions when you find you need them. The simplest of these interactions is to pass all the data across to the “data store” for keeping, but much more complicated interactions can be conceived: for example, when the coffee machine sends a message that it has run out of coffee, send an order to the procurement system to order more and monitor its tracking on the screen in the kitchen.
At the heart of the rules engine is a service bus that allows you to process data and requests asynchronously and scale well. A service bus separates the request from its actual processing, meaning that you can scale the amount of processing to match demand as required.
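As a minimal sketch of that idea, the following Python uses a standard library queue to stand in for a real service bus, with the “archive everything” rule and a hypothetical out-of-coffee rule as the two interactions; the message shape and the rule list are illustrative only.

```python
# Sketch of a rules engine worker consuming messages from a queue (stand-in for a bus).
import queue
import threading

bus = queue.Queue()  # a production system would use a managed service bus / broker

def store_telemetry(message):
    # The simplest interaction: pass everything across to the data store.
    print("archiving:", message)

def order_coffee(message):
    # A more complicated interaction: react to the machine running out of coffee.
    if message.get("coffee_remaining_g") == 0:
        print("raising procurement order for", message["thing_id"])

RULES = [store_telemetry, order_coffee]

def worker():
    while True:
        message = bus.get()  # processing is decoupled from the original request
        for rule in RULES:
            rule(message)
        bus.task_done()

threading.Thread(target=worker, daemon=True).start()

# The thing API would enqueue messages like this one.
bus.put({"thing_id": "coffeenxt-01", "coffee_remaining_g": 0})
bus.join()  # wait until the worker has drained the queue
```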
As mentioned above, the “data store” actually comprises multiple parts:
One part is for the recording and archiving of sensor data and other telemetry from the system. Because it always makes sense to time stamp this data with when it happened, it is stored in a specialised time series database.
Another is the configuration and general data model of the system; this can be stored in the database of your choice. Care should be taken to make the data multi-tenanted so that multiple clients can use the same database, increasing density at the start and allowing you to move high-value clients to their own database if required.
A final data store may be required to aggregate the time series data and configuration for reporting and caching purposes. Typically this can be implemented once the data volume makes it cost effective. This store becomes increasingly useful once you introduce machine learning into the mix.
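To illustrate the first two parts, here is a rough sketch of a tenant-scoped, time-stamped telemetry table. SQLite is used purely for brevity; a real deployment would more likely use a dedicated time series database, and this schema is an assumption, not a prescribed design.

```python
# Illustrative tenant-scoped, time-stamped telemetry store.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE telemetry (
        tenant_id   TEXT NOT NULL,  -- lets many clients share one database
        thing_id    TEXT NOT NULL,
        recorded_at TEXT NOT NULL,  -- ISO 8601 timestamp of when it happened
        metric      TEXT NOT NULL,
        value       REAL NOT NULL
    )
""")

def record(tenant_id, thing_id, metric, value):
    db.execute(
        "INSERT INTO telemetry VALUES (?, ?, ?, ?, ?)",
        (tenant_id, thing_id, datetime.now(timezone.utc).isoformat(), metric, value),
    )

record("cafenxt", "building-sensor-01", "co2_ppm", 612.0)

# Queries are always scoped by tenant, so one client never sees another's data.
for row in db.execute("SELECT * FROM telemetry WHERE tenant_id = ?", ("cafenxt",)):
    print(row)
```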
An often forgotten part of the back-end is a “support portal” that allows the support team to monitor the overall health of the system, from queue lengths to last check-in times across all the things. It is good to have a portal that pulls together all the different places where monitoring and alerts get generated.
This is a good spot to automate all those repetitive tasks that the first and second line support staff need to perform in the normal course of supporting clients.
An added bonus is that this site can be bound to your existing identity provider, so that you can manage the various privacy and security concerns such a portal creates.
Once you have the things and the infrastructure to capture data and send commands, you need “client applications” to let your customers visualise and control the things.
Take CafeNXT: with all this in place, they need an application for their customers to choose content for their screen, see the health of the cafe, and see how that barista changed the lives of its employees.
We once again opt for the simple approach here: a multi-tenanted web application. The web application is tied into the rules engine and data stores, so that it can push commands out to the things and visualise the data coming in.
Here it is worth spending some time thinking about the different personas that may want access to the data and the various functions of the things. In the case of CafeNXT, it’s worth thinking about the facilities management view (who want to know utilisation and health), the content managers’ view (who want to be able to set the content of the screen and see its play log) and the general staff members’ view (who just want to see if there is coffee in the pot).
Sometimes it makes sense to have multiple separate client applications; usually this happens when you want to serve a specific slice of functionality. It may also make sense for CafeNXT to have an application that allows their technicians to see which CoffeeNXTs in their area need servicing next, or maybe they would get better buy-in from the staff if they had a button to press on their phone to order a coffee.
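As a small illustration of the persona idea, here is a hypothetical mapping from persona to the panels each may see in the client web application; the persona names and panels are just the CafeNXT examples above, not a prescribed access model.

```python
# Hypothetical persona-to-panel mapping for the client web application.
PERSONA_VIEWS = {
    "facilities_manager": {"utilisation", "building_health"},
    "content_manager": {"screen_content", "play_log"},
    "staff_member": {"coffee_level"},
}

def visible_panels(persona):
    # The web application would use this to decide which dashboards to render.
    return PERSONA_VIEWS.get(persona, set())

print(visible_panels("staff_member"))  # {'coffee_level'}
```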
An IoT platform is more than just the thing itself. When starting to build out the platform you should consider the thing API, rules engine, data stores, support portal and client applications. Because building all of these together can be a daunting task, it is better to start simple and scale the complexity of the system as it grows, rather than start complex and have to support it while the product is getting off the ground.
IoT products are often invented to bring good — to make people’s lives better and easier; they are also often used to bring scale — making possible what was previously impossible due to being too manual and too expensive. When a great IoT product is combined with the right team and a great business model, then it has the opportunity to bring a lot of good to the world. If this is your motivation and you are looking for a long-term software partner to help you turn your great IoT product into a great business, then please get in touch — we love to chat about software and business.