This is a short description of implementing event processing networks (EPNs) using IBM Node-RED. Although Node-RED was designed for wiring Internet of Things (IoT) devices, its programming model closely follows that of an event processing network, making it an excellent open-source tool for prototyping an EPN. To install Node-RED, follow the installation guide on its website; the Node-RED programming guide is also a good resource.

We shall assume that we already have an EPN in mind to implement. In this tutorial, we will model the following EPN, which is designed to detect traffic accidents.

EPN to detect traffic accidents

The basic flow of events is as follows: the on-board unit (OBU) of a car detects a possible collision and sends out a "Possible Crash Event – OBU" event. The event is then enriched with information from the "Vehicle Registration Database", which is a global state element. When the event reaches the event processing agent (EPA) "Compose Accident Info", the agent opens a spatial and temporal context window and waits for image-detection confirmation from a traffic camera. When a crash image from the same location and time frame is detected, the EPA logs the accident in a database. The affected area is then calculated based on domain expertise and a knowledge base, and an alert is sent to the relevant dashboard and on-board units.
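The enrichment step can be sketched as a small JavaScript function of the kind one would paste into a Node-RED function node. The field names and the in-memory lookup table below are illustrative assumptions; in the actual flow the Vehicle Registration Database is a PostgreSQL global state element rather than an object literal.

```javascript
// Illustrative stand-in for the Vehicle Registration Database (global state).
// In the real flow this lookup is a PostgreSQL query, not an in-memory object.
const vehicleRegistry = {
  "SGX1234A": { make: "Toyota", model: "Corolla", owner: "J. Tan" }
};

// Enrich an incoming "Possible Crash Event - OBU" with registration details.
// Unknown plates yield a null registration so downstream agents can decide
// how to handle unregistered vehicles.
function enrichCrashEvent(event) {
  const registration = vehicleRegistry[event.plate] || null;
  return { ...event, registration };
}

// Example: an OBU event carrying a plate number, position, and timestamp
// (all field names are assumptions for this sketch).
const enriched = enrichCrashEvent({
  type: "PossibleCrash-OBU",
  plate: "SGX1234A",
  lat: 1.3521,
  lon: 103.8198,
  ts: Date.now()
});
```

In a function node, the same logic would read the incoming event from `msg.payload` and return the enriched message to the next node in the flow.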

We implement the network in IBM Node-RED using WebSocket nodes as event producers and channels, function nodes as event processing agents, and PostgreSQL database connection nodes as global state elements.
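A Node-RED flow is exported as a JSON array of wired nodes. The fragment below is a hedged sketch of how the front of this EPN might look when exported; the node ids, names, and the `postgres` node type (which comes from a community contrib package) are illustrative assumptions, and real exports carry additional properties such as positions and tab ids.

```json
[
  { "id": "ws_in",   "type": "websocket in", "name": "OBU events",
    "server": "obu_listener", "wires": [["enrich"]] },
  { "id": "enrich",  "type": "function", "name": "Enrich with Vehicle Registration",
    "wires": [["compose"]] },
  { "id": "compose", "type": "function", "name": "Compose Accident Info",
    "wires": [["log_db"]] },
  { "id": "log_db",  "type": "postgres", "name": "Log Accident", "wires": [] }
]
```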

Node-RED representation of the EPN

Due to time constraints, we substituted a toggle switch for the image detection nodes. When building the "Compose Accident Info" agent, we found the "context" concept of Node-RED similar to the "stateful agent" concept in EPNs: storing incoming message values in a node's context allows the node to remember them when the next message arrives.
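The stateful matching inside the composing agent can be sketched as follows. The window sizes, event type strings, and field names are assumptions made for illustration; inside a real Node-RED function node, `ctx` would be the built-in `context` object (`context.get`/`context.set`), while here a tiny stub stands in so the logic runs standalone.

```javascript
// Illustrative context windows (assumed values, not from the original flow).
const TIME_WINDOW_MS = 60 * 1000; // temporal window
const MAX_DIST_DEG = 0.001;       // crude spatial window in degrees

// Build a stateful agent: buffer OBU crash events, and emit a composed
// accident event when a camera confirmation arrives from roughly the same
// place within the time window.
function makeAgent(ctx) {
  return function onMessage(msg) {
    const pending = ctx.get("pending") || [];
    if (msg.type === "PossibleCrash-OBU") {
      pending.push(msg);            // remember the event for later matching
      ctx.set("pending", pending);
      return null;                  // wait for camera confirmation
    }
    if (msg.type === "CrashImage-Camera") {
      const match = pending.find(e =>
        Math.abs(e.lat - msg.lat) < MAX_DIST_DEG &&
        Math.abs(e.lon - msg.lon) < MAX_DIST_DEG &&
        Math.abs(e.ts - msg.ts) < TIME_WINDOW_MS);
      if (match) {
        ctx.set("pending", pending.filter(e => e !== match));
        return { type: "AccidentConfirmed", obu: match, camera: msg };
      }
    }
    return null;
  };
}

// Minimal in-memory stand-in for Node-RED's node context.
function stubContext() {
  const store = {};
  return { get: k => store[k], set: (k, v) => { store[k] = v; } };
}

const agent = makeAgent(stubContext());
agent({ type: "PossibleCrash-OBU", lat: 1.3521, lon: 103.8198, ts: 0 });
const out = agent({ type: "CrashImage-Camera", lat: 1.3521, lon: 103.8198, ts: 5000 });
```

Returning `null` from a function node suppresses output, which is how the agent "holds" an event until its confirming counterpart arrives.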

We also found that Node-RED can connect to IBM Watson through node extensions; using the IBM Watson image recognition API in place of the toggle switch could be a future enhancement.

Note: This post is based on the work done for KE5208 Sense Making and Insight Discovery CA project completed in November 2016.

Team members: Randy Phoa (A0135933A), Chan Chia Hui (A0135940H), Zay Yar Lin (A0090806E)