Siemens and the Ham-Let Group collaborated on a demo for the show that links Siemens MindSphere to a sensor system tracking environmental conditions such as temperature, pressure, vibration, humidity, and acoustics. The system monitors pressure and fluid flow, but adds a microphone to the mix, allowing it to send audio to a nearby gateway where artificial-intelligence (AI) algorithms analyze the acoustics. Over time, it learns which sounds indicate normal operation and which indicate a problem or an unknown state. Placing a machine-learning (ML) system in the gateway minimizes communication with the cloud while also providing local control over the system if necessary.
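The article doesn't describe the algorithms involved, but the gateway-side approach can be sketched in a minimal, hypothetical form: learn a baseline of spectral energy from audio frames captured during normal operation, then flag frames that deviate beyond a threshold. The class and function names, the coarse band-energy feature, and the z-score test below are all illustrative assumptions, not the MindSphere or OtoSense API.

```python
# Hypothetical sketch of gateway-side acoustic anomaly detection.
# Train on "known good" audio frames, then flag frames whose spectral
# band energies deviate strongly from the learned baseline.
import math
from statistics import mean, stdev

def band_energies(frame, bands=4):
    """Naive DFT magnitudes summed into a few coarse frequency bands."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    size = max(1, len(mags) // bands)
    return [sum(mags[i:i + size]) for i in range(0, size * bands, size)]

class AcousticMonitor:
    def __init__(self, threshold=8.0):
        self.threshold = threshold  # z-score cutoff for "anomalous"
        self.baseline = []          # per-band (mean, stdev) pairs

    def train(self, normal_frames):
        """Learn per-band energy statistics from normal-operation audio."""
        feats = [band_energies(f) for f in normal_frames]
        self.baseline = [(mean(col), stdev(col) or 1.0) for col in zip(*feats)]

    def is_anomalous(self, frame):
        """True if any band's energy deviates beyond the threshold."""
        scores = [abs(e - m) / s
                  for e, (m, s) in zip(band_energies(frame), self.baseline)]
        return max(scores) > self.threshold
```

In practice a production system would use proper FFT libraries, richer features, and a trained classifier rather than a fixed z-score cutoff, but the division of labor matches the article: training and inference both run on the gateway, so raw audio never has to leave the plant.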
MindSphere is a cloud-based platform-as-a-service (PaaS) designed to support the Industrial Internet of Things (IIoT). Much of MindSphere operates in the cloud, but its software components also run on gateways and edge nodes. It supports digital-twin technology, and the audio-processing capability simply extends the amount of information the digital twins can utilize, providing better insight into the current and future operation of the actual system.
Siemens and the Ham-Let Group aren’t the only companies listening to their customers’ devices. Analog Devices’ OtoSense software also employs machine learning. The firm showed off a pair of its sensor systems monitoring two motors. One of the motors had an unbalanced load that generated a different vibration and audio signature.
OtoSense is designed to work at the edge without requiring cloud support to handle AI chores. It can identify events and anomalies automatically with only an hour of self-training. Of course, tying an edge node to an OtoSense server makes it possible to monitor multiple devices while combining the learning from multiple sources. Analog Devices even offers a two-hour trial for evaluating the system. It can target almost any device, requiring only that the sensor system be mounted on or near the device.
Read the source article in Electronic Design.