This is a quick progress update on an earlier post (link) that described our experiment: testing our learning algorithm on the light and temperature signals fed from an Arduino device placed in the Alchemy IoT software developers’ room. The experiment has now been running for about two weeks.
The image below shows two things:
- The top row shows what fraction of all events was flagged as a “new anomaly” over the last month (although the test is only two weeks old), week, day, and hour. We do not control the events in the room much: we just walk around, turn the lights on and off, and so on. The weather in Colorado has also been unpredictable lately, swinging from cold, cloudy days to hot, sunny ones, which affects both the light sensor and the temperature sensor.
- The bottom row shows how much learning the algorithm has done so far. For example, over the last two weeks it learned just 2.13% of anomalous events (165 out of 7,740 detected anomalies), but learning is accelerating, as seen in the weekly Pareto (3.44% learned) and the daily Pareto (35.6%). The Pareto for the last hour shows 100% learning efficiency, but that is a very small sample and should not be taken seriously, since it changes rapidly over time.
The plot below provides a snapshot of the detection and learning process. The lower, “unschooled” index shows many more yellow dots, each representing a newly detected anomaly event. The upper, “learning” index shows fewer yellow dots: it interprets 30% to 50% of them as “repeating” or “not novel” and ignores them.
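The window statistics quoted above boil down to a simple ratio of learned to detected anomalies per time window. A minimal sketch of that calculation (the function name is illustrative, not Alchemy's actual code; the counts are the two-week figures from this post):

```python
# Hedged sketch (not the actual Alchemy implementation): the "fraction
# learned" statistic reported per time window is just the share of
# detected anomalies that the algorithm classified as already learned.

def fraction_learned(learned: int, detected: int) -> float:
    """Percentage of detected anomalies marked as learned in a window."""
    if detected == 0:
        # Avoid division by zero for an empty window.
        return 0.0
    return 100.0 * learned / detected

# Two-week window from the post: 165 learned out of 7,740 detected.
print(f"{fraction_learned(165, 7740):.2f}%")  # 2.13%
```

The hourly 100% figure in the post illustrates why small windows are noisy: with only a handful of detections, a single event swings this ratio dramatically.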
Our goal for this experiment is to find the limits of our algorithm’s learning ability in a realistic and fairly random environment.
P.S. Special thanks to Noel Lane and Nick Roseveare for setting up and conducting this experiment.