Short-term future forecasting (months to years) is practical because it can be reasonably accurate.
Mid-term future forecasting (many decades to centuries) is entertaining because it is most likely inaccurate, but we cannot argue with the author since none of us will be around to check.
Long-term future forecasting (thousands to tens of thousands of years) is plain crazy: projecting forward for as long as mankind has existed lacks scientific merit and is for entertainment purposes only.
How about forecasting forward millions to billions of years?
The most interesting part of the video below is that it puts time in perspective by linking it to the amount of energy under our control and discussing the time needed to reach each level of control.
Previously, I have posted a summary of strange, interesting, and beautiful natural phenomena occurring in Colorado (link). Here is another one from May 27, 2017.
Thanks to Colorado resident Aidan Peairs, we now have this image of the Aurora Borealis over Longmont, Colorado.
This is a quick update on an earlier post (link) that described our experiment: testing our learning algorithm on light and temperature signals fed from an Arduino device placed in the Alchemy IoT software developers’ room. The experiment has now been running for about 2 weeks.
The image below shows two things:
- The top row shows what fraction of all events were flagged as “new anomalies” in the last month (although the test is only 2 weeks old), week, day, and hour. We do not control the events in the room much; we just walk around, turn the lights on and off, etc. The weather in Colorado has been quite unpredictable recently, fluctuating from cold, cloudy days to hot, sunny days, which also affects the light and temperature sensors.
- The bottom row shows how much learning the algorithm has done so far. For example, over the last 2 weeks it learned just 2.13% of anomalous events (165 out of 7740 detected anomalies), but learning is accelerating, as seen in the weekly pareto (3.44% learned) and the daily pareto (35.6%). The pareto for the last hour shows 100% learning efficiency, but this is a small statistical sample that changes rapidly with time and should not be taken seriously.
- The plot below provides a snapshot of the detection and learning process. The lower “unschooled” index shows many more yellow dots, each representing a newly detected anomaly event. The upper “learning” index shows fewer yellow dots because it interprets 30% to 50% of them as “repeating” (not novel) and ignores them.
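The windowed statistics above (fraction of anomalies recognized as already learned over the last month, week, day, and hour) can be sketched in a few lines. This is a minimal illustration, not the actual Alchemy IoT implementation; the event stream, the `learned_fraction` helper, and the timestamps are all hypothetical.

```python
# Sketch: fraction of anomaly events recognized as "already learned"
# within a trailing time window. Hypothetical data, not from the post.
from datetime import datetime, timedelta

def learned_fraction(events, now, window):
    """Fraction of anomalies within `window` of `now` that the
    algorithm treated as repeating (i.e. already learned)."""
    recent = [learned for ts, learned in events if now - ts <= window]
    if not recent:
        return 0.0
    return sum(recent) / len(recent)

# Hypothetical anomaly stream over the past two weeks:
# (timestamp, True if the event was recognized as repeating).
now = datetime(2017, 6, 12, 12, 0)
events = [
    (now - timedelta(days=13), False),
    (now - timedelta(days=6), False),
    (now - timedelta(days=5), True),
    (now - timedelta(hours=20), True),
    (now - timedelta(minutes=30), True),
]

for label, window in [("2 weeks", timedelta(weeks=2)),
                      ("1 week", timedelta(weeks=1)),
                      ("1 day", timedelta(days=1)),
                      ("1 hour", timedelta(hours=1))]:
    print(f"{label}: {learned_fraction(events, now, window):.1%} learned")
```

As in the post's numbers, shorter windows can show much higher learned fractions simply because they contain very few events, so the hourly figure fluctuates far more than the two-week one.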