
Fujitsu Develops New Deep Learning Technology To Analyze Time-Series Data With High Precision

Fujitsu Laboratories Ltd. today announced that it has developed deep learning technology(1) that can analyze time-series data with a high degree of accuracy. While time-series data holds promise for Internet-of-Things applications, it can also be subject to severe volatility, making it difficult for people to discern patterns in the data.

Deep learning technology, which is attracting attention as a breakthrough in the advance of artificial intelligence, has achieved extremely high recognition accuracy with images and speech, but the types of data to which it can be applied are still limited. In particular, it has been difficult to accurately and automatically classify volatile time-series data, such as that taken from IoT devices, in which people have difficulty discerning patterns.

Now Fujitsu Laboratories has developed an approach to deep learning that uses advanced mathematical techniques to extract geometric features from time-series data, enabling highly accurate classification of volatile time-series data.
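The release does not specify which mathematical techniques are used. As a purely illustrative sketch of the general idea of "geometric features from time-series data", one common method is time-delay embedding, which maps a 1-D signal into points in a higher-dimensional space so that its dynamics form a geometric trajectory a classifier can learn from. The function below is a hypothetical example, not Fujitsu's actual method:

```python
import numpy as np

def delay_embed(series, dim=3, tau=10):
    """Time-delay embedding: turn a 1-D series into `dim`-dimensional
    points [x(t), x(t+tau), ..., x(t+(dim-1)*tau)], so the signal's
    dynamics appear as a geometric shape (trajectory)."""
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i * tau : i * tau + n] for i in range(dim)], axis=1)

# A noisy oscillation embeds into a ring-like trajectory in 3-D space.
t = np.linspace(0, 8 * np.pi, 400)
signal = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
points = delay_embed(signal, dim=3, tau=10)
print(points.shape)  # (380, 3)
```

Geometric summaries of such trajectories (their shape, loops, and spread) are more stable under noise than the raw samples, which is one way volatile time-series data can be made amenable to classification.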

In benchmark tests using time-series data from the UC Irvine Machine Learning Repository(2), in which the task was to classify data captured from gyroscopes in wearable devices, the new technology was found to achieve roughly 85% accuracy, about a 25% improvement over existing technology.

This technology will be used in Fujitsu’s Human Centric AI Zinrai artificial intelligence technology.

Details of this technology will be presented at the Fujitsu North America Technology Forum (NAFT 2016), which will be held on Tuesday, February 16, in Santa Clara, California.

Background

In recent years, in the field of machine learning, which is a central technology in artificial intelligence, deep learning technology has been attracting attention as a way to automatically extract the feature values needed to interpret and assess phenomena, without rules being taught manually.

Especially in the IoT era, massive volumes of time-series data are being accumulated from devices. By applying deep learning to this data and classifying it with a high degree of accuracy, further analyses can be performed, holding the prospect that it will lead to the creation of new value and the opening of new business areas.

Issues

Deep learning is a potent machine learning technique, and it is attracting attention as a breakthrough in the progress of artificial intelligence, but so far it has been effectively applied only to limited types of data, such as images and speech.

In particular, for complex time-series data captured by sensors embedded in IoT devices and subject to severe oscillations, it has so far been difficult to achieve highly accurate classification using deep learning or any other machine learning technique.