T-SOUL

Vol.20 Artificial Intelligence Discovering Values of IoT Data -Analytics towards Deep Learning-


#04 Overview and Application Examples of Network Optimization Technology  Self-Growing Neural Networks Expand Data Analysis Fields  Takashi Morimoto Chief Specialist, Deep Learning Technology Department, IoT Technology Center, Industrial ICT Solutions Company, Toshiba Corporation

Deep learning is approaching practical use in industrial fields, where markedly higher accuracy is demanded than in general-purpose applications. Nevertheless, as analysis targets expand beyond images and voice to IoT (Internet of Things) data obtained from various sensors and machines, it is not easy to devise mechanisms for learning and inference that actually solve user challenges. This is because achieving the best results with deep learning requires optimizing the neural network by adjusting countless parameters, such as the number of nodes and layers that make up the network. Toshiba has therefore established a technology that automatically builds an optimal neural network while autonomously adjusting these parameters, and has obtained satisfactory results through repeated verification experiments in a variety of fields. This article introduces Toshiba's "Neural Network Optimization Technology," a "third arrow" that breaks through the initial barrier to introducing deep learning, alongside the first and second arrows: the "Automatic Deep Learning Environment" and "Parallel Distributed Learning Technology."

Neural Networks Becoming Deeper and Rapidly Advancing

The neural network that supports deep learning is an information-processing mechanism modeled on the workings of the brain. It consists of three types of layers: an input layer, intermediate layers, and an output layer. Data to be analyzed is processed arithmetically as it passes through these layers.

Deep learning enhances the accuracy of learning models by increasing the number of intermediate layers. For example, when learning the features of data to determine whether the data are normal or abnormal, each layer, consisting of mathematical models of neurons*, applies weights to the feature quantities received from the previous layer and extracts new feature quantities to propagate to the next layer. Learning then proceeds in the opposite direction: the error in the inference result is propagated back from the later layers to the earlier ones while the parameters are adjusted to minimize it.

* Mathematical model of a neuron: a mathematical model that imitates a neuron, the unit element of a neural circuit.
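The forward propagation of feature quantities and the backward propagation of error described above can be sketched in a few lines of numpy. This is a minimal illustration with made-up toy data, a single intermediate layer, and arbitrarily chosen sizes and learning rate; it is not the article's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))           # 4 samples, 3 input features (toy data)
y = rng.normal(size=(4, 1))           # target values to learn

W1 = rng.normal(size=(3, 5)) * 0.1    # input layer -> intermediate layer weights
W2 = rng.normal(size=(5, 1)) * 0.1    # intermediate layer -> output layer weights

def forward(x):
    h = np.tanh(x @ W1)               # weighted sum + nonlinearity: a new feature quantity
    return h, h @ W2                  # propagated on to the output layer

_, pred0 = forward(x)
initial_loss = float(np.mean((pred0 - y) ** 2))

for _ in range(200):
    h, pred = forward(x)
    err = pred - y                    # inference error at the output layer
    grad_W2 = h.T @ err / len(x)      # propagate the error backwards through the layers...
    grad_h = err @ W2.T * (1 - h**2)
    grad_W1 = x.T @ grad_h / len(x)
    W2 -= 0.5 * grad_W2               # ...and adjust the parameters to reduce it
    W1 -= 0.5 * grad_W1

_, pred_final = forward(x)
final_loss = float(np.mean((pred_final - y) ** 2))
```

Each pass through the loop performs one "back-and-forth" cycle: a forward pass to compute the inference, then a backward pass that adjusts the weights to shrink the error.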

The secret behind the high accuracy of deep learning is this deep network structure combined with back-and-forth information propagation. It enables machines to distinguish images, recognize voice with high accuracy, detect objects, and predict or infer future events, in much the same way that humans classify, recognize, detect, and understand objects. The deepening of neural networks has accelerated dramatically over the past several years. At the image recognition competition sponsored by ImageNet in 2012, "AlexNet," with its eight layers, achieved a recognition error rate of 16.4%, an accuracy that far outdistanced conventional machine learning; this result thrust deep learning into the limelight. Deeper networks followed: in 2014, "VGG" with 19 layers and "GoogLeNet" with 22 layers drastically improved recognition accuracy, and in 2015, "ResNet," an ultra-deep network of 152 layers that still maintained learning efficiency, finally achieved a recognition error rate of 3.56%, surpassing human performance.

Neural networks are being deepened by increasing the number of layers, and their accuracy is rising rapidly. At the same time, these very advances, which give neural networks perception remarkably similar to that of humans, are making it difficult to introduce and practically implement deep learning in the industrial field.


Automated Optimization Technology That Fully Utilizes General-Purpose Networks

At present, deep learning is rapidly attracting intense interest in industry. However, high hurdles must be overcome before it can be put into practical use.

The objects to be analyzed in the industrial field vary widely, from images and voice to time-series IoT data collected from sensors. Analysis of these data is expected to deliver substantial results such as optimized production lines, higher production yields, and stable operation of social infrastructure. Nevertheless, the more layers are needed to achieve the accuracy that solves these critical challenges, the more effort is needed to refine the network's structure: diverse parameters, such as the number of nodes and layers, must be adjusted precisely, a complicated task that places a heavy burden on engineers. Whether these goals can be accomplished with deep learning ultimately depends on the capability and know-how of the experts and engineers who build the neural networks.

To overcome this challenge, Toshiba has established a technology that automatically optimizes a multilayered, complex neural network. First, a general-purpose basic network (Fig. 1) is selected according to the task and data. Learning then starts with only minimal conditions set, and hyperparameters such as the number of network layers and units (neurons) are adjusted through repeated learning and evaluation of the results. This epoch-making technology allows a neural network to grow into an optimal one autonomously, flexibly altering its own configuration.

Fig.1 Major basic network used in optimizing networks
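The "learn, evaluate, adjust" loop described above can be sketched as a simple search over hyperparameters. In this illustrative sketch, `evaluate()` is a stand-in for actually training and scoring a candidate network (a real system would train each candidate and measure its validation error); the search space, budget, and scoring function are arbitrary assumptions for demonstration only.

```python
import random

random.seed(42)

def evaluate(num_layers, units_per_layer):
    # Stand-in for: build the candidate network, train it, return a score.
    # This toy function simply pretends (4 layers, 64 units) is optimal.
    return -abs(num_layers - 4) - abs(units_per_layer - 64) / 16

best = None
for _ in range(50):                                       # repeated learning and evaluation
    candidate = (random.randint(1, 8),                    # number of intermediate layers
                 random.choice([16, 32, 64, 128, 256]))   # units (neurons) per layer
    score = evaluate(*candidate)
    if best is None or score > best[0]:
        best = (score, candidate)                         # keep the best configuration found

best_score, (layers, units) = best
```

Production systems use far more sophisticated search strategies than pure random sampling, but the skeleton is the same: propose a configuration, learn, evaluate, and keep adjusting.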

What is critical in this process is the ability to select the right basic network, because the required network configuration differs depending on the field in which the neural network is to be used. For example, a convolutional neural network (CNN) is used for image processing tasks such as detecting moving objects and recognizing characters, whereas a recurrent neural network (RNN) is used for voice recognition and for processing time-series data. Drawing on its many years of deep learning research, Toshiba has accumulated the know-how to determine the basic network best suited to a user's challenge and, where necessary, to combine plural basic networks. Through repeated verification of deep learning applications with network optimization, Toshiba has achieved results in many cases.
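The data-type-to-network mapping described above can be pictured as a small dispatch table. The categories and fallback below are illustrative assumptions, not Toshiba's actual selection logic, which the article notes also combines plural basic networks.

```python
# Illustrative mapping from the kind of data/task to a basic network type.
BASIC_NETWORKS = {
    "image": "CNN",         # moving-object detection, character recognition
    "voice": "RNN",         # voice recognition
    "time_series": "RNN",   # sensor data ordered in time
}

def select_basic_network(data_type):
    # Fall back to a fully connected (FC) network for other tabular data.
    return BASIC_NETWORKS.get(data_type, "FC")
```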


Driving Forward Verification Experiments of Deep Learning through Network Optimization

Toshiba classifies the values that deep learning can provide to users into three: "recognition," "prediction/inference," and "control." (More information is contained in Article #01.)
To verify the value that can be provided by analyzing two groups of data, Toshiba is conducting verification experiments using deep learning with network optimization. These groups are "time-series data," which record the variation over time of measured values for specified items, and "cross-sectional data," which collect the measured values of plural items at a single point in time.

① "Data Abnormalities" Detected by Learning Hidden Relationships

Adopting a basic network of the autoencoder (AE) type, Toshiba analyzed time-series data such as temperature, humidity, and atmospheric pressure collected from sensors attached to parcels. By learning the data of cold-storage parcels as normal values, the network discovered the relationship between temperature and humidity and turned it into a model. Even when the temperature data of normal parcels was falsified to look like cold-storage temperatures, the model discerned the abnormality from the inconsistent relationship between temperature and humidity. This shows that deep learning can detect data abnormalities that analysis of a single sensor's data alone would miss.
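The idea can be illustrated with a minimal sketch on synthetic data. Here a linear autoencoder with a one-dimensional bottleneck (computed directly via SVD, since the leading principal direction is the optimal linear autoencoder) stands in for the AE-type basic network; the sensor values, the temperature-humidity relationship, and the falsified sample are all made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(4.0, 0.5, size=200)              # cold-storage temperatures (degC, synthetic)
h = 60.0 + 2.0 * t + rng.normal(0.0, 0.2, 200)  # humidity tracks temperature closely
X = np.column_stack([t, h])                     # normal training data only
mean, std = X.mean(axis=0), X.std(axis=0)
Z = (X - mean) / std

# Learn the normal temperature-humidity relationship: the leading
# principal direction acts as a 1-D linear autoencoder bottleneck.
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[0]

def reconstruction_error(sample):
    z = (sample - mean) / std
    return float(np.sum((z - (z @ v) * v) ** 2))  # distance from the learned relationship

threshold = max(reconstruction_error(x) for x in X)  # worst error on normal data

# Falsified reading: the temperature alone looks like cold storage,
# but the humidity no longer matches the learned relationship.
falsified = np.array([4.0, 80.0])
is_anomaly = reconstruction_error(falsified) > threshold
```

Looking at the temperature channel in isolation, the falsified value is indistinguishable from a genuine cold-storage reading; only the learned joint relationship exposes it.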

② Electric Power Output of Solar Power Generation in Rainy Weather Forecast with High Accuracy

Using a basic network of the fully connected (FC) type, Toshiba predicted the electric power output of solar power generation in rainy weather, which was difficult with conventional analytical techniques. The weather forecast for the following day was fed into a network optimized by learning the past relationship between weather forecasts and power output. With the conventional data analysis technique, the prediction error exceeded 20% of the actual output 32.3% of the time; the optimized network reduced this proportion to 21.9%. This enables high-accuracy prediction of solar power output even in rainy weather. (Fig. 2)

Fig.2	Prediction of electric power output of solar power generation
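The evaluation metric quoted above, the proportion of forecasts whose error exceeds 20% of the actual output, is straightforward to compute. The output values below are invented for illustration; they are not the experiment's data.

```python
import numpy as np

# Illustrative actual vs. forecast solar power output (arbitrary units).
actual = np.array([120.0, 80.0, 45.0, 200.0, 60.0])
predicted = np.array([115.0, 100.0, 44.0, 150.0, 61.0])

relative_error = np.abs(predicted - actual) / actual
large_error_rate = float(np.mean(relative_error > 0.20))  # fraction of forecasts off by >20%
```

In this toy sample, two of the five forecasts miss by more than 20%, so the rate is 0.4; the verification experiment reduced the corresponding proportion from 32.3% to 21.9%.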

③ Automated Detection of Abnormal Machine Parts

In this verification experiment, basic networks of the convolutional neural network (CNN) type and the long short-term memory (LSTM) type, which has superior convergence properties, were combined. First, short-period waveform patterns were modeled by the CNN from electric current and acceleration data obtained from machine sensors under normal and abnormal conditions. Next, the sequential order in which these waveforms appeared was modeled by the LSTM. Network optimization was accomplished through these two stages. As a result, Toshiba achieved a precision exceeding 90%, reaching the target rate for detecting abnormalities in the motive power parts of machines without manually designing feature quantities.
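The two-stage data flow can be sketched at the architecture level: a 1-D convolution extracts short-period waveform patterns, and a recurrent layer (a plain recurrent cell standing in for the LSTM) folds the sequence of patterns into a single state. The weights here are random and untrained, and all sizes are illustrative assumptions; the point is the shape of the pipeline, not a working detector.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=256)          # synthetic current/acceleration waveform

kernel_size, n_filters, hidden = 8, 4, 16
kernels = rng.normal(size=(n_filters, kernel_size)) * 0.1

# Stage 1 (CNN): slide each kernel over the waveform to score
# short-period patterns at every position.
windows = np.lib.stride_tricks.sliding_window_view(signal, kernel_size)
features = np.maximum(windows @ kernels.T, 0.0)   # (timesteps, n_filters), ReLU

# Stage 2 (recurrent, standing in for LSTM): fold the sequence of
# pattern scores into a hidden state, modeling the order of appearance.
Wx = rng.normal(size=(n_filters, hidden)) * 0.1
Wh = rng.normal(size=(hidden, hidden)) * 0.1
state = np.zeros(hidden)
for f in features:
    state = np.tanh(f @ Wx + state @ Wh)

# Final readout: squash the last state into an abnormality score in (0, 1).
Wo = rng.normal(size=(hidden,)) * 0.1
score = float(1 / (1 + np.exp(-(state @ Wo))))
```

A real LSTM adds gating to this recurrence so that long-range order information survives training, which is why it was chosen for the second stage.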

In addition to these experiments, Toshiba is actively verifying deep learning that fully utilizes its network optimization technology in areas such as identifying the causes of quality defects on production lines and improving power consumption efficiency at data centers. For the basic networks that are the major elements of deep learning, Toshiba is also actively developing its own frameworks. By combining networks of the autoencoder (AE) type and the generative adversarial network (GAN) type, it has developed a neural network that does not overlook even exceptional abnormalities, thoroughly eliminating detection omissions and false detections while maintaining high accuracy.

Many potential users cannot achieve the expected accuracy because it is unclear how the numerous parameters should be adjusted, and the high cost and manpower required to build learning models make them hesitate to introduce deep learning at all. We believe that putting deep learning to work in the industrial field truly requires a capable partner who can break through this impasse on behalf of customers. Toshiba will continue its pioneering effort to refine its network optimization technology so that many more customers can implement deep learning easily and with high accuracy for a wide variety of challenges.

* The corporate names, organization names, job titles and other names and titles appearing in this article are those as of March 2017.
