
Post-Watson, IBM's latest hardware and software make it easier to train deep-learning systems

It was just five years ago that IBM's supercomputer Watson crushed its opponents on the televised quiz show Jeopardy. At the time, no one could have predicted that artificial intelligence would become an undeniable part of our daily lives. Since then, IBM has expanded the Watson brand into a cognitive-computing package of hardware and software used to diagnose diseases, explore for oil and gas, run scientific computing models, and help cars drive autonomously. The company has now announced new AI hardware and software packages.

In the beginning, Watson was programmed to apply advanced algorithms and a natural-language interface to find and narrate answers. Today, Watson and AI systems like it are deployed on a much grander scale. Mega data centers run by Facebook, Google, Amazon, and other companies use AI on thousands of servers to recognize images and speech and to analyze huge volumes of data.

IBM has a number of initiatives to bring artificial intelligence to other companies. It continues to introduce powerful hardware that helps deep-learning systems analyze data and answer complex questions faster. Most recently, the company has paired those superfast systems with new software tools.

IBM's latest hardware, together with a set of software tools named PowerAI, is designed for training software to perform AI tasks such as image and speech recognition. The better trained a system is, the more accurate its results. That kind of training demands a great deal of computing horsepower, which is now available.

The first piece of hardware is a Power8 server equipped with Nvidia Tesla GPUs, said Sumit Gupta, IBM's vice president of high-performance computing and analytics. It is the fastest deep-learning system available, Gupta said. The Power8 CPUs and Tesla P100 GPUs are among the fastest chips available, and they are linked via Nvidia's NVLink interconnect, which outperforms PCI-Express 3.0 (roughly 80GB/s of GPU bandwidth per direction, versus about 16GB/s for a 16-lane PCI-Express 3.0 slot). Nvidia's GPUs power many deep-learning systems at companies like Google, Facebook, and Baidu.

“Performance is very important as deep learning training jobs run for days,” Gupta said. It's also important to speed up key technologies like storage and networking, he said. The Power8 hardware is available via the Nimbix cloud, which provides bare-metal access to hardware and an InfiniBand backend.

The company is also planning inferencing hardware and software, which requires lighter processing on the edge or end device. An inferencing engine performs tasks such as understanding input and producing results on the basis of an already-trained model, applying new input or data to deliver improved results. Drones, robots, and autonomous cars use inferencing engines for navigation, image recognition, or data analysis.
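As a rough illustration of the difference between training and inference, the Python sketch below loads an already-trained Caffe model and classifies a single image with one forward pass. The file names (deploy.prototxt, model.caffemodel, dog.jpg) and the 'prob' output blob are hypothetical placeholders for this example, not details from IBM's announcement.

```python
import caffe

# Inference relies on a model trained elsewhere. The network definition
# (deploy.prototxt) and learned weights (model.caffemodel) are
# hypothetical placeholder file names.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Load one image and reshape it to the layout the network expects:
# (batch, channels, height, width), with channels first.
image = caffe.io.load_image('dog.jpg')  # hypothetical input image
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))  # HWC -> CHW
net.blobs['data'].data[...] = transformer.preprocess('data', image)

# A single forward pass yields class scores; no weights are updated,
# which is why inference is far lighter than training.
output = net.forward()
top_class = output['prob'][0].argmax()  # assumes an output blob named 'prob'
print('Predicted class index:', int(top_class))
```

Because no gradients are computed and no weights change, a pass like this is far cheaper than training, which is what makes it feasible on drones, robots, and other edge devices.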

Inferencing is also used in data centers to serve deep-learning models. Several tech companies have created their own chips for this, for example Google with its Tensor Processing Unit (TPU), along with KnuEdge, Graphcore, and Wave Computing. IBM is working on a different model for its inferencing hardware and software, Gupta said. He did not provide any further details.

Software is the glue that ties IBM's AI hardware into a cohesive package. IBM has forked the open-source Caffe deep-learning framework to run on its Power hardware, and it also supports other frameworks and libraries such as TensorFlow, Theano, and OpenBLAS.

The frameworks are sandboxes in which users can create a computer model and tweak its parameters so that it learns to solve a particular problem. Caffe is widely used for image recognition.
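To give a flavor of what working in such a sandbox looks like, here is a minimal, hypothetical sketch in the TensorFlow 1.x style of the era: it declares a two-parameter model and lets the framework tune those parameters against toy data. It is purely illustrative and is not taken from IBM's PowerAI distribution.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x-style API, current at the time

# Toy data for a problem the model must learn: y = 3x + 1, plus noise.
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 1.0 + np.random.normal(0.0, 0.05, 100).astype(np.float32)

# The tweakable parameters: a weight and a bias, both adjusted by training.
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y_pred = w * x_data + b

# Training minimizes the mean squared error between prediction and data.
loss = tf.reduce_mean(tf.square(y_pred - y_data))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_step)
    print('learned w and b:', sess.run([w, b]))  # should approach 3.0 and 1.0
```

The sandbox aspect is that the user only declares the model and the loss; the framework handles the gradient computation and parameter updates, on a laptop or on GPU-accelerated servers alike.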

Article by Bharti Amlani