Inspur Will Unveil Its New AI Supercomputer and AIStation at GTC 2017
SAN JOSE, Calif., May 8, 2017 /PRNewswire/ -- The GPU Technology Conference 2017 (GTC17) will be held May 8-11 in San Jose, USA. This year's GTC covers 12 core themes, including deep learning and artificial intelligence (AI), autonomous vehicles, virtual reality (VR) and augmented reality (AR), and computer and machine vision, with 654 sessions and 799 speakers sharing their knowledge and experience across the GPU application fields.
During GTC 2017, Inspur and NVIDIA will jointly present a cutting-edge AI supercomputer, the AGX-2, with NVLink™ 2.0 enabled. The AGX-2 is designed to provide maximum throughput for superior application performance in science and engineering computing, taking AI computing to the next level. As a Platinum Sponsor, Inspur will also present various AI application-oriented servers, such as the SR-AI Rack, NF5280M5 and NX5460M4, along with AIStation, a complete AI deep learning cluster management system.
Below are the key highlights of the products displayed at Inspur booth (#911):
- SR-AI Rack – Highest Density of GPU System
The SR-AI Rack, jointly launched by Inspur and Baidu at the Inspur Partner Forum (IPF) 2017 on April 26, is the world's first rack server to adopt a PCIe fabric interconnect architecture. It breaks the traditional coupled GPU/CPU server architecture by connecting the upstream CPU computing/scheduling node to the downstream GPU Box through a PCIe switch. This allows CPUs and GPUs to be expanded independently, eliminating the redundant parts required when upgrading a conventional architecture. A single GPU Box in the SR-AI Rack supports up to 16 NVIDIA Tesla GPU cards, offering greater expansion capacity than current mainstream GPU servers with 4 or 8 cards. The SR-AI Rack can also cascade up to 4 GPU Boxes via the PCIe switch, for up to 64 GPUs in one daisy chain, creating a massive pooled GPU computing resource.
- NX5460M4 – Designed for Compute-Intensive and Mission Critical Tasks
The NX5460M4 is a high-performance blade server in Inspur's I9000 converged-architecture blade series, specially optimized for deep learning applications. It supports up to 8 deep learning computing nodes and 16 GPU accelerator cards in a 12U space, alongside high-density servers, 4- and 8-socket mission-critical business servers, software-defined storage and multiple computing schemes, including heterogeneous computing, providing commercial enterprise customers with a deep learning infrastructure featuring high reliability and high performance.
- NF5280M5 – Interchangeable High Scalability AI Server
Based on the next-generation Intel® Xeon® processor microarchitecture, Inspur's new NF5280M5 enterprise rackmount server is ideal for enterprise cloud applications, distributed file system servers, ERP, small and medium-scale databases, virtualization and other high-end enterprise applications.
- AIStation - A Complete Cluster Management Software with Tools for Control, Analysis And Productivity
AIStation is a complete AI deep learning cluster management system that enables simple and flexible deep learning and makes it easy to train and combine popular model types across multiple GPUs and servers. Designed with speed, ease of control and efficient resource management in mind, it allows a more flexible way to organize computation and puts the power of deep learning into the hands of engineers and data scientists.
- T-eye - An Analytics Software Application Developed by Inspur
T-eye is mainly used to analyze how AI applications consume hardware and system resources when running on GPU clusters, revealing an application's operational characteristics, hotspots and bottlenecks, and thus helping users make targeted adjustments and optimizations to the application's algorithms.
- Caffe-MPI – The World's First High-Performance MPI Version of Caffe
Caffe-MPI is an open-source, clustered version of Caffe developed by Inspur that enables Caffe, the industry's leading deep learning framework, to perform efficient multi-node parallel learning. Caffe-MPI not only achieves better computational efficiency than standalone multi-GPU solutions, but also supports distributed cluster expansion.
During GTC 2017, Hu Leijun, Vice President of Inspur, will host a technical session titled "The Path to End to End AI Solution" and give live presentations on several AI-related topics, such as "SR-AI Rack: Hyper-scale GPU Expansion Box by Inspur & Baidu" and "A High Performance Caffe on the Multi-Node Cluster," at the Inspur booth.
Visit us at the NVIDIA GPU Technology Conference 2017, May 8-11, at the McEnery Convention Center in San Jose, California. For more information on Inspur's innovative GPU-accelerated solutions, visit www.inspursystems.com.
SOURCE Inspur Electronic Information Industry Co., Ltd