According to a Bitkom study, AI is considered a trend-setting future technology. Yet actual adoption of AI technologies is progressing slowly: "Only eight percent of German companies invested in AI last year, and only a meager four percent use AI productively," according to a PwC study.
The causes are manifold. They range from know-how deficits to acceptance problems to opaque technology offerings. Most IT decision-makers have by now realized one thing: standard systems are unsuitable for computationally intensive machine learning (ML) and deep learning (DL). Yet it is precisely these ML/DL applications that deliver the greatest benefit from AI:
Fraud analysis and automated trading systems for financial service providers.
Online retail personalization and product recommendations.
Monitoring systems for physical corporate security.
Geological analyses for the exploration of gas and oil deposits.
Detection of traffic anomalies to improve cybersecurity.
Automation of IT operations.
ML algorithms process large numbers of matrix multiply-accumulate floating-point operations in parallel. In this respect they resemble well-known image-processing functions such as pixel shading and ray tracing. Consequently, graphics processing units (GPUs) are better suited to ML than standard CPUs (central processing units). The first ML solutions were based on off-the-shelf GPUs, but there are now GPUs specifically tailored to ML workloads.
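To make this concrete, a minimal sketch in Python/NumPy: a single dense neural-network layer reduces to exactly the multiply-accumulate pattern described above. The layer sizes below are arbitrary and chosen only for illustration.

```python
import numpy as np

# Toy illustration: a dense layer is a matrix multiplication (multiply)
# followed by a bias addition (accumulate) -- the floating-point workload
# that dominates ML training and that GPUs parallelize well.
rng = np.random.default_rng(0)
batch, n_in, n_out = 64, 256, 128   # arbitrary example sizes

x = rng.standard_normal((batch, n_in)).astype(np.float32)   # activations
w = rng.standard_normal((n_in, n_out)).astype(np.float32)   # weights
b = rng.standard_normal(n_out).astype(np.float32)           # bias

y = x @ w + b   # the multiply-accumulate core of the workload

# Rough floating-point operation count for this one layer:
# each output element needs n_in multiplies and n_in adds, plus the bias add.
flops = 2 * batch * n_in * n_out + batch * n_out
print(y.shape, flops)
```

Even this tiny layer performs several million floating-point operations; real models stack thousands of such layers, which is why throughput-oriented hardware matters.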
But the GPU alone is not what matters; far more important is the optimal interplay of all hardware and software components, as modern dedicated servers already offer. This system specialization, however, means that performance differences can no longer be explained by basic technical specifications alone. Instead, benchmarks are needed that determine performance values for precisely defined reference models. This enables objective comparison of individual systems, which is important for procurement.
MLPerf is one such benchmark: it objectively measures and compares performance on everyday ML problems. The test was developed by MLCommons, a consortium founded in 2018 by 50 leading AI companies and organizations, including Google, Nvidia and the Chinese high-tech supplier Inspur.
The benchmark is carried out four times a year, and more leading AI companies participate in each round. In addition to Inspur, these include Nvidia, Intel, Qualcomm, Alibaba, Dell and HPE. Last year the rounds Inference 1.0, Training 1.0, Inference 1.1 and Training 1.1 were carried out; the most recent of these involved 14 organizations that submitted 186 results. The training suite consisted of eight practical tasks: image classification (computer vision), lightweight object detection (faces, buildings), heavyweight object detection (filtering out and masking dominant objects), medical image segmentation (3D objects), speech recognition (ASR), natural language processing (NLP), recommendation (purchases, contacts) and reinforcement learning (the strategy game Go).
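The core idea behind such benchmarks can be sketched in a few lines: run a fixed workload and report samples processed per second. This is a conceptual sketch only, not the actual MLPerf harness (the real suite uses a far more elaborate load generator and strict rules); the `model` function here is a trivial stand-in for any inference or training step.

```python
import time

def model(batch):
    """Placeholder 'inference' step -- stands in for a real model."""
    return [x * 2 for x in batch]

def measure_throughput(fn, batches):
    """Return samples processed per second over the given batches."""
    n_samples = sum(len(b) for b in batches)
    start = time.perf_counter()
    for b in batches:
        fn(b)                       # run the fixed workload
    elapsed = time.perf_counter() - start
    return n_samples / elapsed      # higher is better

# Fixed, reproducible workload: 100 batches of 32 samples each.
batches = [list(range(32)) for _ in range(100)]
throughput = measure_throughput(model, batches)
print(f"{throughput:.0f} samples/sec")
```

Because the workload is identical for every submitter, the resulting samples-per-second figures can be compared directly across systems, which is exactly what makes benchmark results useful for procurement.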
The most recent MLPerf round took place at the end of last year, and Inspur came out on top in seven of the eight tests, using two servers specially optimized for AI. The NF5688M6 was the top performer in natural language processing, heavyweight object detection, recommendation and medical image segmentation; the NF5488A5 led in image classification, lightweight object detection and speech recognition. Inspur's full-stack concept, which enables significantly higher AI training speed, also played a major role: it allows Inspur's AI servers to process up to 27,400 frames per second.
On the system side, the outstanding performance of the Inspur servers is due to the close integration of hardware, software and system-level technologies. For example, the PCIe Retimer Free Design enables a high-speed connection between CPU and GPU, providing bottleneck-free I/O transmission during AI training. The NF5488A5 is one of the first servers with Nvidia's A100 GPU: it is equipped with eight A100 GPUs using third-generation NVLink and two AMD Milan CPUs, and supports both liquid and air cooling. The NF5688M6 is an AI server optimized for large data centers with extreme scalability; it supports eight A100 GPUs, two Intel Ice Lake CPUs and up to 13 PCIe Gen4 I/O expansion cards.
After more than three years of development, the MLPerf benchmark has established itself as a mature standard for evaluating the performance of AI computing platforms in real-world scenarios.
The benchmark provides a transparent, repeatable and effective way to directly compare AI performance across realistic scenarios. This makes it the most widely used AI test suite for training and inference solutions, and AI users rely on it to evaluate the various AI platforms and make their selection.
Inspur performed extremely well in the most recent MLPerf Training round, impressively demonstrating the company's AI expertise. But Inspur's leadership is not limited to the reference models required for MLPerf: AI is an integral part of its entire portfolio, which spans solutions for data center infrastructure, cloud computing and open compute architecture.
Find out all about Inspur's AI and technology offerings here.