At the SmartDev conference on May 20, 2021, Sber announced expanded capabilities for SberCloud ML Space, an integrated cloud platform for the development and deployment of AI services.
It provides tools and resources for developing, training, and deploying machine learning models, from quick connection to data sources to automated model deployment in the SberCloud infrastructure.
ML Space is a cloud-based service that enables distributed training on the Intel® Xeon® Scalable processor family with integrated AI accelerators. Its architecture is built on SberCloud's own Christofari supercomputer, whose combined capacity amounts to 6.7 petaflops and which is ranked 40th in the TOP500 list of the world's most powerful systems.
The expansion became possible through the use of oneAPI, an open, standards-based, cross-architecture programming model that lets developers leverage the performance and unique capabilities of different architectures without rewriting code for each hardware platform. In other words, they are free to choose the best hardware for a given task. oneAPI supports well-known programming languages (such as C, C++, Fortran, and Python) and common standards (such as MPI and OpenMP), ensuring interoperability and compatibility with existing code.
The SberCloud ML Space cloud platform was created, on the one hand, to provide data specialists with the best tools for solving machine learning problems and, on the other, to simplify and democratize as much as possible the development and use of AI-based products. The new Intel oneAPI Toolkits fit perfectly into the ML Space ideology. Using this productive, flexible, and cost-effective processor architecture, data scientists and ML developers can accelerate the creation and deployment of their AI products and improve their performance. I am sure that all relevant professionals will appreciate the new capabilities of our cloud platform.
CTO of Sberbank Group, Executive Vice President and Head of the Technology Block, Sberbank
ML Space brings together popular big data tools such as Jupyter Notebook and JupyterLab, and it will now also integrate tools for higher productivity: Intel oneAPI Toolkits. The platform is built on a modular architecture that lets users add new features on their own. Within a year of this announcement, anyone will be able to register and obtain test access to the SberCloud ML Space platform, Intel® oneAPI Toolkits, and servers based on Intel® processors.
As the world moves into the XPU era, in which a mix of architectures is used, developers need new levels of abstraction to harness the performance of the underlying hardware. We are thrilled that Intel's oneAPI Toolkits will enable users of the SberCloud AI platform to create efficient software for various architectures from a single codebase.
Senior Vice President, Chief Architect and General Manager of Architecture, Graphics and Software at Intel Corporation
Intel oneAPI Toolkits help developers efficiently build, analyze, and optimize high-performance cross-architecture applications for a variety of XPUs: Intel CPUs, GPUs, and FPGAs.
These toolkits include the cross-architecture programming language oneAPI Data Parallel C++ (DPC++) and over 40 software products: compilers, libraries, and tools for porting, analysis, and debugging that simplify the development of data processing applications.
One of the key elements of the ML Space cloud platform, the Environments module, will receive the following Intel oneAPI toolkits:
• Intel oneAPI Base Toolkit - a core set of tools and libraries for developing high-performance, cross-architecture applications;
• Intel oneAPI AI Analytics Toolkit - provides data scientists, AI developers, and researchers with familiar, convenient tools to accelerate data processing and analysis on Intel CPUs and GPUs;
• Intel oneAPI HPC Toolkit - enables the development and optimization of high-performance applications based on Fortran, OpenMP, and MPI that scale on the latest Intel processor-based systems and clusters. Together with the Base Toolkit, it contains all the tools needed to develop high-performance applications for scientific and engineering problems on shared- or distributed-memory systems;
• Intel Distribution of OpenVINO Toolkit - helps optimize, tune, and run deep learning inference using the Deep Learning Model Optimizer together with its runtime and development tools.