Cloudera unveils AI Inference Service with embedded NVIDIA NIM microservices

Oct. 8, 2024
Cloudera AI Inference allows enterprises to harness their data to advance GenAI from pilot phases to full production.

Cloudera today launched Cloudera AI Inference powered by NVIDIA NIM microservices, part of the NVIDIA AI Enterprise platform. Cloudera AI Inference streamlines the deployment and management of large-scale AI models, allowing enterprises to harness their data to advance GenAI from pilot phases to full production.

Cloudera AI Inference protects sensitive data from leaking to non-private, vendor-hosted AI model services by keeping development and deployment within enterprise control. Powered by NVIDIA technology, the service helps organizations build trusted AI on trusted data with high performance, enabling the efficient development of AI-driven chatbots, virtual assistants, and agentic applications that improve both productivity and business growth.

Developers can build, customize, and deploy enterprise-grade LLMs with up to 36x faster performance using NVIDIA Tensor Core GPUs and nearly 4x higher throughput compared with CPUs. The service integrates its UI and APIs directly with NVIDIA NIM microservice containers, eliminating the need for command-line interfaces (CLIs) and separate monitoring systems.

The service’s integration with Cloudera’s AI Model Registry also enhances security and governance by managing access controls for both model endpoints and operations. Users benefit from a unified platform where all models—whether LLM deployments or traditional models—are seamlessly managed under a single service.

Additional key features of Cloudera AI Inference include:

  • Advanced AI Capabilities: Utilize NVIDIA NIM microservices to optimize open-source LLMs, including Llama and Mistral, for cutting-edge advancements in natural language processing (NLP), computer vision, and other AI domains.
  • Hybrid Cloud & Privacy: Run workloads on premises or in the cloud, with virtual private cloud (VPC) deployments for enhanced security and regulatory compliance.
  • Scalability & Monitoring: Rely on auto-scaling, high availability (HA), and real-time performance tracking to detect and correct issues, and deliver efficient resource management.
  • Open APIs & CI/CD Integration: Access standards-compliant APIs for model deployment, management, and monitoring for seamless integration with CI/CD pipelines and MLOps workflows.
  • Enterprise Security: Enforce model access with Service Accounts, Access Control, Lineage, and Auditing features.
  • Risk-Managed Deployment: Conduct A/B testing and canary rollouts for controlled model updates.
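The "Open APIs" feature above refers to standards-compliant model endpoints; NVIDIA NIM microservices expose an OpenAI-compatible chat completions API. As a minimal sketch of how a client or CI/CD job might call such an endpoint, the snippet below uses only the Python standard library. The endpoint URL, model name, and bearer token are hypothetical placeholders, not documented Cloudera values:

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-compatible /v1/chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def post_chat(endpoint: str, token: str, payload: dict) -> dict:
    """POST the payload to a model endpoint and return the parsed JSON reply.

    The endpoint and token are placeholders for values issued by the
    serving platform (e.g. a service-account token).
    """
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Build the request body; the model name here is illustrative.
    payload = build_chat_request("meta/llama-3.1-8b-instruct",
                                 "Summarize last quarter's support tickets.")
    print(json.dumps(payload, indent=2))
    # To send it against a live deployment:
    # post_chat("https://<your-endpoint>/v1/chat/completions", "<token>", payload)
```

Because the request format is OpenAI-compatible, the same payload works with standard SDKs and CI/CD smoke tests without vendor-specific client code.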

“Enterprises are eager to invest in GenAI, but it requires not only scalable data but also secure, compliant, and well-governed data,” said industry analyst Sanjeev Mohan. “Productionizing AI at scale privately introduces complexity that DIY approaches struggle to address. Cloudera AI Inference bridges this gap by integrating advanced data management with NVIDIA’s AI expertise, unlocking data's full potential while safeguarding it. With enterprise-grade security features like service accounts, access control, and audit, organizations can confidently protect their data and run workloads on prem or in the cloud, deploying AI models efficiently with the necessary flexibility and governance.”

“We are excited to collaborate with NVIDIA to bring Cloudera AI Inference to market, providing a single AI/ML platform that supports nearly all models and use cases so enterprises can both create powerful AI apps with our software and then run those performant AI apps in Cloudera as well,” said Dipto Chakravarty, Chief Product Officer at Cloudera. “With the integration of NVIDIA AI, which facilitates smarter decision-making through advanced performance, Cloudera is innovating on behalf of its customers by building trusted AI apps with trusted data at scale.”

“Enterprises today need to seamlessly integrate generative AI with their existing data infrastructure to drive business outcomes,” said Kari Briski, vice president of AI software, models and services at NVIDIA. “By incorporating NVIDIA NIM microservices into Cloudera's AI Inference platform, we're empowering developers to easily create trustworthy generative AI applications while fostering a self-sustaining AI data flywheel.”

These new capabilities will be unveiled at Cloudera's premier AI and data conference, Cloudera EVOLVE NY, taking place October 10.