Event Recap: The Enterprise Compute Advantage: Enabling Agentic AI

Moderator & Panel

  • Menaka Sundar (Hewlett Packard Enterprise), Vice President Enterprise Architecture
  • Sagar Baliyara (NVIDIA), Global Account Manager - AI Software and Solutions
  • Fotis Stamatelopoulos (Upwork), Senior Director of Engineering, Head of CX and Trust Engineering
  • Jianpeng Mo (TikTok), Director of Engineering, TikTok Privacy
  • Ramprasad Rai (JPMorgan Chase & Co.), Vice President, Platform Engineering
  • Vijay Narayanan (U.S. Bank), Engineering Leader - VP

Executive Summary

The discussion centered on how leading enterprises are operationalizing AI, from governance and risk frameworks to developer productivity and infrastructure modernization. Executives emphasized that AI adoption is not purely a technical challenge but an organizational one, requiring a balance between innovation speed, regulatory compliance, and verifiable outcomes. Participants from highly regulated or high-scrutiny sectors such as financial services and social platforms underscored the need for tight data governance, internal validation environments, and human-in-the-loop processes to ensure AI reliability. Meanwhile, digital-native firms and technology providers described accelerating deployments of agentic AI, workflow automation, and private LLMs to improve efficiency and customer experience.

At a strategic level, participants agreed that the future of AI lies in hybrid and sovereign architectures—where organizations retain control over sensitive data while still leveraging the scale of public-cloud innovation. Success will depend on disciplined governance, model verification, and contextual fine-tuning rather than chasing frontier models. As enterprises standardize AI into daily operations, the next frontier is enabling workforce readiness—embedding training, guardrails, and verification directly into developer and data-engineering environments.

Key Themes

  • Governance as the foundation for scalable AI.
    Risk-averse industries are building layered governance models that evaluate AI use cases by cost-of-error and data sensitivity. Human-in-the-loop processes remain essential for actions that impact customers or regulated data.
  • Hybrid and sovereign architectures rising.
    Organizations are shifting from pure public-cloud reliance toward private or hybrid deployments. Fine-tuning small and open-weight models within secure VPCs is becoming a preferred path to maintain compliance and performance.
  • Agentic AI and contextual automation.
    Enterprises are moving from basic summarization and chatbots to multi-agent workflows that perform specialized, context-aware tasks such as fraud detection, customer-service triage, and policy compliance.
  • Developer productivity and AI-assisted engineering.
    Teams report measurable but variable productivity gains from AI coding assistants. The value depends on grounding outputs in internal systems and maintaining rigorous regression testing and verification pipelines.
  • Infrastructure modernization and ROI pressure.
    As innovation cycles shorten, leaders are re-evaluating infrastructure ROI—balancing rapid hardware advancement with compliance timelines. Energy efficiency, cost per watt, and continuous benchmarking are emerging priorities.

Actionable Takeaways for Enterprise Leaders

  • Embed AI governance early.
    Establish risk gradients for AI initiatives—defining which projects can be automated, which require human oversight, and which demand regulatory review before launch.
  • Adopt hybrid AI architectures.
    Combine public-cloud flexibility with private-data controls. Evaluate sovereign AI options or open-weight models fine-tuned on internal datasets to protect sensitive information.
  • Operationalize verification and reproducibility.
    Develop frameworks to test LLM reliability (e.g., RAG pipelines, regression validation) and integrate model-performance monitoring into CI/CD workflows; a minimal sketch follows this list.
  • Integrate AI into the developer workflow.
    Deploy internal copilots and compliance-aware coding assistants that surface real-time feedback on security and governance. Measure productivity by cycle-time reduction, not abstract percentages.
  • Invest in continuous upskilling.
    Redefine workforce enablement—move beyond formal training to embedded learning systems that make AI a daily tutor across engineering, operations, and compliance functions.
  • Prioritize ROI through use-case discipline.
    Focus AI investment where value can be proven—such as developer efficiency, risk mitigation, or customer-service deflection—before expanding to higher-risk domains.
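
To make the verification takeaway concrete, here is a minimal sketch of the kind of regression check that could gate model or prompt changes in CI. It is an illustrative assumption rather than anything the panel prescribed: the call_model stub, the golden REGRESSION_CASES, and the 0.8 lexical-similarity threshold are placeholders, and a real pipeline would likely substitute embedding-based scoring or a dedicated evaluation harness.

```python
# Minimal sketch, not a panelist's implementation: a regression check that
# gates LLM/RAG changes in CI. call_model, REGRESSION_CASES, and the 0.8
# threshold are illustrative assumptions.

from difflib import SequenceMatcher

# Golden prompt/answer pairs captured from previously approved output.
REGRESSION_CASES = [
    ("Summarize the refund policy in one sentence.",
     "Customers may request a full refund within 30 days of purchase."),
]


def call_model(prompt: str) -> str:
    """Placeholder for the real model or RAG pipeline under test."""
    raise NotImplementedError("Wire this to your inference endpoint.")


def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity; swap in embedding distance or an eval harness."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def run_regression(threshold: float = 0.8) -> bool:
    """Return False (failing the CI job) when any answer drifts below threshold."""
    passed = True
    for prompt, expected in REGRESSION_CASES:
        answer = call_model(prompt)
        score = similarity(answer, expected)
        if score < threshold:
            print(f"DRIFT: {prompt!r} scored {score:.2f}")
            passed = False
    return passed


if __name__ == "__main__":
    raise SystemExit(0 if run_regression() else 1)
```

A CI job that runs a script like this would fail the build whenever an answer drifts below the chosen threshold, turning model-performance monitoring into an ordinary pipeline gate rather than a manual review step.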

Sponsors

Unlock your boldest ambitions with HPE, your essential partner for the AI era. HPE uses the power of AI, cloud, and networking to help you move faster, work smarter, and achieve more. With deep expertise and bold ingenuity, we empower organizations to turn data into foresight, elevate performance, and drive real-world impact—at scale. Rooted in decades of innovation, we focus on helping companies adapt, grow, lead, and challenge the limits of what’s possible. www.hpe.com

NVIDIA Corporation is a market leader in visual computing technology dedicated to creating products that enhance the interactive experience on consumer and professional computing platforms. Its graphics and communications processors have broad market reach and are incorporated into a wide variety of computing platforms, including consumer PCs, enterprise PCs, notebook PCs, professional workstations, handhelds, and video game consoles. NVIDIA is headquartered in Santa Clara, California and employs more than 1,900 people worldwide. For more information, visit the Company’s Web site at www.nvidia.com.