Virtualization in the Age of AI: Building a Flexible Hybrid Cloud Foundation

Chicago, IL | Gibsons Italia | March 31, 2026

 

Moderator & Panel

Clayton Williams

HPE

GreenLake Sales Leader


Lohith Deshpande

Elevance Health

Director Data Science & AI


Athar Waqas

Electronic Arts (EA)

Director Enterprise Architecture


Speaker

Troy Stevens

HPE

Morpheus Sales Specialist


Executive Summary

Enterprises are accelerating efforts to operationalize AI, but progress is uneven due to gaps in data readiness, infrastructure strategy, and organizational alignment. While access to models and tooling has improved significantly, the limiting factor has shifted to how effectively organizations can integrate AI into core business processes. Leaders are recognizing that successful AI adoption requires disciplined execution across data, governance, and workflow integration rather than isolated experimentation.

A key inflection point is the transition from pilot initiatives to scalable production environments. Many organizations are reassessing early approaches that prioritized speed over structure and are now implementing more rigorous frameworks for defining use cases, measuring outcomes, and managing risk. This shift is driving closer collaboration between business and technical teams, with increased focus on aligning AI initiatives to tangible operational and financial impact.

At the same time, infrastructure and deployment strategies are evolving. Enterprises are moving away from one-size-fits-all cloud approaches and toward more deliberate hybrid models that balance performance, cost, and data sensitivity. The combination of foundational AI models with domain-specific solutions is emerging as a practical approach to delivering business value, reinforcing the importance of tailoring AI systems to specific enterprise contexts.

Key Themes

  • Operationalizing AI requires direct business alignment.
    AI initiatives deliver value only when tied to specific business outcomes and embedded into existing workflows, rather than treated as standalone capabilities.
  • Data readiness remains the primary constraint.
    Inconsistent data quality, weak governance, and fragmented data environments continue to limit the effectiveness and scalability of AI deployments.
  • Shift from experimentation to execution discipline.
    Organizations are moving beyond proof-of-concept efforts toward structured approaches that prioritize KPIs, accountability, and repeatable deployment models.
  • Hybrid infrastructure is a strategic necessity.
    Enterprises are balancing cloud and on-premises environments to manage cost, latency, and regulatory requirements, particularly for data-intensive AI workloads.
  • Model strategy is becoming more modular.
    Combining large foundational models with smaller, specialized models enables more targeted, efficient, and adaptable solutions across business functions.

Actionable Takeaways for Enterprise Leaders

  • Prioritize use cases with clear business impact.
    Focus on initiatives that directly improve revenue, efficiency, or risk management rather than broad, exploratory AI efforts.
  • Establish measurable success criteria upfront.
    Define KPIs before deployment to ensure alignment between technical execution and business outcomes.
  • Invest in data governance and standardization.
    Improve data quality, consistency, and accessibility to support reliable and scalable AI performance.
  • Adopt a workload-specific infrastructure strategy.
    Determine where AI workloads should run based on cost, performance, and compliance requirements rather than defaulting to a single environment.
  • Integrate AI into existing workflows.
    Embed AI capabilities into current systems and processes to drive adoption and maximize operational value.
  • Develop a layered model approach.
    Use foundational models for broad capabilities and augment them with specialized models tailored to specific use cases.
  • Implement governance frameworks early.
    Address security, compliance, and risk management as part of the development lifecycle to avoid delays in scaling.
  • Align cross-functional teams around execution.
    Ensure business, data, and engineering teams operate with shared objectives and clear accountability for outcomes.
  • Continuously evaluate cost and performance trade-offs.
    Monitor infrastructure and model usage to optimize efficiency as AI workloads scale.

Sponsor

Unlock your organization’s next phase of innovation with HPE GreenLake, the edge-to-cloud platform designed for the AI era. HPE GreenLake brings cloud agility to applications and data wherever they live, combining scalable infrastructure, built-in security, and intelligent operations. With deep expertise across AI, cloud, and networking, HPE helps enterprises turn data into insight, improve performance, and operate with greater speed and control. Backed by decades of innovation, HPE GreenLake enables organizations to modernize, scale, and lead with confidence. www.hpe.com/greenlake