
Event Recap: Virtualization in the Age of AI: Building a Flexible Hybrid Cloud Foundation

Dallas, TX | Del Frisco’s | March 25, 2026

 

Moderator & Panel

Paul Squyres

HPE

GreenLake Sales Director



Ananth Hegde

JPMorgan Chase & Co.

Head of Data Engineering



Saad Khan

Lead Solution Architect, Investment Banking | Senior IEEE Member | ex-VP, JPMorgan



Hari Kishan

Manulife

Director of Cloud Engineering



Venu Vidyashankar

Heartland Payment Systems

Leader - Enterprise Data Architecture



Speaker

Hunter Nordyke

HPE

Hybrid Cloud Enterprise Architect



Executive Summary

Enterprise IT leaders are navigating a structural shift in virtualization strategy driven by rising costs, vendor consolidation, and the growing demands of AI workloads. The traditional model of a single, dominant virtualization platform is breaking down, forcing organizations to reassess long-term dependencies and adopt more flexible, heterogeneous environments. While virtualization remains foundational, it is no longer sufficient on its own to support emerging workloads, particularly those driven by AI, which introduce new requirements around data locality, latency, and infrastructure design.

At the same time, organizations are balancing modernization with operational risk. Large enterprises with legacy systems are prioritizing incremental transformation, leveraging hybrid architectures that combine on-premises, cloud, and edge environments. This approach enables continuity while allowing teams to experiment with new platforms, AI capabilities, and cost optimization strategies. However, complexity is increasing as organizations manage multiple environments, governance models, and tooling layers simultaneously.

A clear trend is emerging toward platform diversification, cost awareness, and workload-specific architecture decisions. Enterprises are moving away from one-size-fits-all infrastructure strategies and instead aligning infrastructure choices to workload requirements, regulatory constraints, and financial outcomes. AI is accelerating this shift, exposing gaps in existing architectures and forcing organizations to rethink how and where workloads are deployed.

Key Themes


  • The virtualization reset and vendor reassessment.
    Rising costs and licensing changes are forcing organizations to reevaluate long-standing dependencies on single virtualization vendors, accelerating interest in alternative platforms and more flexible hybrid strategies.
  • Heterogeneous environments as the new standard.
    Enterprises are operating across legacy virtualization, containers, cloud services, and bare metal simultaneously, increasing complexity in governance, visibility, and day-to-day operations.
  • AI workloads redefining infrastructure requirements.
    AI introduces fundamentally different demands, including high data throughput, GPU dependency, and low-latency processing, requiring architectures that extend beyond traditional virtualization models.
  • Hybrid cloud as a practical operating model.
    Organizations are combining public cloud, private infrastructure, and edge deployments to balance performance, cost, and regulatory requirements, rather than pursuing full cloud migration.
  • Cost and FinOps becoming strategic capabilities.
    As AI and cloud usage expand, enterprises are formalizing FinOps practices to manage spend, optimize resource allocation, and evaluate infrastructure trade-offs with greater precision.

Actionable Takeaways for Enterprise Leaders


  • Audit and reassess virtualization dependencies.
    Evaluate licensing exposure, platform utilization, and feature adoption to identify opportunities to reduce cost and limit vendor lock-in.
  • Design for a multi-platform future.
    Build architectures that support interoperability across virtualization, containers, cloud, and bare metal to avoid rigid infrastructure decisions.
  • Align infrastructure decisions to workload requirements.
    Place workloads based on latency, data sensitivity, and performance needs rather than defaulting to cloud-first or on-prem-first strategies.
  • Introduce centralized governance across environments.
    Implement unified visibility, access control, and reporting layers to manage increasingly fragmented infrastructure landscapes.
  • Prioritize data locality and security for AI workloads.
    Keep sensitive data close to where it is generated and processed, minimizing unnecessary movement that increases cost and compliance risk.
  • Adopt FinOps early for AI and cloud initiatives.
    Establish cost monitoring, usage controls, and accountability frameworks before scaling workloads to prevent uncontrolled spend.
  • Start with targeted, high-impact use cases.
    Focus on AI applications that deliver measurable business value quickly, then scale based on proven outcomes.
  • Plan for latency-sensitive architectures.
    For real-time and customer-facing applications, invest in edge or on-prem solutions that meet strict performance requirements.
  • Leverage proven platforms to accelerate modernization.
    Where internal capabilities are limited, adopt established tools and infrastructure to reduce time-to-value and execution risk.

Event Photos

Sponsors

Unlock your organization’s next phase of innovation with HPE GreenLake, the edge-to-cloud platform designed for the AI era. HPE GreenLake brings cloud agility to applications and data wherever they live, combining scalable infrastructure, built-in security, and intelligent operations. With deep expertise across AI, cloud, and networking, HPE helps enterprises turn data into insight, improve performance, and operate with greater speed and control. Backed by decades of innovation, HPE GreenLake enables organizations to modernize, scale, and lead with confidence. www.hpe.com/greenlake