
[{"content":" About Me Latest Writing # Deploying a Supervisor in VCF 9: What Actually Matters A practical field-focused walkthrough of deploying a Supervisor in VMware Cloud Foundation 9, with emphasis on networking design, VPC versus NSX Classic, Tier-0 architecture, routing, VKS operations, and preparing the platform for namespaces and Supervisor Services. Deploying and Using the VMware Health and Security Toolkit (HST) A complete walkthrough of deploying the VMware Health and Security Toolkit appliance and performing infrastructure health and security assessments. Understanding Cost Modeling in VMware Cloud Foundation Operations A practical walkthrough of how VMware Cloud Foundation Operations models infrastructure cost, distributes that cost across workloads, and extends the model with software licensing. Certifications # All certifications are publicly verifiable via Credly ","date":"March 21, 2026","externalUrl":null,"permalink":"/","section":"","summary":" About Me Latest Writing # Deploying a Supervisor in VCF 9: What Actually Matters A practical field-focused walkthrough of deploying a Supervisor in VMware Cloud Foundation 9, with emphasis on networking design, VPC versus NSX Classic, Tier-0 architecture, routing, VKS operations, and preparing the platform for namespaces and Supervisor Services. Deploying and Using the VMware Health and Security Toolkit (HST) A complete walkthrough of deploying the VMware Health and Security Toolkit appliance and performing infrastructure health and security assessments. Understanding Cost Modeling in VMware Cloud Foundation Operations A practical walkthrough of how VMware Cloud Foundation Operations models infrastructure cost, distributes that cost across workloads, and extends the model with software licensing. 
Certifications # All certifications are publicly verifiable via Credly ","title":"","type":"page"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"Deploying a Supervisor in VMware Cloud Foundation 9 is often presented as a simple workflow inside the vSphere Client. In reality, the deployment wizard is the easy part. What actually determines success is the design work that happens before the wizard is ever opened.\nA Supervisor can deploy successfully and still fail to deliver a usable platform if the surrounding networking, routing, load balancing, storage, and automation decisions were not made intentionally. That becomes even more important when the goal is not just enabling Kubernetes for demonstration purposes, but building a real platform that can support namespaces, services, VMware vSphere Kubernetes Service, and modern consumption models in VCF Automation.\nThis article walks through what actually matters when deploying a Supervisor in VCF 9. Rather than focusing only on the wizard itself, the goal here is to explain the design decisions behind it, especially around networking, VPC versus NSX Classic, Tier-0 architecture, routing, and how those decisions affect where the platform can go afterward.\nAll screenshots used in this article have been sanitized to remove environment specific identifiers while preserving functionality and workflow.\nStarting Point: What the Supervisor Actually Is # At a high level, the Supervisor is what brings Kubernetes directly into vSphere. 
It introduces a Kubernetes control plane into the environment and allows administrators to create vSphere Namespaces that can provide compute, storage, and networking resources for Kubernetes workloads.\nThat makes the Supervisor the foundation for several important capabilities, including:\nvSphere Namespaces vSphere Pods VMware vSphere Kubernetes Service, or VKS Supervisor Services developer and tenant consumption through VCF Automation Before deployment begins, the vSphere Client already gives a strong indication that this is not just a feature toggle.\nThe Supervisor Management landing page highlights several prerequisite areas that must already be in place before deployment.\nContent Library # The Supervisor requires a content library containing the image artifacts used for Supervisor lifecycle operations. This is how the control plane components and related resources are staged and managed over time.\nNetwork Support # The platform explicitly calls out supported networking models. That reinforces a very important point: the networking model is not something to decide casually while clicking through the wizard. It must be selected as part of the platform design.\nHA and DRS # The target cluster must have vSphere HA enabled and DRS configured appropriately. The Supervisor is not simply deploying a single appliance. It is standing up a control plane that depends on proper cluster behavior for resiliency and placement.\nStorage Policy # Storage policies determine where the Supervisor control plane VMs, image cache, and related data land. Even though this often gets less attention than networking, it still affects placement and lifecycle behavior.\nvSphere Zones # The Supervisor can run in either a single-cluster model or in a zonal design. This influences availability design and how workloads will be spread later.\nLoad Balancer # A load balancer is a foundational part of the platform because ingress into Kubernetes workloads has to terminate somewhere. 
This is not optional if the goal is to expose services in a practical way.\nWhy This Page Matters # This page is easy to skim past, but it actually says a lot about what the Supervisor really is.\nThe Supervisor is not just enabling Kubernetes.\nIt is validating that your environment is ready to behave like a platform.\nThat is an important distinction, because many deployments succeed technically but still fall short operationally. The reason is usually that the prerequisites were treated like items in a checklist rather than part of an intentional design.\nWhat Actually Makes Up a Supervisor # It is also important to understand that the Supervisor is not just a UI workflow. It is made up of real infrastructure components working together:\nSupervisor control plane virtual machines NSX Manager when using NSX-backed networking a load balancer, typically Avi or another supported option depending on the model Those components work together to:\nprovide Kubernetes API access enable namespace-based consumption expose services and ingress paths integrate the platform with vSphere networking, storage, and authentication That is why deploying a Supervisor should be treated as infrastructure enablement, not just a configuration exercise.\nThe Most Important Decision: Choosing the Networking Model # Out of all the design choices involved in deploying the Supervisor, the networking model is the one that shapes the future of the platform the most.\nWhen activating the Supervisor, the main choices are:\nVCF Networking with VPC NSX Classic vSphere Distributed Switch, or VDS At first glance, this looks like a simple deployment setting. 
In reality, it is a platform strategy decision.\nThe model you choose affects:\nhow workloads are connected how namespaces consume networking whether modern application consumption patterns are supported how far the environment can go with VCF Automation what kinds of services can be delivered to developers later VCF Networking with VPC # The VPC-based model is the one most closely aligned to where VCF 9 is headed. It is designed around a more modern networking and platform consumption model.\nUsing VPC introduces constructs such as:\nprojects transit gateways VPC gateways external IP blocks private VPC subnets private transit gateway subnets connectivity profiles This is the networking model that best aligns with All Apps in VCF Automation.\nThat matters because All Apps is designed around delivering more than just traditional virtual machines. It is intended to support a broader platform model that includes modern application patterns, Kubernetes-backed consumption, and service-driven workflows.\nIn other words, if the long term goal is to build a platform for modern application delivery, VPC is not just another wizard option. It is the model that keeps that door open.\nNSX Classic # NSX Classic is still valid and still useful in many environments.\nIt aligns well with:\ntraditional routed networking direct IP reachability operational familiarity simpler mental models for some infrastructure teams There are environments where NSX Classic may still be the better fit, especially where operational constraints or policy requirements make a more traditional design desirable.\nHowever, there is an important tradeoff that needs to be clearly understood.\nIf you choose NSX Classic, you are removing yourself from the All Apps path and aligning more naturally to VM Apps.\nThat does not mean NSX Classic is wrong. 
It means the choice should be intentional.\nIf the goal is to stay closer to a mature VM-centric automation model, NSX Classic may still be perfectly appropriate.\nIf the goal is to move toward the broader All Apps direction in VCF Automation, then NSX Classic becomes a limiting choice.\nvSphere Distributed Switch, or VDS # The VDS model is the exception to most of the NSX discussion in this article.\nIf you choose the vSphere Distributed Switch model:\nNSX is not required for the Supervisor networking stack networking is handled through traditional vSphere constructs external routing and load balancing must be handled outside of NSX This can reduce complexity in some cases, but it comes with tradeoffs. It is not the path that aligns best with the richer NSX-backed platform and automation capabilities discussed throughout this article.\nSo while the statement that NSX must be designed first is true for NSX Classic and VPC, it is not universally true if the Supervisor is being deployed with VDS.\nThat is an important nuance.\nWhy This Choice Matters for VCF Automation # The networking model directly affects what kind of consumption model the platform can realistically support later.\nA helpful companion read on this topic is Allan Kjær’s article on the difference between All Apps and VM Apps:\nVCF Automation: Understanding the Difference Between All-Apps and VM-Apps\nAt a practical level, the distinction looks like this:\nVM Apps # VM Apps is centered on traditional VM-based delivery. It is mature, stable, and well suited for environments focused on virtual machines and classic enterprise workloads.\nAll Apps # All Apps is more application-centric and more forward-looking. 
It supports a broader set of workload types, including Kubernetes-backed platforms and modern application consumption patterns.\nThis is why the networking choice is so important.\nIf you do not choose VPC, you are removing the environment from the path that best supports All Apps.\nThat is not just a networking issue.\nIt is a platform capability issue.\nVCF Context: Management Domain vs Workload Domain # Before diving further into NSX and Supervisor deployment, it is important to clarify where this work is actually happening within a VCF environment.\nThis article assumes that a Workload Domain has already been created.\nIn VMware Cloud Foundation:\nThe Management Domain hosts core infrastructure components such as:\nvCenter Server NSX Manager SDDC Manager Workload Domains are where:\nNSX networking is consumed Tier-0 gateways are deployed for north-south traffic Supervisor clusters are enabled VKS workloads are run Because of this separation:\nYou are not typically deploying your Tier-0 gateway or Supervisor cluster in the Management Domain.\nInstead, those components are deployed within a Workload Domain that is designed to host application and platform workloads.\nThis distinction is important, because it reinforces that the Supervisor is part of the consumption layer, not the core management infrastructure.\nNSX Must Be Designed First for NSX-Based Supervisor Deployments # Before enabling the Supervisor, NSX must already be operational if the environment is using NSX-based networking, meaning either NSX Classic or VPC.\nThat foundation includes:\nEdge Nodes deployed Tier-0 Gateway configured north-south routing validated load balancing configured upstream connectivity confirmed The Supervisor depends heavily on this foundation.\nIf those pieces are not correct, the deployment may still complete, but the platform will not behave the way the team expects afterward.\nWhy This Matters in Practice # This aligns directly with real-world architecture patterns.\nWhen 
the Supervisor depends on NSX-backed networking, it also depends on:\nNSX Manager for control plane networking and policy Edge nodes for north-south traffic handling a functional load balancing path for workload exposure Without these components in place, the Supervisor may deploy successfully, but:\ningress will fail services will not be reachable namespaces will not behave the way consumers expect troubleshooting becomes reactive instead of intentional Reference Architecture: Centralized Connectivity with BGP # A strong supporting reference for the networking foundation is my colleague Sargon Khizeran’s article:\nConfiguring Centralized Connectivity Networking with BGP in VCF 9.0\nThat write-up is useful because it shows the plumbing that needs to exist before the Supervisor ever becomes part of the conversation, especially in environments where NSX-based connectivity and dynamic routing are being used.\nIt is a strong example of:\nEdge deployment Centralized Tier-0 design North-south connectivity preparation BGP as the routing model between NSX and the physical network That article is especially helpful for understanding the networking prerequisites that underpin a successful Supervisor deployment.\nTier-0 Gateway Design Matters More Than People Think # A major part of getting this right is the Tier-0 gateway design.\nIn the deployment shown here, the Tier-0 gateway is configured with Active/Standby HA mode.\nThat is not a random preference. 
It is a design choice with real downstream impact.\nWhy Active/Standby Matters for VKS # If the environment is intended to support Kubernetes workloads through VKS, I strongly recommend using Active/Standby instead of Active/Active for the Tier-0 path supporting those workloads.\nThe reason is predictability.\nActive/Standby provides:\ndeterministic traffic flow simpler failover behavior a cleaner model for ingress and north-south services a better fit for stateful and service-driven traffic patterns Active/Active can work, but it introduces additional complexity that is unnecessary for many VKS-oriented designs.\nIf the Tier-0 will be used for VKS and Kubernetes-backed services, Active/Standby is the safer and more intentional choice.\nRouting: Static Routes Can Work, But BGP Is Still the Better Long Term Answer # In the example environment shown here, a static route was configured at the Tier-0.\nThis design uses a default route and next hop aligned to the Tier-0 uplinks, which is valid and can be perfectly acceptable in a tightly controlled environment.\nStatic routing has a few benefits:\nsimplicity explicit route behavior straightforward troubleshooting in smaller designs However, if the environment is intended to grow, or if operational efficiency and resiliency matter, BGP is still my recommendation.\nWhy I Prefer BGP # BGP provides:\ndynamic route advertisement cleaner failover behavior less manual overhead as the environment changes better alignment with scalable north-south routing Static routes can absolutely get the job done. 
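As a small sanity check on the static-route pattern above, here is a sketch using Python's ipaddress module. It verifies that a default route's next hop actually sits on the Tier-0 uplink subnet, which is the condition that makes a static default route usable in the first place. All addresses are hypothetical, not taken from the deployment in this article.

```python
import ipaddress

# Hypothetical Tier-0 uplink subnet and static default route.
uplink_subnet = ipaddress.ip_network("10.20.0.0/24")
default_route = ipaddress.ip_network("0.0.0.0/0")
next_hop = ipaddress.ip_address("10.20.0.1")

# A default static route covers all destinations.
assert default_route.prefixlen == 0

# The next hop must be directly reachable on the Tier-0 uplink segment;
# otherwise the route installs but north-south traffic has nowhere to go.
assert next_hop in uplink_subnet

print(f"0.0.0.0/0 via {next_hop} (on {uplink_subnet})")
```

A check like this is trivial, but it captures the one thing a static design depends on: the next hop and the uplink addressing staying aligned as the environment changes, which is exactly the maintenance burden BGP removes.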
But once namespaces, ingress, egress, and application exposure become more dynamic, BGP is usually the better operational choice.\nThat is one reason Sargon’s BGP-focused reference is such a good complement to this discussion.\nVPC Changes the Design Conversation # If the environment is using VCF Networking with VPC, the design discussion expands beyond just Edge nodes and a Tier-0.\nThat is because VPC introduces additional objects and relationships that must already make sense before the Supervisor can consume them properly.\nAt a high level, that means understanding:\nexternal connectivity transit gateway design external IP blocks private IP blocks private transit gateway ranges VPC connectivity profile behavior One of the important concepts in the VCF 9 and NSX VPC model is that the connectivity profile defines how VPCs consume outside connectivity. It effectively ties together:\nthe northbound path the IP blocks available for consumption the transit gateway relationship certain outbound behavior choices, including NAT-related handling where applicable So when a team says that they are using VPC, that is not enough by itself.\nThe more important question is whether the VPC plumbing has actually been designed correctly for how namespaces, services, and ingress will behave later.\nNAT Was a Real Architectural Tension # One of the most important discussions in this deployment path was around NAT and routability.\nIn environments with stricter security or policy constraints, NAT may be something teams want to avoid or minimize. 
That often makes NSX Classic more attractive because it aligns more naturally with directly routed designs.\nThe challenge is that the VPC model introduces constructs such as:\npublic and external IP handling private and transit subnet behavior translation-based exposure patterns in some workflows That creates a practical architectural tension:\nif you want the modern VPC model and the path toward All Apps, VPC matters if the environment strongly resists NAT or translation, the design needs to be approached carefully This is exactly why the networking choice is more than a technical checkbox.\nIt shapes the tradeoffs the platform will live with later.\nDefault CNI Behavior Matters Too # Even after the networking model is chosen at the infrastructure layer, networking decisions continue inside the Kubernetes layer.\nBy default, VKS uses Antrea as the Container Network Interface, or CNI.\nThat matters because Antrea provides:\npod-to-pod networking Kubernetes-native network policy enforcement observability and control at the Kubernetes layer This becomes relevant when troubleshooting cluster networking or planning policy behavior, because not everything is handled purely by the NSX infrastructure layer. 
There is also a Kubernetes-native networking layer operating inside the cluster.\nThe Supervisor Deployment Workflow # Once the architectural groundwork is in place, the deployment workflow itself becomes much easier to reason about.\nStep 1: Navigate to Supervisor Management # The first step is not actually inside the deployment wizard yet.\nFrom the vSphere Client, open the navigation menu in the top left, then select:\nSupervisor Management\nThis brings you to the area where Supervisor clusters are deployed and managed.\nThis step may seem simple, but it is worth calling out because it is the entry point into the entire platform workflow.\nWhy This Step Still Matters # Even though this is just navigation, it represents a transition point.\nBy the time you reach this screen, all of the earlier design decisions should already be made:\nnetworking model, whether VPC, NSX Classic, or VDS Tier-0 design and routing approach load balancing strategy IP planning for management and workload networks storage and availability expectations So the real question at this point is not:\nWhich option do I feel like choosing?\nIt is:\nWhich option aligns with the platform I have already designed?\nSupervisor Location: Zones Versus Cluster Deployment # The next major decision is where the Supervisor will run.\nThe Supervisor can be deployed using:\na vSphere Zones model a single-cluster model This matters because it affects availability design.\nA zonal design supports a higher availability posture at the cluster level.\nA single-cluster design is simpler, but availability behavior depends more directly on the cluster and host-level configuration.\nThe screenshot also highlights the control plane high-availability toggle. 
That setting affects how the control plane is deployed and has downstream effects on IP planning and operational expectations.\nStorage Policy Still Matters # Storage policy selection is not usually the most controversial part of the deployment, but it still matters.\nThis step determines storage policy choices for:\ncontrol plane VMs ephemeral disks image cache These settings should align with the storage design of the environment rather than being treated as defaults to accept blindly.\nEphemeral Versus Persistent Storage # It is also worth reinforcing the difference between ephemeral and persistent storage in Kubernetes-based platforms.\nEphemeral storage exists only for the lifetime of the pod or workload and is typically used for transient data such as logs, scratch space, or temporary runtime artifacts. Persistent storage is backed by vSphere storage policies and is intended for stateful applications that must retain data across restarts and rescheduling events. That distinction matters when planning storage classes and application behavior later.\nThe Management Network Is a Real Design Input, Not Just a Form Field # The management network step is one of the most practical and important parts of the deployment.\nA very important detail here is that the Supervisor requires five consecutive IP addresses on the management network.\nThose five addresses are used for:\nthe three control plane VMs one floating IP one reserved IP for upgrade behavior That is why, in the example discussed, a range such as:\n192.168.x.241 - 192.168.x.245 was used intentionally rather than just pulling a random address.\nThis also explains why navigating to one of those IPs later resulted in the expected Supervisor endpoint behavior.\nIn addition to the IP range itself, the management network step also depends on correct values for:\nsubnet mask default gateway DNS servers DNS search domain NTP If those are wrong, the platform may deploy but access and lifecycle behavior quickly 
become problematic.\nThe Workload Network Is Where Reachability Becomes Real # The workload network page is where the design begins to directly influence how workloads and services will actually be consumed.\nThis is where the deployment defines things such as:\nnamespace network CIDRs service CIDRs ingress CIDRs egress CIDRs NAT behavior One particularly important point from this workflow is the NAT mode behavior.\nIf NAT is disabled, the topology becomes more directly routed, and workload IPs can be reachable from external networks if upstream routing is aware of them.\nThat sounds attractive, but it also means:\nupstream routing must be designed properly network teams must understand the namespace and workload CIDRs ingress and egress expectations must be explicit This is why workload networking cannot be treated as a page you simply fill in during the wizard. It depends on the surrounding architecture already being correct.\nAdvanced Settings Are Small in the UI but Significant in Impact # The advanced settings page is easy to treat as a final checkbox step, but it still carries design implications.\nThis step includes things such as:\ncontrol plane sizing API server DNS name behavior configuration export options The control plane size matters because it influences how much Kubernetes capacity the Supervisor can support.\nAgain, this reinforces the broader point: enabling a Supervisor is a platform sizing decision, not just a feature enablement action.\nFinal Review: This Is Where the Design Shows Up # The final review page is where all of the earlier decisions become visible together.\nAt this stage, the deployment either reflects a well-thought-out architecture or it reveals that the wizard was filled out without enough design work behind it.\nThis is where good planning becomes obvious.\nAfter Deployment: The Supervisor Is Running, But the Platform Journey Is Just Beginning # Once deployment is complete, the Supervisor becomes visible in the vSphere Client 
inventory and can be validated from the Supervisor Management view.\nA running Supervisor is important, but it is not the finish line.\nThis is where many teams stop too early.\nA running Supervisor does not automatically equal a developer-ready platform. It simply means the control plane foundation now exists.\nWhat matters next is how that foundation is consumed.\nSupervisor Lifecycle Versus Workload Cluster Lifecycle # One of the most important operational points to understand is that the Supervisor and the workload clusters do not share the exact same lifecycle.\nUpdating the Supervisor does not automatically update the VKS clusters running on top of it.\nThat means:\nthe Supervisor can be upgraded successfully but a workload cluster can still be outdated application issues can still appear if the workload cluster version does not match the application requirements This separation is intentional.\nThe Supervisor is the platform control plane.\nThe VKS clusters are workload consumers with their own lifecycle path.\nOperationally, that means you should always validate workload cluster compatibility and lifecycle state after Supervisor upgrades rather than assuming everything above it moved automatically.\nAccessing the Supervisor Endpoint and Understanding the VCF CLI Page # Once the Supervisor is reachable, navigating to the endpoint presents the VCF Consumption CLI page.\nThis page is significant for a few reasons.\nFirst, it validates that the Supervisor endpoint is actually reachable.\nIt is also important to understand that the IP used to access this page is not just any of the control plane node addresses. 
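The management IP block behind this endpoint can be sketched quickly with Python's ipaddress module. The addresses below are hypothetical, following the 241–245 pattern shown earlier, and the sketch does not assign roles to specific positions, since the platform handles that assignment.

```python
import ipaddress

# Hypothetical management network and the starting address of the five
# consecutive IPs the Supervisor requires (three control plane VMs, one
# floating VIP, one reserved for upgrade behavior).
mgmt_net = ipaddress.ip_network("192.168.10.0/24")
start = ipaddress.ip_address("192.168.10.241")

block = [start + offset for offset in range(5)]

# All five addresses must be consecutive and live on the management subnet.
assert all(ip in mgmt_net for ip in block)
assert len(set(block)) == 5

print([str(ip) for ip in block])
```

Confirming that the whole block fits inside the management subnet and that none of the five addresses is already allocated is cheap insurance before running the wizard.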
The Supervisor is deployed with a range of five IPs, where one of those IPs is assigned as a floating IP (VIP) that represents the control plane.\nThis floating IP serves as the primary access point for:\nthe Kubernetes API the VCF CLI browser-based access to the Supervisor endpoint Because this IP is virtual and not tied to a single control plane node, it can move between nodes to maintain availability. This ensures a consistent and highly available endpoint for interacting with the platform.\nSecond, it makes it clear that the Supervisor is intended for real administrative and consumption workflows rather than being a hidden backend service.\nThe page provides:\nthe VCF CLI download checksum validation guidance extraction steps context creation commands context listing and usage commands That is important because it shows how VMware is positioning the VCF CLI as the primary modern interface for interacting with workloads on the Supervisor.\nThe basic flow shown here includes:\ndownload the CLI package validate the SHA256 checksum extract the package create a context against the Supervisor endpoint list available contexts set the current context This is a strong operational confirmation that the platform is not only running, but also exposing its intended access workflow.\nKubernetes Context Awareness Is Critical # One of the most common operational mistakes in VKS environments is using the wrong Kubernetes context.\nIn a VCF environment, you are often dealing with multiple layers of access, such as:\nthe Supervisor context namespace-related context workload cluster context If the wrong context is used, commands can fail in confusing ways, especially after upgrades or context regeneration.\nThat means it is important to:\nlist contexts intentionally switch contexts explicitly confirm the target before running cluster-level commands This is especially important after:\nSupervisor upgrades workload cluster creation kubeconfig updates switching between multiple 
clusters or environments Understanding context separation makes troubleshooting much more efficient and avoids a lot of unnecessary confusion.\nHow VKS Actually Works Under the Hood # VKS is not just Kubernetes running on a few VMs. It is built on multiple layers of control.\nThose layers include:\nVirtual Machine Service, which manages the lifecycle of the VM-based cluster nodes Cluster API, which provides declarative Kubernetes lifecycle management Cloud Provider integration, which connects Kubernetes behavior to vSphere infrastructure services This layered architecture enables:\ndeclarative cluster creation using YAML automated reconciliation lifecycle management through supported APIs instead of manual VM handling integration with vSphere networking and storage constructs That is one reason VKS should be thought of as a platform service, not just a collection of virtual machines.\nWhat Comes Next: Namespaces # At this point, the Supervisor is running and reachable, but it still is not a complete platform from a developer or tenant perspective.\nThe next major step is to create vSphere Namespaces.\nThat is where the platform starts becoming consumable.\nNamespaces are where you begin defining:\nwho can use the platform how much compute and memory they receive what storage is available what networking they can consume what services are exposed to them This is the point where the Supervisor begins transitioning from an infrastructure capability into an actual platform.\nContinuing Beyond Namespaces: Supervisor Services # After namespaces are in place, the next stage is extending the platform with Supervisor Services.\nA very useful reference for continuing this configuration is the Supervisor Services catalog:\nSupervisor Services Catalog\nThat catalog is valuable because it shows what can come next once the Supervisor and namespaces are already working, including examples such as:\nvSphere Kubernetes Service Local Consumption Interface Harbor Contour ExternalDNS 
Supervisor Management Proxy ArgoCD other platform services This is an important reminder that deploying the Supervisor is not the end of the story.\nIt is the starting point for building the platform that sits on top of it.\nHarbor as a Good Example of Platform Maturity # When Harbor is added as a Supervisor Service, it introduces useful security and operational capabilities such as:\nimage vulnerability scanning image signing and trust-oriented workflows That helps reinforce the broader theme of this article: once the Supervisor is working, the next stage is not just enabling more features. It is maturing the platform so that workloads can be delivered and operated more safely.\nFinal Thoughts # Deploying a Supervisor in VCF 9 is not difficult from a wizard perspective.\nWhat actually matters is everything around it.\nThat includes:\nunderstanding that the Supervisor is a platform dependency stack, not just a feature toggle choosing the right networking model deliberately recognizing that VPC is the path that supports All Apps in VCF Automation understanding that NSX Classic aligns more naturally to VM Apps remembering that VDS is the exception where NSX is not required designing the Tier-0 carefully if NSX is in scope using Active/Standby when the Tier-0 will support VKS workloads making a thoughtful choice between static routing and BGP, while recognizing BGP is often the better long term answer planning management and workload network inputs intentionally understanding the difference between Supervisor lifecycle and workload cluster lifecycle handling Kubernetes contexts carefully during real operations validating not just deployment success, but actual Supervisor endpoint usability continuing beyond the Supervisor into namespaces and services If the design is right, the deployment feels easy.\nIf the design is wrong, the deployment may still finish, but the platform will not deliver what the team expected.\nThat is what actually matters.\n","date":"March 21, 
2026","externalUrl":null,"permalink":"/vcf/deploying-a-supervisor-in-vcf-9-what-actually-matters/","section":"VMware Cloud Foundation","summary":"Deploying a Supervisor in VMware Cloud Foundation 9 is often presented as a simple workflow inside the vSphere Client. In reality, the deployment wizard is the easy part. What actually determines success is the design work that happens before the wizard is ever opened.\nA Supervisor can deploy successfully and still fail to deliver a usable platform if the surrounding networking, routing, load balancing, storage, and automation decisions were not made intentionally. That becomes even more important when the goal is not just enabling Kubernetes for demonstration purposes, but building a real platform that can support namespaces, services, VMware vSphere Kubernetes Service, and modern consumption models in VCF Automation.\n","title":"Deploying a Supervisor in VCF 9: What Actually Matters","type":"vcf"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/kubernetes/","section":"Tags","summary":"","title":"Kubernetes","type":"tags"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/nsx/","section":"Tags","summary":"","title":"NSX","type":"tags"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/supervisor/","section":"Tags","summary":"","title":"Supervisor","type":"tags"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/categories/vcf/","section":"Categories","summary":"","title":"VCF","type":"categories"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/vcf/","section":"Tags","summary":"","title":"VCF","type":"tags"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/vcf-automation/","section":"Tags","summary":"","title":"VCF 
Automation","type":"tags"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/vks/","section":"Tags","summary":"","title":"VKS","type":"tags"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/vmware/","section":"Tags","summary":"","title":"VMware","type":"tags"},{"content":" Field notes and architectural insights from real VMware Cloud Foundation deployments, including modernization efforts from VCF 5.x to VCF 9.\n","date":"March 21, 2026","externalUrl":null,"permalink":"/vcf/","section":"VMware Cloud Foundation","summary":" Field notes and architectural insights from real VMware Cloud Foundation deployments, including modernization efforts from VCF 5.x to VCF 9.\n","title":"VMware Cloud Foundation","type":"vcf"},{"content":"","date":"March 21, 2026","externalUrl":null,"permalink":"/tags/vpc/","section":"Tags","summary":"","title":"VPC","type":"tags"},{"content":"","date":"March 10, 2026","externalUrl":null,"permalink":"/tags/assessment/","section":"Tags","summary":"","title":"Assessment","type":"tags"},{"content":"The VMware Health and Security Toolkit (HST) is a platform provided by Broadcom that helps administrators perform health checks, configuration validation, and security assessments across VMware infrastructure environments.\nThe tool collects configuration data from components such as:\nvCenter Server ESXi Hosts Virtual Machines vSAN NSX SDDC Manager It then analyzes the collected data against security controls and best practices to identify potential risks or configuration issues.\nThis guide walks through the full process of deploying and using the HST appliance, including environment validation and reviewing generated security reports.\nAll screenshots used in this article have been sanitized to remove environment specific identifiers while preserving the functionality of the toolkit. 
Sensitive information such as IP addresses, hostnames, credentials, and infrastructure identifiers has been intentionally blurred.\nDownloading the HST Toolkit # The VMware Health and Security Toolkit can be downloaded from the Broadcom support portal.\nhttps://www.broadcom.com/support/oem/vmware-health-security-toolkit\nBefore downloading the toolkit, you must accept the Broadcom license agreement.\nDeploying the HST Virtual Appliance # The toolkit is delivered as an OVA appliance and deployed through the Deploy OVF Template workflow in the vSphere Client.\nDeploy OVF Template # Upload the downloaded OVA file.\nOnce uploaded, the appliance template appears in the deployment wizard.\nProvide Name and Folder # Specify the virtual machine name and inventory location.\nSelect Compute Resource # Choose the ESXi host or cluster where the appliance will run.\nReview Deployment Details # Verify the OVF template information.\nSelect Storage # Choose the datastore where the appliance disks will reside.\nConfigure Networking # Select the network that will provide connectivity to the environment.\nDNS Requirement # Before powering on the appliance, it is recommended to create a DNS record for the HST virtual machine.\nThe toolkit is accessed through a web interface, and using DNS allows administrators to access the system using a hostname instead of an IP address.\nRequirements:\n• Allocate 1 IP address for the HST appliance\n• Create a DNS A record for the hostname\n• Ensure forward DNS resolution is working\nExample:\nHostname\nhst.domain.local\nIP Address\n10.10.10.50\nOnce the appliance is powered on, you can navigate to the toolkit in a browser using the hostname.\nExample:\nhttps://hst.domain.local\nCustomize Template # Configure credentials and networking parameters.\nComplete Deployment # Review the summary and finish the deployment.\nPowering On the Appliance # Once deployment finishes, power on the appliance.\nInitial Login and License Agreement # After the appliance has
powered on and finished booting, open a web browser and navigate to the hostname that was created in DNS for the HST appliance.\nFor example, using the DNS entry created earlier:\nhttps://hst.domain.local\nAccessing the toolkit using the hostname ensures proper DNS resolution and allows administrators to consistently reach the system without relying on direct IP addresses.\nWhen navigating to the URL for the first time, the Health and Security Toolkit login interface will appear.\nBefore using the platform, the Broadcom license agreement must be accepted. Once accepted, you can proceed with authentication and complete the offline login process.\nOffline Login and Registration # HST supports offline login, which is common in isolated or restricted environments such as federal or air-gapped infrastructure.\nTo obtain the required registration key, navigate to the Broadcom Professional Services Tool Hub.\nhttps://pstoolhub.broadcom.com/#/login\nAfter authentication, request an activation key for the Health and Security Toolkit.
The key will be delivered to your email address.\nCompleting Authentication # Continue through the offline login process.\nOnce authentication is successful, the HST dashboard becomes available.\nCreating an Assessment Project # Projects are used to organize health and security assessments.\nCreate Folder # Configure Project Details # Validating Infrastructure Targets # The toolkit validates connectivity to infrastructure components before the assessment can begin.\nWhen configuring validation settings, the Host field should contain the FQDN of the vCenter Server, not the ESXi host.\nEven though the field is labeled Host, the toolkit connects through vCenter Server APIs to collect configuration data from the environment.\nCorrect example:\nvcenter.domain.local\nIncorrect example:\nesxi01.domain.local\nExample validation for NSX.\nWhen validating the NSX environment, the toolkit requires the NSX Manager Cluster VIP.\nTo locate the NSX VIP:\n1. Log into the NSX Manager UI\n2. Navigate to System\n3. Select Appliances\n4. Locate the Cluster VIP\n5. Copy the value and paste it into the validation field\nValidation for SDDC Manager.\nRunning the Assessment # Submit the project to begin data collection.\nSubmission begins.\nThe toolkit begins collecting infrastructure data.\nWhen finished, the results dashboard becomes available.\nReviewing Assessment Results # Health Analyzer # Administrators can download the Health Check Word report, which contains:\nExecutive Summary\nHealth Check Background\nMajor Findings and Recommendations\nHealth Check Assessment Results\nAppendices and inventory\nSecurity Assessment # The Security Assessment module evaluates infrastructure configurations against security best practices and generates detailed findings across multiple VMware platforms.\nAccess to the Security Assessment module is controlled through Broadcom internal access groups.
Users must be added to the appropriate group before they can request activation keys or use the module.\nFor this reason, the toolkit is most commonly used by Broadcom Professional Services teams, who can obtain the necessary access through the proper internal channels.\nGenerated Reports # Executive Report # Administrative Report # The Administrative Report for the Security Assessment module is downloaded as an HTML file.\nWhen opened, the report loads directly in a web browser and provides a structured, easy-to-navigate view of the assessment findings. This format allows administrators to review results, drill into individual controls, and reference remediation guidance without needing additional software.\nTo demonstrate how these reports render in a browser, the following example administrative reports from a sandbox lab environment are provided below. These reports were generated during testing in July 2025 using HST version 1.0 and are included strictly as non-production examples.\n👉 Download the vCenter Administrative Report (Example Lab Report)\n👉 Download the NSX Administrative Report (Example Lab Report)\n👉 Download the SDDC Manager Administrative Report (Example Lab Report)\nConclusion # The VMware Health and Security Toolkit provides administrators with a powerful method for evaluating the health and security posture of VMware infrastructure environments.\nBy automating configuration validation across vCenter, ESXi, NSX, vSAN, and SDDC Manager, administrators can quickly identify configuration issues and security risks while receiving actionable remediation guidance.\nReferences # Broadcom. (n.d.). VMware Health and Security Toolkit.\nhttps://www.broadcom.com/support/oem/vmware-health-security-toolkit\nBroadcom. (n.d.). Professional Services Tool Hub.\nhttps://pstoolhub.broadcom.com/#/login\nNational Institute of Standards and Technology.
(2018).\nFramework for Improving Critical Infrastructure Cybersecurity (Version 1.1).\nhttps://www.nist.gov/cyberframework\nNational Institute of Standards and Technology. (2020).\nSecurity and Privacy Controls for Information Systems and Organizations (SP 800-53 Rev. 5).\nhttps://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final\nVMware. (n.d.). Security Hardening Guides.\nhttps://www.vmware.com/resources/hardening-guides\n","date":"March 10, 2026","externalUrl":null,"permalink":"/vcf/deploying-and-using-the-vmware-health-and-security-toolkit-hst/","section":"VMware Cloud Foundation","summary":"The VMware Health and Security Toolkit (HST) is a platform provided by Broadcom that helps administrators perform health checks, configuration validation, and security assessments across VMware infrastructure environments.\nThe tool collects configuration data from components such as:\nvCenter Server ESXi Hosts Virtual Machines vSAN NSX SDDC Manager It then analyzes the collected data against security controls and best practices to identify potential risks or configuration issues.\nThis guide walks through the full process of deploying and using the HST appliance, including environment validation and reviewing generated security reports.\n","title":"Deploying and Using the VMware Health and Security Toolkit (HST)","type":"vcf"},{"content":"","date":"March 10, 2026","externalUrl":null,"permalink":"/tags/infrastructure/","section":"Tags","summary":"","title":"Infrastructure","type":"tags"},{"content":"","date":"March 10, 2026","externalUrl":null,"permalink":"/tags/security/","section":"Tags","summary":"","title":"Security","type":"tags"},{"content":"","date":"March 7, 2026","externalUrl":null,"permalink":"/tags/cost-modeling/","section":"Tags","summary":"","title":"Cost Modeling","type":"tags"},{"content":"","date":"March 7, 
2026","externalUrl":null,"permalink":"/tags/operations/","section":"Tags","summary":"","title":"Operations","type":"tags"},{"content":"","date":"March 7, 2026","externalUrl":null,"permalink":"/tags/showback/","section":"Tags","summary":"","title":"Showback","type":"tags"},{"content":"One of the most common questions in private cloud environments is how infrastructure cost translates into workload consumption. Organizations want to understand not only how much their platform costs to operate, but also which workloads are consuming the most resources and driving that cost.\nVMware Cloud Foundation Operations includes a cost modeling framework that estimates the Total Cost of Ownership for a private cloud platform and distributes that cost across workloads based on resource utilization. This allows administrators and leadership to visualize how infrastructure resources are consumed across the environment.\nIn many enterprise and federal environments, this information is used as a showback model rather than a chargeback billing system. 
Instead of directly billing departments for resource usage, the platform provides visibility into how infrastructure capacity is utilized and where optimization opportunities may exist.\nThis article walks through how cost modeling works within VMware Cloud Foundation Operations, including licensing capacity, cost drivers, cluster base rate calculations, workload showback, and extending the model to include application licensing costs.\nAll screenshots used in this article have been sanitized to remove environment specific identifiers while preserving the functionality of the dashboards.\nApplying Licenses in VMware Cloud Foundation 9 # Starting with VMware Cloud Foundation 9, licensing is managed through VCF Operations, which acts as the centralized licensing interface for the platform.\nAdministrators register their VCF Operations instance with the Broadcom Business Services Console, where license entitlements are assigned and applied to the environment. Once licensing is activated, the platform begins tracking the total licensed core capacity available across the infrastructure.\nFor readers interested in the full licensing workflow, VMware provides a detailed walkthrough of the registration and activation process in the following video.\nVideo: How to Apply VCF Licensing in VCF Operations\nOnce licensing has been applied, VMware Cloud Foundation Operations begins monitoring licensed core capacity and infrastructure consumption across the environment. 
This licensing capacity becomes the foundation for understanding how infrastructure resources and costs are modeled within the platform.\nLicensing Capacity and Consumption # Figure 1: VMware Cloud Foundation Operations licensing dashboard showing total licensed cores and current core consumption across the environment.\nThe licensing view provides visibility into the total licensed capacity and current utilization across the environment.\nIn this environment:\nTotal licensed cores: 1776\nCurrently consumed cores: 512\nVMware Cloud Foundation licensing is typically purchased as a pool of cores. As hosts are added to clusters and workloads are deployed, those hosts consume from the licensed core capacity.\nThe licensing dashboard allows administrators to quickly determine how much of the licensed capacity is currently in use and how much headroom remains available for future infrastructure expansion.\nIt is important to understand that this system tracks capacity utilization, not the actual contract value of the licenses.
The financial modeling of infrastructure cost is handled separately through cost drivers.\nLicense Usage Analytics # Figure 2: Usage analytics dashboard showing platform capacity utilization for VMware Cloud Foundation and vSAN resources.\nWhile the licensing dashboard provides visibility into total licensed core capacity, the usage analytics view offers additional insight into how infrastructure resources are consumed across the platform.\nThis dashboard shows capacity utilization for key components of the VMware Cloud Foundation environment, including:\nVMware Cloud Foundation compute capacity\nvSAN storage capacity\noverall platform resource utilization\nThese analytics help administrators understand how much of the licensed platform capacity is currently being utilized and how much capacity remains available for future workloads.\nIn the example environment shown above, approximately 29 percent of licensed cores are currently in use, indicating that significant capacity remains available before additional licensing would be required.\nHow Core Licensing Is Calculated # VCF licensing is calculated based on the physical CPU cores installed in each ESXi host.
Each physical core requires a corresponding license entitlement.\nTo determine how many cores are required for a host, multiply the number of CPUs by the number of cores per CPU.\nFormula\nTotal Licensed Cores per Host = Number of CPUs × Cores per CPU\nFor example, consider a host with the following hardware configuration:\n2 physical CPUs\n32 cores per CPU\nThe required licensing would be:\n2 CPUs × 32 cores = 64 licensed cores\nIf this host is added to a cluster managed by VMware Cloud Foundation, 64 cores would be consumed from the licensing pool.\nExample Cluster Consumption # If a cluster contains 8 hosts with the same hardware configuration:\n64 cores per host × 8 hosts = 512 licensed cores\nThis would consume 512 cores from the available licensing pool, which matches the usage reflected in the environment shown earlier in the licensing dashboard.\nVisualizing Core Licensing Consumption # Before calculating license consumption across clusters, it can be helpful to visualize how cores are counted at the host level.\nEach ESXi host contributes its total physical CPU cores to the licensing pool.\n+---------------------+\n| Host 1              |\n|---------------------|\n| CPU 1: 32 cores     |\n| CPU 2: 32 cores     |\n|                     |\n| Total: 64 cores     |\n+---------------------+\n+---------------------+\n| Host 2              |\n|---------------------|\n| CPU 1: 32 cores     |\n| CPU 2: 32 cores     |\n|                     |\n| Total: 64 cores     |\n+---------------------+\n+---------------------+\n| Host 3              |\n|---------------------|\n| CPU 1: 32 cores     |\n| CPU 2: 32 cores     |\n|                     |\n| Total: 64 cores     |\n+---------------------+\nAs hosts are added to a cluster, the total licensed core consumption increases accordingly.\nFor example:\n64 cores per host × 8 hosts = 512 licensed cores\nThis consumption is reflected in the VMware Cloud Foundation Operations licensing dashboard, where the total available core capacity is compared against the currently consumed cores across the environment.\nBecause licensing is tracked as a shared pool, administrators can easily determine
how much licensed capacity remains available before additional hosts or clusters are deployed.\nWhy This Matters # Tracking core consumption at the host level allows administrators to understand:\nhow much licensed capacity is currently in use\nhow much headroom remains for additional hosts\nhow cluster expansion will impact license consumption\nThis visibility is especially useful when planning future infrastructure growth or validating that sufficient licensing capacity exists before adding new hosts to the environment.\nPractical Tip for Administrators # When planning infrastructure growth, administrators should evaluate host hardware specifications and calculate the total number of cores that will be introduced into the environment before adding new hosts to a cluster.\nBecause VMware Cloud Foundation licensing is consumed at the host level, even a small cluster expansion can significantly impact total core consumption depending on the CPU configuration of the new hardware.\nPerforming these calculations ahead of time helps ensure that sufficient licensing capacity is available before deploying additional infrastructure.\nTotal Cost of Ownership Model # Figure 3: Total Cost of Ownership dashboard showing the aggregated monthly infrastructure cost modeled within VMware Cloud Foundation Operations.\nOnce licensing capacity and infrastructure resources are understood, the next step is determining how the platform translates those resources into an estimated infrastructure cost.\nVMware Cloud Foundation Operations accomplishes this through a Total Cost of Ownership model, which estimates the monthly cost required to operate the private cloud platform.\nRather than pulling billing data directly from procurement systems or financial contracts, the platform builds this model using configurable cost drivers that represent the major categories of infrastructure expense.\nThese cost drivers are combined to produce the total monthly infrastructure cost pool for the
platform.\nOnce the total cost pool is established, VMware Cloud Foundation Operations distributes that cost across clusters and workloads based on resource capacity and utilization.\nThis allows administrators and leadership to understand how infrastructure cost is distributed throughout the environment, even when exact procurement values are not directly integrated into the platform.\nThe next step is defining the individual cost drivers that make up this infrastructure cost model.\nCost Driver Configuration # Figure 4: Example cost driver configuration used to define the infrastructure cost model.\nCost drivers represent the individual cost categories that contribute to the overall infrastructure operating cost.\nAdministrators define these values within VMware Cloud Foundation Operations as estimated monthly costs, allowing the platform to construct a realistic operating cost model for the environment.\nTypical cost drivers often include categories such as:\ncompute hardware\nstorage infrastructure\nplatform licensing\nnetwork infrastructure\nmaintenance contracts\nlabor and operational support\nfacilities and datacenter overhead\nEach cost driver contributes a portion of the total monthly cost pool.\nOnce these drivers are configured, VMware Cloud Foundation Operations aggregates them to determine the overall infrastructure cost that will be distributed across clusters and workloads.\nCost Driver Contribution to Total Infrastructure Cost # Figure 5: Breakdown of how each configured cost driver contributes to the overall infrastructure cost model.\nAfter cost drivers are defined, VMware Cloud Foundation Operations aggregates them to determine the total infrastructure operating cost.\nThe platform then visualizes how each driver contributes to that overall cost.\nIn many enterprise environments, certain drivers tend to dominate the cost model.
Storage infrastructure is often one of the largest contributors due to the large capacity requirements associated with enterprise workloads.\nCompute hardware, licensing, and maintenance contracts typically represent additional major portions of the cost model.\nUnderstanding how these cost categories contribute to the total infrastructure cost helps administrators identify which areas of the environment represent the largest operational expense.\nThis visibility becomes particularly valuable when evaluating infrastructure growth, hardware refresh cycles, or potential optimization opportunities.\nCluster Cost Calculation and Resource Base Rates # Figure 6: Cluster cost configuration showing how infrastructure cost is translated into resource base rates for CPU and memory consumption.\nOnce the total infrastructure cost has been defined through cost drivers, VMware Cloud Foundation Operations must determine how that cost is distributed across the platform’s compute resources.\nThis is accomplished by translating the infrastructure cost pool into resource base rates for each cluster.\nThese base rates represent the cost of consuming infrastructure resources, such as CPU and memory, within a given cluster.\nTo perform this calculation, VMware Cloud Foundation Operations evaluates the total usable capacity of the cluster and distributes the cluster’s portion of the infrastructure cost across those resources.\nSeveral factors influence this calculation, including:\ntotal CPU capacity available in the cluster\ntotal memory capacity available in the cluster\nhigh availability reservations\nconfigured capacity buffers\nAfter these factors are considered, the platform calculates base rate metrics such as:\nCost per GHz of CPU\nCost per GB of memory\nCost per GB of storage\nThese values represent the estimated infrastructure cost associated with consuming those resources.\nFor example, if a cluster has a defined infrastructure cost and a known amount of usable CPU capacity, VMware
Cloud Foundation Operations can calculate the cost associated with each unit of CPU consumed by workloads.\nThis allows the platform to translate raw infrastructure capacity into measurable cost metrics.\nWhy Resource Base Rates Matter # Resource base rates serve as the foundation for translating infrastructure cost into workload cost.\nOnce base rates are established, the platform can estimate the cost of individual workloads by evaluating the amount of CPU, memory, and storage resources they consume.\nFor example, if the platform determines:\nCost per GHz of CPU = $X\nCost per GB of memory = $Y\nCost per GB of storage = $Z\nThen the estimated cost of a virtual machine can be calculated based on the resources allocated to that workload.\nEstimated VM Cost = (CPU Allocation × CPU Base Rate) + (Memory Allocation × Memory Base Rate) + (Storage Allocation × Storage Base Rate)\nWhile the exact internal calculations within VMware Cloud Foundation Operations are more complex, this simplified model illustrates how infrastructure cost is translated into workload cost.\nThis approach enables administrators to understand how infrastructure consumption directly influences the cost associated with operating specific workloads.\nDatacenter Cost Distribution # Figure 7: Datacenter-level cost distribution showing how infrastructure cost is allocated across clusters and workloads.\nAfter cluster base rates are calculated, VMware Cloud Foundation Operations distributes the infrastructure cost across datacenters and clusters based on their capacity and resource utilization.\nThis allows administrators to visualize how infrastructure cost is distributed across the broader environment.\nIn large environments with multiple clusters and datacenters, this visibility becomes extremely valuable.
It allows infrastructure teams to quickly identify:\nwhich clusters represent the largest portion of operational cost\nwhere infrastructure resources are most heavily consumed\nhow platform growth impacts total infrastructure cost\nThis datacenter-level perspective helps organizations understand the broader operational footprint of their private cloud infrastructure.\nShowback vs Chargeback Cost Models # Before examining workload-level cost visibility, it is important to understand the difference between showback and chargeback cost models.\nBoth approaches attempt to associate infrastructure consumption with cost, but they serve different operational purposes.\nIn a chargeback model, departments or business units are billed directly for the infrastructure resources they consume. This approach treats the private cloud environment similarly to a public cloud provider, where resource consumption results in direct financial charges.\nIn contrast, a showback model focuses on visibility rather than billing.\nInstead of generating invoices, the platform provides insight into how infrastructure resources are consumed across the organization. This allows teams and leadership to understand the financial impact of their workloads without implementing a formal billing system.\nShowback models are commonly used in enterprise and federal environments where internal departments share infrastructure resources but are not billed directly for their usage.\nThis approach helps organizations understand:\nwhich workloads consume the most infrastructure resources\nhow infrastructure capacity is utilized across platforms or departments\nwhere optimization opportunities may exist\nOnce this model is in place, VMware Cloud Foundation Operations can provide detailed visibility into workload-level infrastructure consumption.\nWorkload Level Showback # Figure 8: Workload-level showback dashboard displaying VM resource allocation and projected infrastructure cost.
Environment identifiers have been sanitized for publication.\nAfter infrastructure cost is translated into resource base rates, VMware Cloud Foundation Operations can begin estimating the cost of individual workloads.\nThis is where the cost model becomes most useful for administrators and leadership teams.\nThe showback dashboard provides workload-level visibility into how infrastructure resources are consumed across the environment. Instead of viewing cost only at the cluster or datacenter level, administrators can see how individual virtual machines contribute to the overall infrastructure cost.\nFor each workload, the dashboard can display metrics such as:\nallocated CPU resources\nallocated memory\nallocated storage capacity\nprojected monthly infrastructure cost\npotential optimization savings\nBecause the cost model is derived from cluster resource base rates, the platform can estimate how much infrastructure cost is associated with each workload based on the resources it consumes.\nThis provides valuable insight into how infrastructure capacity is being utilized across the environment.\nIdentifying High Cost Workloads # One of the most valuable capabilities of the showback dashboard is the ability to identify workloads that consume a disproportionate amount of infrastructure resources.\nIn large environments, hundreds or even thousands of virtual machines may be running across multiple clusters.
Without a showback model, it can be difficult to determine which workloads are responsible for the largest share of resource consumption.\nBy viewing projected monthly infrastructure cost at the workload level, administrators can quickly identify:\nlarge virtual machines consuming significant CPU or memory resources\nworkloads with excessive storage allocations\nsystems that may be over-provisioned relative to their actual utilization\nThis visibility allows infrastructure teams to begin identifying optimization opportunities that may reduce overall platform cost.\nFor example, a virtual machine with large CPU and memory allocations but consistently low utilization may represent an opportunity for rightsizing, which could reduce both infrastructure consumption and estimated operational cost.\nApplying the Model to Application Platforms # Workload-level showback becomes especially valuable when evaluating the infrastructure footprint of specific application platforms.\nFor example, in the environment shown earlier, workloads associated with the Splunk platform are hosted within the example datacenter environment alongside other infrastructure workloads. The identifiers used in this article have been sanitized to remove environment-specific naming conventions while preserving the functionality of the dashboards.\nBecause VMware Cloud Foundation Operations tracks resource consumption for each virtual machine, administrators can easily identify the infrastructure footprint associated with these systems.\nBy filtering workloads based on naming conventions, tags, or application groupings, administrators can estimate:\nthe total infrastructure resources consumed by a specific platform\nthe projected infrastructure cost associated with that platform\nhow the platform’s workload footprint compares to other systems in the environment\nHowever, infrastructure consumption alone does not always represent the full cost of operating a platform.
Many enterprise applications include additional software licensing costs that must also be considered.\nExtending the Cost Model with Application Licensing # While the default VMware Cloud Foundation Operations cost model focuses primarily on infrastructure cost, many organizations also need to account for software licensing costs associated with the platforms running on that infrastructure.\nExamples may include:\nSplunk\nsecurity monitoring platforms\nanalytics platforms\nenterprise management tools\nThese platforms often have licensing models that are separate from infrastructure resource consumption. Instead of being tied directly to CPU, memory, or storage usage, licensing may be calculated based on:\nper server or per VM licensing\nannual subscription licenses\nperpetual platform licensing\nflat operational costs\nVMware Cloud Foundation Operations allows administrators to incorporate these costs into the overall platform model by creating additional cost drivers.\nCreating Additional Cost Drivers # Figure 9: Creating additional cost drivers to represent application licensing costs.\nAdditional cost drivers can be created to represent licensing costs associated with specific application platforms.\nIn the example environment, the Splunk platform includes a licensing cost of:\n$1200 per VM per year\nBecause VMware Cloud Foundation Operations models cost on a monthly basis, the annual value must first be converted into a monthly cost.\nFor example:\n$1200 per VM per year ÷ 12 months = $100 per VM per month\nOnce this monthly value is calculated, a new cost driver can be created representing the per-VM licensing cost associated with the platform.\nUsing Tags to Associate Workloads with Application Licensing # Figure 10: Associating application licensing cost drivers with workloads using tags.\nFor VMware Cloud Foundation Operations to apply this licensing cost correctly, the platform must be able to identify which virtual machines belong to the application
platform.\nThis is accomplished using tags.\nIn this example, a tag named splunk is created and associated with the Splunk licensing cost driver. This tells VMware Cloud Foundation Operations that any virtual machine carrying this tag should have the additional licensing cost applied to it.\nTo complete the process, the same tag must also be applied to the appropriate virtual machines within the vSphere Client.\nAdministrators can assign the splunk tag directly to the virtual machines that belong to the Splunk platform. Once the tag is applied, VMware Cloud Foundation Operations can identify those workloads and include the additional licensing cost when calculating the total cost associated with those systems.\nThis allows the platform to represent the true operational cost of running the Splunk platform, including both infrastructure consumption and application licensing.\nModeling Flat Software Licensing Costs # It is important to recognize that not all software licensing costs are tied to individual workloads.\nSome platforms are licensed as flat operational costs, meaning the organization pays a fixed amount regardless of how many virtual machines are running the software.\nIn these cases, it may be more appropriate to add the software licensing cost directly to the infrastructure cost model without associating it with specific workloads.\nWhen modeling flat operational costs, administrators can create an additional cost driver without applying tags to workloads. 
Instead, the cost can be incorporated into the platform’s overall cost pool using custom properties or general cost driver configuration.\nThis approach allows VMware Cloud Foundation Operations to represent the total operational cost of the platform, including both infrastructure and software licensing expenses, without incorrectly tying those costs to specific virtual machines.\nUnderstanding the difference between per-workload licensing costs and flat operational licensing costs helps ensure that the cost model accurately reflects the financial structure of the environment.\nKey Takeaways # VMware Cloud Foundation Operations provides a powerful framework for understanding how infrastructure cost is distributed across workloads in a private cloud environment.\nBy combining licensing visibility, cost drivers, cluster base rate calculations, and workload-level showback, administrators can gain a much clearer understanding of how infrastructure resources are consumed and where operational costs originate.\nKey concepts covered in this article include:\nlicensing capacity is based on physical CPU cores across ESXi hosts\ncost drivers define the infrastructure cost pool used by the platform\ncluster base rates translate infrastructure cost into resource cost metrics\nworkload showback provides visibility into VM-level infrastructure consumption\nadditional cost drivers can extend the model to include application licensing costs\nWhen implemented effectively, this approach allows organizations to better understand the operational footprint of their private cloud platform and identify opportunities to optimize infrastructure utilization.\n","date":"March 7, 2026","externalUrl":null,"permalink":"/vcf/understanding-cost-modeling-in-vmware-cloud-foundation-operations/","section":"VMware Cloud Foundation","summary":"One of the most common questions in private cloud environments is how infrastructure cost translates into workload consumption. 
Organizations want to understand not only how much their platform costs to operate, but also which workloads are consuming the most resources and driving that cost.\nVMware Cloud Foundation Operations includes a cost modeling framework that estimates the Total Cost of Ownership for a private cloud platform and distributes that cost across workloads based on resource utilization. This allows administrators and leadership to visualize how infrastructure resources are consumed across the environment.\n","title":"Understanding Cost Modeling in VMware Cloud Foundation Operations","type":"vcf"},{"content":" Creating a Custom Executive Cost View in VCF Operations # Before building any meaningful dashboard in VCF Operations, it is critical to establish a structured data foundation. Dashboards are only as effective as the views that power them. Rather than relying on default widgets, this implementation begins by creating a reusable cost-focused List View that surfaces VM-level financial metrics in a structured and executive-ready format.\nThis custom view will:\nDisplay month-to-date spend\nProject monthly cost\nShow daily burn rate\nSurface effective daily CPU, memory, and storage usage\nProvide an aggregated summary total\nAll screenshots in this article have been sanitized to remove environment-specific identifiers. 
Hostnames, cluster names, domain paths, and organizational labels have been blurred or replaced with generic identifiers to protect operational data.\nStep 1 — Navigate to Views # From the VCF Operations interface:\nInfrastructure Operations\n→ Dashboards and Reports\n→ Views\n→ Create\nNavigating to the Views configuration area within VCF Operations.\nStep 2 — Select the List View Type # When prompted to select a view type, choose:\nList\nA List View allows cost metrics to be presented in a structured tabular format, making it ideal for executive reporting and dashboard integration.\nSelecting List as the view type for structured cost reporting.\nStep 3 — Configure the View Name # Within the Name \u0026 Configuration screen, provide a descriptive name for the view.\nName: Executive VM Cost Detail\nOptionally add a description explaining the purpose of the view.\nThis name will appear when selecting the view in dashboards, reports, and list widgets.\nDefining the view name during the view creation process.\nStep 4 — Configure the Data Tab # Click Next to move to the Data tab.\nUnder Add Subject, select:\nVirtual Machine\nThe Subject defines which object type the metrics apply to. 
Because cost modeling is calculated at the VM level, selecting Virtual Machine ensures that financial metrics aggregate correctly and support VM-level drilldown.\nNext, open the Metrics selector and add the following cost metrics:\nMTD Total Cost\nMonthly Projected Total Cost\nEffective Daily Total Cost\nThen add supporting resource usage metrics:\nEffective Daily CPU Usage\nEffective Daily Memory Usage\nEffective Daily Storage Usage\nThe cost metrics provide financial visibility while the usage metrics add operational context.\nSelecting cost and usage metrics from the metric picker.\nStep 5 — Configure Metric Transformations # Each metric must be configured intentionally to ensure the view produces accurate financial reporting.\nFor the cost metrics:\nMTD Total Cost\nMonthly Projected Total Cost\nEffective Daily Total Cost\nConfigure the following settings:\nUnits\nUse adapter-defined currency formatting.\nSort Order\nDescending\nThis ensures the highest cost virtual machines appear first.\nFirst Transformation\nLast\nSecond Transformation\nAbsolute Timestamp\nThese transformations ensure the most recent calculated value is displayed while preserving the correct reporting timestamp.\nExample metric configuration showing units, sorting, and transformation logic applied to a cost metric.\nStep 6 — Configure Usage Metrics # For the usage metrics:\nEffective Daily CPU Usage\nEffective Daily Memory Usage\nEffective Daily Storage Usage\nApply the following settings:\nUnits\nAuto\nSort Order\nNone\nThese metrics provide context and should not override cost-based ranking.\nFirst Transformation\nLast\nSecond Transformation\nAbsolute Timestamp\nStep 7 — Arrange Metric Order # Reorder the metrics in the Data panel so financial visibility is prioritized:\nMTD Total Cost\nMonthly Projected Total Cost\nEffective Daily Total Cost\nEffective Daily CPU Usage\nEffective Daily Memory Usage\nEffective Daily Storage Usage\nThis ordering ensures financial metrics remain the primary focus while 
resource metrics provide supporting context.\nFinalized metric configuration within the Data panel.\nStep 8 — Leave Time Settings and Filter as Default # After completing the metric configuration in the Data tab, click Next to proceed through the remaining configuration screens.\nThe next two sections in the view configuration wizard are:\nTime Settings\nFilter\nFor this implementation, both of these sections can be left at their default configuration.\nTime Settings determines how data is evaluated across time windows. Because the metrics already use the Last transformation with Absolute Timestamp, the default settings correctly display the most recent calculated value.\nThe Filter section allows administrators to restrict which objects appear in the view. Since this view is intended to support dashboards and reporting across the environment, leaving the filter unset ensures the view evaluates all virtual machines within the selected scope.\nClick Next through both sections to proceed to the Summary tab.\nStep 9 — Configure the Summary Tab # After completing the Data configuration, navigate to the Summary tab.\nClick Add Summary to create an aggregated row at the bottom of the list view.\nSet the aggregation type to:\nSum\nThis configuration instructs VCF Operations to calculate the cumulative total for each financial metric displayed in the view.\nBecause the cost metrics represent monetary values, using Sum provides leadership with an immediate understanding of the total cost footprint across all virtual machines in scope.\nConfiguring the Summary tab to generate aggregated totals across all VM cost metrics.\nThe summary row becomes especially valuable when the view is embedded inside dashboards or exported into reports.\nStep 10 — Configure Preview Source # Before saving the view, validate the data output using the Preview Source feature.\nOn the right side of the page, locate the dropdown next to Preview Source and click:\nSelect Preview Source\nSelecting the 
Preview Source dropdown to choose an object for validating the view output.\nNext, select an object that contains virtual machines such as a cluster or VCF instance.\nSelecting a preview object to validate the VM-level cost metrics.\nOnce selected, the preview pane populates with live data and confirms that:\nCost metrics populate correctly\nSorting is applied correctly\nTransformations display the latest calculated values\nThe summary aggregation appears at the bottom of the view\nPreviewing the data ensures the view behaves correctly before saving and embedding it into dashboards.\nResult # At this stage, you have created a reusable cost-focused List View that:\nSurfaces VM-level financial metrics\nApplies consistent transformation logic\nSorts by highest spend first\nIncludes an aggregated summary row\nProvides contextual resource usage metrics\nWhen a preview object is selected, the view displays VM-level cost data along with the aggregated totals at the bottom of the table.\nExample output of the Executive VM Cost Detail view after selecting a preview object.\nThis view now serves as the data foundation for building an executive cost dashboard in VCF Operations.\nBuilding the Executive Cost Dashboard — Scoreboard Layer # With the reusable cost view complete, the next step is to design the executive dashboard. The first component we will configure is the Scoreboard widget, which provides a high-level financial snapshot across domains.\nThe purpose of this layer is to answer two executive questions immediately:\nHow much are we spending this month?\nWhat is our daily burn rate? 
Rather than presenting raw metrics, we structure the scoreboard to compare management and workload domains side by side.\nStep 1 — Create a New Dashboard # Navigate to:\nInfrastructure Operations\n→ Dashboards\n→ Create\nAssign a meaningful name such as:\nExecutive Cost Dashboard\nThis dashboard will aggregate cost visibility across selected clusters.\nCreating a new dashboard for executive cost visibility.\nStep 2 — Open the Dashboard Canvas # After creating the dashboard, the blank dashboard canvas will appear.\nThis is where widgets will be added to build the executive cost view.\nBlank dashboard canvas ready for widget configuration.\nStep 3 — Add the Scoreboard Widget # From the widget library, drag the Scoreboard widget onto the canvas.\nPosition it at the top and expand it to full width. This establishes the financial summary layer of the dashboard.\nAdding the Scoreboard widget to the dashboard.\nStep 4 — Enable Self Provider and Display Object Name # Click the pencil icon on the Scoreboard widget to open the widget configuration panel.\nWithin the configuration panel, locate the Self Provider option and set it to:\nSelf Provider → On\nEnabling Self Provider allows the widget to directly query objects and metrics from the environment. Once enabled, the Input Data tab becomes available and can be used to add the cost metrics that will populate the scoreboard.\nNext, locate the Show dropdown on the right side of the configuration panel.\nSelect:\nShow → Object Name\nThis adds the Object Name column alongside the default selections such as Metric Name and Metric Unit.\nDisplaying the Object Name provides important context so viewers can immediately see which domain or cluster the cost values belong to. 
This ensures the scoreboard clearly distinguishes between the Management Domain and the Workload Domain when presenting cost totals.\nOpening the widget configuration panel, enabling Self Provider, and configuring the Show field to include Object Name.\nStep 5 — Add Cluster Cost Metrics # Under Input Data, select:\nMetrics → Add\nThis opens the Add New Metrics dialog.\nIn the filter field at the top of the object list, type:\ncluster compute\nFiltering by cluster compute quickly narrows the object list to the clusters in the environment so they can be selected for cost reporting.\nNext, select the cluster objects representing the infrastructure domains and expand the Cost metric category.\nChoose the following metrics:\nMonthly Cluster Total Cost\nAggregated Daily Total Cost\nRepeat this process for both the Management Domain cluster and the Workload Domain cluster.\nThese metrics provide the two financial indicators displayed on the scoreboard:\nMonthly infrastructure spend\nDaily burn rate\nFiltering for cluster compute objects and selecting cost metrics for the scoreboard.\nStep 6 — Rename Box Labels for Executive Clarity # Technical metric names can be difficult for leadership to interpret.\nRename the tiles to:\nMgmt – Monthly Spend\nMgmt – Daily Burn\nWorkload – Monthly Spend\nWorkload – Daily Burn\nRenaming scoreboard labels for executive clarity.\nStep 7 — Validate Scoreboard Layout # At this stage, the Scoreboard widget should display four financial tiles representing the two primary infrastructure domains.\nThe tiles should include:\nTotal Monthly Spend – MGMT\nDaily Burn Rate – MGMT\nTotal Monthly Spend – WLD1\nDaily Burn Rate – WLD1\nEach tile represents a domain-level financial metric derived from the cluster cost calculations.\nVerify that the values are displayed with the correct units:\nMonthly metrics display as US$/Month\nDaily metrics display as US$\nFinalized scoreboard layout showing monthly spend and daily burn comparison between the management and workload 
domains.\nThe next step is to add trend intelligence to determine whether cost is stabilizing, increasing, or accelerating.\nAdding Cost Trend Intelligence # The Scoreboard provides a financial snapshot. However, static numbers alone do not tell the full story. Leadership needs to understand whether cost is stabilizing, increasing, or accelerating over time.\nTo introduce directional awareness, we add a Metric Chart widget to visualize monthly cost trends across domains.\nThis layer transforms cost reporting into financial intelligence.\nStep 1 — Add the Metric Chart Widget # From the widget library, drag the Metric Chart widget onto the dashboard canvas.\nPosition it directly below the Scoreboard and stretch it full width.\nAt this stage, the chart will appear blank because no input data has been configured yet.\nMetric Chart widget added to the dashboard before configuration.\nThis blank state is expected.\nStep 2 — Enter Edit Mode and Select Self Provider # Click the pencil icon on the Metric Chart widget to enter edit mode.\nOpening the Metric Chart configuration panel via the pencil icon.\nWithin the configuration panel:\nLocate the Self Provider option and select:\nSelf Provider → On\nWhen Self Provider is Off, the widget expects data from another source. Setting it to On allows the chart to directly query cluster-level metrics.\nStep 3 — Add Cluster-Level Cost Metrics # Under Input Data:\nSelect Metrics\nClick the + icon\nAdding new metrics.\nIn the metric picker:\nIn the filter field, type “cluster compute”\nSelect the Management Domain cluster\nExpand the Cost category\nChoose:\nMonthly Cluster Total Cost\nRepeat for the Workload Domain cluster.\nThis adds two domain-level cost lines to the chart.\nInput Data configuration.\nStep 4 — Show the Toolbar to Access Chart Controls # After selecting metrics, you may notice the chart still does not display as expected. 
By default, certain chart controls are hidden.\nClick Show Toolbar within the chart widget.\nEnabling the chart toolbar to access time and comparison controls.\nEnabling the toolbar reveals advanced options such as:\nDate controls\nComparison settings\nSplit chart configuration\nStep 5 — Configure the Time Range # Click the date selector within the toolbar.\nSet the time range to:\nLast 6 Months\nIf a Previous Period comparison is automatically enabled, you can remove it by clicking the “X” next to it. For executive reporting, side-by-side domain comparison is typically more valuable than previous-period overlays.\nConfiguring the chart to display the last six months of cost data.\nUsing a six-month window provides sufficient historical context to identify cost acceleration trends.\nStep 6 — Configure Split Chart or Combined View # Within the toolbar options, you will see the Split Charts setting.\nYou have two design options:\nOption 1 — Combined Chart\nBoth management and workload domains appear on the same chart.\nThis allows direct visual comparison between domains.\nOption 2 — Split Charts\nEach domain appears in its own chart panel.\nThis reduces visual overlap and isolates trend behavior per domain.\nFor executive comparison, keeping both lines on the same chart is often preferable. 
However, split charts can be useful in environments with large cost disparity between domains.\nFinal Result — Cost Trend Intelligence Layer\nWith all configuration complete, the chart now displays six months of financial movement across both domains.\nFinalized cost trend visualization showing management and workload domain comparison.\nOutcome\nThe Metric Chart now provides:\nHistorical cost visibility\nDomain-level spend comparison\nAcceleration or stabilization awareness\nExecutive-friendly visualization\nAt this stage, your dashboard answers:\nHow much are we spending?\nHow fast are we spending it?\nIs cost trending upward or stabilizing?\nThe final layer will incorporate detailed VM-level breakdown using the custom view created earlier.\nAdding Detailed Cost Breakdown with the Custom View # The Scoreboard provides a financial snapshot.\nThe Trend Chart provides directional intelligence.\nThe final layer delivers operational depth.\nTo allow leadership and engineering teams to drill into VM-level cost drivers, we now embed the custom cost view created earlier directly into the dashboard using a List View widget.\nThis connects executive visibility with detailed transparency.\nStep 1 — Add the List View Widget # From the widget library, drag the List View widget onto the dashboard canvas.\nPosition it beneath the Metric Chart and expand it to full width. 
This creates a natural flow from summary to trend to detail.\nAdding the List View widget to the dashboard canvas.\nStep 2 — Enter Edit Mode and Enable Self Provider # Click the pencil icon on the List View widget to open the configuration panel.\nWithin the configuration panel:\nLocate Self Provider and select:\nSelf Provider → On\nThis allows the widget to directly query object and view data without relying on another widget for input.\nEnabling Self Provider for the List View widget.\nStep 3 — Configure Input Data (Select the VCF Instance) # Under Input Data:\nClick the + icon\nAdding input data to define the object scope.\nThis opens the object selection dialog.\nSelecting the VCF Instance as the object source.\nInstead of selecting an individual cluster, select the VCF Instance.\nBy choosing the VCF Instance:\nBoth the Management Domain and Workload Domain are included\nAll VMs across domains are evaluated\nThe view aggregates cost across the full environment\nThe Input Data defines which objects will be evaluated by the view. 
Selecting the VCF Instance ensures the dashboard provides complete financial visibility rather than a domain-specific subset.\nStep 4 — Configure Output Data (Select the Custom View) # Next, navigate to the Output Data section.\nClick the + icon.\nSelecting the custom Executive VM Cost Detail view under Output Data.\nIn the filter field, type:\nExecutive\nSelect:\nExecutive VM Cost Detail\nThis binds the List View widget to the custom cost view created earlier.\nIt is important to understand the separation:\nInput Data → Defines the objects (VCF Instance)\nOutput Data → Defines how those objects are displayed (Custom Cost View)\nThis design allows the same view to be reused across different scopes if needed.\nStep 5 — Save and Validate # Click Save to apply the configuration.\nThe List View now renders VM-level cost details across both domains, including:\nMTD Total Cost\nMonthly Projected Total Cost\nEffective Daily Total Cost\nEffective Daily CPU Usage\nEffective Daily Memory Usage\nEffective Daily Storage Usage\nAggregated summary totals\nFinalized List View displaying VM-level cost breakdown across the VCF instance.\nResult — Executive Cost Dashboard (Complete)\nAt this stage, the dashboard contains three structured layers:\nScoreboard\nHigh-level monthly spend and daily burn\nTrend Chart\nSix-month domain comparison and cost movement\nList View\nFull VCF VM-level financial breakdown with summary totals\nThis layered architecture ensures:\nLeadership sees immediate financial impact\nTrend behavior is visible over time\nEngineers can identify high-cost workloads\nAggregated totals are clearly displayed\nGovernance discussions are data-backed\nThe dashboard now moves beyond monitoring and into financial operational governance.\nCreating and Running the Executive Cost Report # The dashboard provides real-time visibility, but executive stakeholders often require a formal report for:\nBudget reviews\nGovernance meetings\nMonthly financial reporting\nProgram updates\nVCF Operations 
allows you to create a reusable report template using dashboards and views, then generate exportable artifacts in PDF or Excel format.\nThis section walks through that full lifecycle.\nStep 1 — Navigate to Reports # From the VCF Operations interface:\nInfrastructure Operations\n→ Dashboards and Reports\n→ Reports\nClick:\nCreate\nNavigating to the Reports section in VCF Operations.\nStep 2 — Create the Report Template # Provide a meaningful name such as:\nExecutive Cost Report\nIn the Report Content section, you can toggle between:\nDashboards\nViews\nThis allows you to include both visual dashboards and structured list views in the same report.\nStep 3 — Add the Executive Dashboard # Toggle to:\nDashboards\nIn the search filter field, type:\nExecutive\nThis quickly locates the custom dashboard created earlier.\nDrag the Executive Cost Dashboard into the report layout pane.\nSearching for and dragging the Executive Cost Dashboard into the report layout.\nThis ensures the report includes:\nScoreboard summary\nSix-month cost trend visualization\nEmbedded VM-level list breakdown\nStep 4 — Add the Custom Cost View # Next, toggle to:\nViews\nIn the search filter field, type:\nExecutive\nLocate:\nExecutive VM Cost Detail\nDrag this view into the report layout pane.\nDragging the custom Executive VM Cost Detail view into the report layout.\nIncluding the view separately ensures:\nA clean tabular cost breakdown\nAggregated summary totals\nA structured printable format\nYour report layout now contains:\nExecutive Dashboard\nExecutive VM Cost Detail View\nClick Save to finalize the report template.\nRunning the Report\nWith the report template created, the next step is to generate an actual report instance.\nStep 5 — Run the Report Template # Locate the newly created Executive Cost Report.\nClick the three-dot menu (ellipsis) next to the report template.\nNewly created Executive Cost Report template.\nFrom the menu, select:\nRun\nSelecting Run from the report template options.\nStep 
6 — Select the Object Scope # After clicking Run, an object selection dialog appears.\nSelecting the VCF Instance as the object scope.\nSelect:\nVCF Instance\nChoosing the VCF Instance ensures the report includes:\nManagement Domain clusters\nWorkload Domain clusters\nAll VMs across domains\nAggregated financial totals\nClick OK to begin report generation.\nStep 7 — Access Generated Reports # After running the report, you will see a numeric hyperlink under Generated Reports.\nGenerated Reports counter indicating a new report instance.\nClick the numeric hyperlink (for example, “1”).\nThis takes you to the Generated Reports page.\nViewing generated report instances and their status.\nUnder Status, you should see:\nCompleted\nOnce the status is Completed, export options become available.\nStep 8 — Export as PDF or Excel # On the Generated Reports page, you will see export icons for:\nPDF\nExcel\nClick the PDF icon to download the executive-ready report.\nThe Excel option can be used for deeper financial analysis or reconciliation.\nFinal Output — Executive Cost Report\nWhen opening the generated PDF, it will contain:\nExecutive summary scoreboard\nSix-month trend analysis\nDetailed VM-level cost breakdown\nAggregated totals\nExecutive Cost Report rendered in PDF format.\nDetailed cost breakdown section within the exported report.\nEnd Result\nYou have now built a complete cost governance workflow inside VCF Operations:\nCustom cost-focused List View\nLayered executive dashboard\nReusable report template\nExportable PDF and Excel artifacts\nThis design supports:\nReal-time operational visibility\nExecutive financial reporting\nCross-domain cost comparison\nProgram-level accountability\nAll screenshots in this article were sanitized to remove environment-specific identifiers while preserving the configuration workflow.\n","date":"March 3, 2026","externalUrl":null,"permalink":"/vcf/building-an-executive-cost-dashboard-in-vcf-operations/","section":"VMware Cloud 
Foundation","summary":"Creating a Custom Executive Cost View in VCF Operations # Before building any meaningful dashboard in VCF Operations, it is critical to establish a structured data foundation. Dashboards are only as effective as the views that power them. Rather than relying on default widgets, this implementation begins by creating a reusable cost-focused List View that surfaces VM-level financial metrics in a structured and executive-ready format.\n","title":"Building an Executive Cost Dashboard in VCF Operations","type":"vcf"},{"content":"","date":"March 3, 2026","externalUrl":null,"permalink":"/tags/cost-governance/","section":"Tags","summary":"","title":"Cost Governance","type":"tags"},{"content":"When customers talk about upgrading VMware Cloud Foundation, it sounds simple.\nJust upgrade to the latest version.\nIn reality, moving from VCF 5.2 to VCF 9 is not an upgrade. It is an architectural transition.\nOver the past several months, I’ve had the opportunity to support that transition in a federal environment. Here are the biggest lessons.\nThe Greenfield Question # When I stepped into this engagement, the environment was running VCF 5.2.\nThe first real decision was not technical. It was architectural.\nDo we attempt an in-place upgrade?\nOr do we redeploy clean with VCF 9?\nThat decision shapes everything that follows.\nOn paper, an in-place upgrade sounds efficient. Keep what you have. Upgrade components. 
Move forward.\nIn reality, architecture carries history.\nArchitecture decisions do not disappear during upgrades.\nThey compound.\nArchitecture Carries History # VCF 5.2 was built around a management model that looks very different from VCF 9.\nIn many environments, that 5.2 management stack included SDDC Manager, vCenter, NSX, Aria Suite Lifecycle Manager, Aria Operations, Aria Automation, Aria Operations for Logs, Aria Operations for Networks, and Identity Manager.\nAria Suite Lifecycle Manager was responsible for deploying and managing the Aria products. You could deploy Aria independently of VCF, or you could integrate it as VCF aware. If you chose the VCF aware model, you needed to deploy an Application Virtual Network so SDDC Manager could communicate with Aria Suite Lifecycle.\nThat AVN introduced additional IP requirements, DNS entries, and routing considerations across Region A and X Region segments, plus long term lifecycle dependencies. If Aria was deployed independently, the AVN was not required. Once you tightly coupled Aria to VCF, that dependency became part of the design.\nThe flexibility was powerful, but it created layers.\nVCF 9 shifts the model.\nVCF Installer replaces Cloud Builder for bring up. It deploys vCenter, NSX, and SDDC Manager. SDDC Manager still exists in VCF 9, but it is considered deprecated and its lifecycle responsibilities are shifting toward Fleet Management.\nFrom there, lifecycle control moves into Fleet Management. Each deployment becomes a VCF Instance under a Fleet, and operational visibility consolidates into VCF Operations.\nVCF Automation capabilities are embedded within VCF Operations. VCF Operations for Logs and VCF Operations for Networks are introduced as Day 2 components rather than being lifecycle managed through an external Aria Suite Lifecycle Manager. 
The platform is intentionally more unified and far less dependent on external lifecycle tooling or AVN constructs just to achieve integration.\nThat consolidation sounds simple. It is not.\nWhat Actually Changes in VCF 9 # The number of required IP addresses changes. Appliance roles change. VIP usage patterns shift. Identity handling becomes more centralized. Certificate trust requirements become stricter. Reverse proxy and service registry behavior in VCF 9 assumes clean FQDN alignment and strong DNS hygiene.\nIn 5.2, Aria and VCF could be loosely coupled or tightly coupled depending on design decisions. In 9, the platform assumes a more unified operational model built around Fleet Management and VCF Operations.\nIf you attempt an in-place upgrade while carrying forward every appliance, every AVN decision, every DNS shortcut, and every IP pool from 5.2, complexity multiplies quickly.\nIP pools must be recalculated. VIP assignments must be validated. Certificate chains must align across management and workload domains. 
NAT configurations that may have functioned previously can cause instability in 9 because control plane services expect consistent, routable FQDN resolution.\nNow multiply that by federal compliance requirements, STIG constraints, and strict change control.\nThat is when the greenfield conversation becomes serious.\nWhy Greenfield Made More Sense # A clean VCF 9 deployment lets you reset the parts that usually cause friction later.\nIt allows you to recalculate IP planning from scratch, align DNS records properly from day one, remove legacy AVN constructs, standardize certificate trust early, intentionally separate TEP pools across domains, and avoid carrying forward deprecated lifecycle components.\nModernization projects rarely fail because the new platform cannot handle the workload.\nThey struggle because historical design decisions follow you forward.\nGreenfield does not mean starting over recklessly.\nIt means deciding not to inherit technical debt.\nPlanning Matters More Than Bring Up # One of the most valuable tools during this process was VMware’s official VCF 9 planning workbook. It forces you to think through management domain IP allocations, workload domain separation, NSX uplink design, DNS forward and reverse validation, certificate planning, and depot separation between infrastructure components and product components under Fleet Management.\nPlanning Resource # For anyone preparing for a 5.x to 9 transition, here is the official VCF 9 planning workbook:\n👉 Download the VCF 9 Planning Workbook\nThe bring up wizard is the easy part.\nThe architecture decisions are where the real work happens.\n","date":"February 26, 2026","externalUrl":null,"permalink":"/vcf/from-vcf-5.2-to-vcf-9-what-modernization-actually-looks-like/","section":"VMware Cloud Foundation","summary":"When customers talk about upgrading VMware Cloud Foundation, it sounds simple.\nJust upgrade to the latest version.\nIn reality, moving from VCF 5.2 to VCF 9 is not an upgrade. 
It is an architectural transition.\nOver the past several months, I’ve had the opportunity to support that transition in a federal environment. Here are the biggest lessons.\nThe Greenfield Question # When I stepped into this engagement, the environment was running VCF 5.2.\n","title":"From VCF 5.2 to VCF 9: What Modernization Actually Looks Like","type":"vcf"},{"content":" Hi, I’m Devyn Harrington. # I’m a Senior VMware Cloud Foundation Consultant at ClearBridge Technology Group, a VMware by Broadcom partner supporting federal and Department of Defense environments.\nAfter nearly a decade in the United States Marine Corps as a Data Systems Chief, I transitioned into enterprise virtualization and cloud infrastructure consulting. Today, I specialize in VMware Cloud Foundation, NSX, and secure architecture modernization initiatives.\nVCF Modernization NSX \u0026 Network Design Federal \u0026 DoD Environments Security-First Architecture Education \u0026 Credentials # I hold a Bachelor’s in Technology Management, an MBA, and a Master of Science in Cybersecurity from Excelsior University.\nRead My VCF Field Notes View Certifications on Credly ","externalUrl":null,"permalink":"/about/","section":"","summary":" Hi, I’m Devyn Harrington. # I’m a Senior VMware Cloud Foundation Consultant at ClearBridge Technology Group, a VMware by Broadcom partner supporting federal and Department of Defense environments.\nAfter nearly a decade in the United States Marine Corps as a Data Systems Chief, I transitioned into enterprise virtualization and cloud infrastructure consulting. Today, I specialize in VMware Cloud Foundation, NSX, and secure architecture modernization initiatives.\n","title":"About","type":"page"},{"content":"","externalUrl":null,"permalink":"/academics/","section":"Academics","summary":"","title":"Academics","type":"academics"},{"content":"","externalUrl":null,"permalink":"/journey/","section":"Journey","summary":"","title":"Journey","type":"journey"}]