Tag Archives: VCAP-DCD

Enable VMware EVC by default on your new vSphere Clusters

Unless you have a compelling reason not to, I don’t see why you wouldn’t enable EVC on a cluster. EVC future-proofs your cluster by allowing you to introduce hosts with different CPU generations into the same cluster later on.

I haven’t seen evidence that EVC negatively impacts performance, but as with anything: trust, but verify that your application workloads still deliver the level of performance they require.

There are considerations when enabling EVC on an existing vSphere cluster with running VMs. There is no outage if you turn on EVC on an existing cluster, but running VMs will not be able to take advantage of new CPU features exposed by the EVC baseline until they are powered off and powered back on. The reverse applies if you lower the EVC mode on a cluster that already has EVC enabled: running VMs will continue to run at the higher EVC mode, above the one you explicitly lowered to, until they are powered off and powered back on.
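If you want to script this rather than click through the vSphere Client, the pyVmomi SDK exposes a cluster's EVC manager. The sketch below follows the `EvcManager()` / `ConfigureEvcMode_Task` pattern from the pyVmomi community samples; the vCenter host, credentials, cluster name, and EVC mode key are all placeholders, not anything from this post:

```python
# Minimal sketch using pyVmomi (the vSphere Python SDK). All connection
# details, the cluster name, and the EVC mode key are placeholder values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_evc_mode(cluster, evc_mode_key):
    """Enable or change EVC on a cluster (e.g. 'intel-sandybridge').

    Uses the ClusterEVCManager pattern from the pyVmomi community samples.
    Note: running VMs keep their current baseline until power-cycled.
    """
    evc_manager = cluster.EvcManager()
    return evc_manager.ConfigureEvcMode_Task(evc_mode_key)

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        # Find the target cluster by name via a container view.
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next(c for c in view.view if c.name == "MyCluster")
        set_evc_mode(cluster, "intel-sandybridge")
    finally:
        Disconnect(si)
```

Treat this as a starting point only; the EVC mode key must be one your hosts actually support, which you can check under the cluster's `EvcManager().evcState`.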



My VCAP-DCD Experience

I took the VCAP-DCD exam today and finally passed. I believe one of the unwritten rules is that you have to write about your experience so you can share it. I feel like I just tackled a mountain. This was probably one of the hardest exams I’ve taken in a long time. It was my second attempt at this beast; my first was at VMworld 2014. I’ve been studying off and on for some time now, but after reading other blogs about people passing this exam after multiple tries, I decided there was no better time than now to take this challenge on. The blog I read was by Craig Waiters, who passed it on his third attempt; read his journey here.

I did find that the 5.5 version flows a lot better than the 5.1 exam. I had 42 questions, 8 of which were the Visio-like design questions. After being unsuccessful the first time, I decided I needed to brush up on a high-level, holistic view of design. I also brushed up on PMP skills to help define requirements, assumptions, constraints, and risks.

In preparation for take two of this exam, I used the following:

Tips:

  • I flagged and skipped all the other questions first, which allowed me to concentrate on the design scenarios while I was still fresh. This left me with over an hour and a half to answer the rest of the questions.
  • Know how to apply the infrastructure design qualities of AMPRS: Availability, Manageability, Performance, Recoverability, and Security (Cost can also be thrown in there)
  • Requirements, assumptions, constraints, and risks: know how to identify each of these.
  • Applying dependencies; upstream and downstream; Good blog by vBikerblog on this Application Dependency – Upstream and Downstream Definitions
  • Conceptual, Logical, and Physical designs.
  • Storage design best practices; (ALUA, VAAI, VASA, SIOC)

What’s next:

  • I’ll be taking the Install, Configure, and Manage (ICM) course for NSX and will likely take the VCP-NV
  • Maybe next year I’ll tackle the behemoth of the VCDX; need to finish grad school first

Resource pools, child pools, and who gets what…

I know there have been numerous blog postings on this topic already, all very good, but this one is mainly for my own benefit as I try to understand the topic a bit further.

So what are resource pools?

They are a logical abstraction for management so you can delegate control over the resources of a host (or a cluster). They can be grouped into hierarchies and used to progressively segment out accessible CPU and memory resources.

Resource pools use a construct called shares to deal with possible contention (the noisy-neighbor syndrome).

Setting   CPU share values       Memory share values
High      2000 shares per vCPU   20 shares per MB of configured VM memory
Normal    1000 shares per vCPU   10 shares per MB of configured VM memory
Low       500 shares per vCPU    5 shares per MB of configured VM memory

*Taken from the 5.5 vSphere Resource Management document
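Those defaults reduce to simple multiplication. Here is a minimal sketch in plain Python; the per-unit values come from the table above, while the function name and example VM sizes are mine:

```python
# Default per-unit share values from the vSphere 5.5 Resource Management guide:
# CPU shares are per vCPU; memory shares are per MB of configured VM memory.
CPU_SHARES = {"high": 2000, "normal": 1000, "low": 500}
MEM_SHARES = {"high": 20, "normal": 10, "low": 5}

def vm_shares(setting, vcpus, mem_mb):
    """Return (cpu_shares, mem_shares) for a VM at the given share setting."""
    s = setting.lower()
    return CPU_SHARES[s] * vcpus, MEM_SHARES[s] * mem_mb

# A 4-vCPU, 8 GB (8192 MB) VM at Normal:
print(vm_shares("normal", 4, 8192))  # (4000, 81920)
```

Keep in mind shares are relative values: they only matter when siblings actually compete for the same resource during contention.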

My biggest issue with contention is this: why would you architect a design that has the possibility of contention if you can avoid it? I understand that a good design accounts for the worst-case scenario, in which all VMs run at 100% of their provisioned resources, possibly creating over-commitment and potential contention, but that’s when you go to the business to request more funds for more resources. I know over-commitment is one of the benefits of virtualization, but should you rely on it? The acceptable level of over-allocation also comes down to the comfort level of the individual architect. Keep to the rule of protecting the workload and I don’t see an issue with this. I keep the worst-case scenario in mind, but I never want to get there. This is more of a rant than anything else, though.

Frank Denneman(@FrankDenneman), Rawlinson Rivera(@PunchingClouds), Duncan Epping(@DuncanYB) and Chris Wahl(@ChrisWahl) have already set us straight on the impacts of using resource pools incorrectly and the best ways to most efficiently use RPs.

The main takeaways of resource pools are:

  • Don’t use them as folders for VMs
  • Don’t have VMs on the same level as resource pools without understanding the impacts
  • If VMs are added to a resource pool after its shares have been set, then you’ll need to revisit and update those share values
  • Don’t go over eight (8) levels deep [RP limitation; and why would you]
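The second takeaway is worth a worked example. Per the same Resource Management guide, a resource pool left at Normal defaults to 4000 CPU shares, while a 4-vCPU VM at Normal gets 4 × 1000 = 4000 shares, so a single VM parked next to a pool competes with the entire pool on equal footing. A sketch of the math, with a hypothetical cluster size and VM count:

```python
def split_by_shares(total, shares):
    """Divide `total` among sibling objects in proportion to their share values."""
    pool_total = sum(shares.values())
    return {name: total * s / pool_total for name, s in shares.items()}

# Hypothetical cluster with 30,000 MHz of CPU, fully contended.
# At the root level: a "Production" pool at Normal (4000 CPU shares by default)
# sitting next to a single 4-vCPU VM at Normal (4 * 1000 = 4000 shares).
root = split_by_shares(30000, {"Production": 4000, "lone-vm": 4000})

# Ten identical 1-vCPU Normal VMs (1000 shares each) then split the pool's slice.
pool_vms = split_by_shares(root["Production"], {f"vm{i}": 1000 for i in range(10)})

print(root["lone-vm"], pool_vms["vm0"])  # 15000.0 1500.0
```

Under full contention the lone VM is entitled to as much CPU as all ten pool VMs combined, which is exactly the "VMs on the same level as resource pools" trap the bullet above warns about.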

Additional Resources:
Chris Wahl: Understanding Resource Pools in VMware vSphere

Frank Denneman and Rawlinson Rivera: VMworld 2012 Session VSP1683: VMware vSphere Cluster Resource Pools Best Practices

Duncan Epping: Shares set on Resource Pools