Designing Your Private Cloud with VMware vCloud Director


In this blog post I’ll cover a different aspect of vCloud Director for a change: the design journey of a private cloud and how all its pieces come together. It’s going to be very hard to cover every detail in a single post, so I will focus on the main areas and group them into small sections that are easy to follow.

A Holistic Diagram

When it comes to an architecture design, you must have a diagram! In this one I’m mixing logical and physical layouts: the former around storage, the latter focused on servers. The reason I didn’t go with a complete physical design was to reduce the complexity of the diagram. I know that I should be vendor neutral here, but I thought I needed to be as practical and realistic as possible (specifically in this design subject) for you to follow along easily. Choosing Cisco UCS was just a random selection; I happened to show IBM some love in my previous vSphere design posts, so I thought I would do the same for Cisco this time. Enough said?

[Figure: high-level diagram of the private cloud architecture]

Cluster Design

If you have already read my previous blog posts on designing vSphere on blades, this part won’t look very different to you. We will follow nearly the same concept here and divide our clusters into one Management cluster and two resource clusters. Let’s take a closer look:

[Figure: cluster layout of the Management and Resource Pods]

  • Management Pod: Here we will run all the management and monitoring elements of our private cloud. The cluster consists of 4 blades spanned across two chassis for redundancy.
  • Resource Pod: Here we have two clusters forming our resources:
    • Gold Cluster: We are dedicating one cluster to this hardware pool. It has 12 blades spanned across three chassis, with a maximum of 4 blades per chassis. This allows us to achieve maximum high availability and avoids placing all 5 primary HA nodes on a single chassis, where one unlikely hardware failure could take them all out.
    • Silver & Bronze Cluster: Here we have one cluster on this hardware pool, but we will slice it into two resource pools, each of which can be configured per customer requirements. We will name these two resource pools “Silver” and “Bronze” (see the sketch after this list).
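Since slicing a cluster into resource pools comes up again later in this design, here is a minimal pyvmomi sketch of how the Silver and Bronze RPs could be created. Treat it as an illustration only: the vCenter hostname, credentials, cluster name, and share levels are all hypothetical placeholders, not part of the actual design.

    # Minimal sketch: slice one cluster into "Silver" and "Bronze" resource
    # pools with pyvmomi. Hostname, credentials and names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; use real certs in production
    si = SmartConnect(host="vcenter-resources.example.com",
                      user="administrator", pwd="secret", sslContext=ctx)

    def find_cluster(content, name):
        """Walk the inventory for a ClusterComputeResource by name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        try:
            return next(c for c in view.view if c.name == name)
        finally:
            view.Destroy()

    def make_alloc(level):
        """Expandable allocation, no fixed reservation, shares set by level."""
        return vim.ResourceAllocationInfo(
            reservation=0, limit=-1,            # -1 means unlimited
            expandableReservation=True,
            shares=vim.SharesInfo(level=level, shares=0))

    content = si.RetrieveContent()
    cluster = find_cluster(content, "SilverBronze-Cluster")  # hypothetical name

    for rp_name, level in [("Silver", vim.SharesInfo.Level.high),
                           ("Bronze", vim.SharesInfo.Level.normal)]:
        spec = vim.ResourceConfigSpec(cpuAllocation=make_alloc(level),
                                      memoryAllocation=make_alloc(level))
        cluster.resourcePool.CreateResourcePool(name=rp_name, spec=spec)

    Disconnect(si)

The share levels here simply bias Silver above Bronze under contention; in a real design you would size the reservations and limits from the actual customer SLAs.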

The Management Pod

As mentioned earlier, in this cluster we will run all the management and monitoring elements of the cloud. Here is a list of the VMs:

[Figure: VMs running in the Management Pod]

  • vCD Cells: It is highly recommended to run at least two cells in your cloud environment to avoid a single point of failure. It’s also worth mentioning that there are no licensing restrictions on how many cells you can run.
  • Load Balancer: This is typically a physical box sitting outside your virtual/cloud environment (like an F5 appliance), but it could be a virtual appliance as well. At the time of writing, there are no special recommendations on which load balancer to use with vCD. In fact, simple DNS round robin can do the trick to distribute the load across your cells (see the sketch after this list).
  • vSM: This is the vShield Manager virtual appliance. I would recommend running this VM under FT. On one hand, it runs perfectly well with 1 vCPU and won’t consume significant CPU resources from your cluster; on the other hand, you ensure the required redundancy for this important cloud element.
  • The Oracle DB: This is the backend database for our entire private cloud. It’s vital that you have a highly redundant setup for the Oracle database, like a RAC setup for example. Your DB admins are typically the ones concerned with this component, and they should be the ones designing it. You have to know, however, which databases will be hosted in this cluster, which can be summarized as follows:
    • vCenter Server DB: as per our setup, we have two vCenter Servers; consequently we will have two vCenter databases.
    • vCenter Update Manager DB: the same holds true here; we will need two vCenter UM databases.
    • vCloud Director DB: this is the most critical part of the cloud environment, not just because we store all our cloud-related information there, but also because the DB plays a major role in the cluster heartbeating of the cells (more on this subject in a future blog post).
    • vCenter Chargeback DB: this is the DB that backs all our billing.
  • vCenter Servers: As you can see in the diagram above, we have two vCenter Servers. The first will manage the Management cluster itself, and the second will manage the resource clusters. I haven’t included the vCenter Heartbeat product in the diagram, although it would be highly recommended here. You can still leverage native VMware High Availability to protect your vCenter VMs, and also enable VM monitoring to reboot a VM in case its guest OS crashes. It all boils down to what level of protection and HA you want to ensure in your cloud.
  • vCenter Chargeback Servers: Here we have the Chargeback servers. I will cover this subject in detail in a complete blog post very soon.
  • NFS Share: You will need the NFS share to store the response files for the cells; it is also good practice to store your vCD bits there, along with the other software required by the cells, e.g. RPMs and dependencies.
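To illustrate the DNS round-robin point from the Load Balancer bullet above, here is a small Python sketch. It assumes a hypothetical alias, cells.cloud.example.com, with one A record per cell, and probes each cell’s unauthenticated /api/versions endpoint:

    # Sketch: resolve a round-robin DNS alias for the vCD cells and probe
    # each address behind it. The alias name below is hypothetical.
    import socket
    import ssl
    import urllib.request

    ALIAS = "cells.cloud.example.com"

    # DNS round robin simply returns all A records; collect the unique IPs.
    addresses = sorted({info[4][0] for info in
                        socket.getaddrinfo(ALIAS, 443, proto=socket.IPPROTO_TCP)})

    ctx = ssl._create_unverified_context()  # lab only

    for ip in addresses:
        url = f"https://{ip}/api/versions"  # unauthenticated vCD endpoint
        try:
            with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
                print(f"{ip}: HTTP {resp.status} - cell is answering")
        except OSError as exc:
            print(f"{ip}: unreachable ({exc})")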

Networking

We will use three different virtual switches here. The first will be a vNetwork Standard Switch (vSS) dedicated to the vSphere environment. I personally prefer to use the vSS for the MGMT, vMotion & FT networks to avoid dependencies on vCenter Server. I’ve talked about this point previously in my vSphere design blog posts, so you can have a look at them for more info.

The vSphere Networking

[Figure: vSphere networking on the vNetwork Standard Switch]

As you can see in the diagram above, we are using a typical design for our vNetwork Standard Switch: an active/standby configuration for all three networks, the SC/MGMT, vMotion and FT. Each network is active on a dedicated NIC and standby on another one.
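As a concrete (and hedged) illustration of that teaming layout, the pyvmomi sketch below adds port groups to an existing standard vSwitch with an explicit active/standby NIC order. The vSwitch name, VLAN IDs, and vmnic assignments are assumptions, not part of the original design:

    # Sketch: create the SC/MGMT, vMotion and FT port groups on a vSS, each
    # active on a dedicated vmnic and standby on another. Names, VLANs and
    # vmnic numbers are placeholders.
    from pyVmomi import vim

    def add_portgroup(host, pg_name, vlan, active_nic, standby_nic,
                      vswitch="vSwitch0"):
        """Add a port group with an explicit active/standby NIC order."""
        nic_order = vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=[active_nic], standbyNic=[standby_nic])
        teaming = vim.host.NetworkPolicy.NicTeamingPolicy(nicOrder=nic_order)
        spec = vim.host.PortGroup.Specification(
            name=pg_name, vlanId=vlan, vswitchName=vswitch,
            policy=vim.host.NetworkPolicy(nicTeaming=teaming))
        host.configManager.networkSystem.AddPortGroup(portgrp=spec)

    # Hypothetical layout matching the diagram above:
    # add_portgroup(esx_host, "SC-MGMT", 10, "vmnic0", "vmnic1")
    # add_portgroup(esx_host, "vMotion", 20, "vmnic1", "vmnic0")
    # add_portgroup(esx_host, "FT",      30, "vmnic2", "vmnic3")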

The Cloud Networking

We will use the vNetwork Distributed Switch (vDS) for our cloud networking. You can still use the vSS (or the Nexus 1000V) in your design here, but please note that this will limit your functionality: with the vSS and N1KV you won’t be able to use the vCloud Network Isolation or VLAN-backed network pools. This might be bearable in small to midsized private clouds where your network provisioning grows slowly and gradually, but for large enterprises or service providers (the public cloud space) the vDS is the way to go in order to support fully automated network provisioning.

[Figure: dedicated vDS for the network pools]

In the diagram above, we have one dedicated vDS for our network pools. All you need to do is create one vDS in vSphere and allocate it to your network pool inside the vCloud Director resources. Everything will be provisioned automatically by vCD later on whenever you create a new Organization Network and/or vApp Network. You will keep seeing cryptic dvs.VC### port groups added on the fly as you provision more and more of your cloud networks. You should never have to worry about them from a vSphere perspective, as all the administration will always be done from your vCD portal.
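If you are curious about those auto-generated port groups, a quick pyvmomi sketch like the one below will list everything vCD has provisioned on the network-pool vDS (the vDS name here is a made-up placeholder):

    # Sketch: list the port groups that vCD auto-provisions on the
    # network-pool vDS, including the cryptic dvs.VC### ones.
    from pyVmomi import vim

    def list_pool_portgroups(content, dvs_name="dvSwitch-NetworkPools"):
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.DistributedVirtualSwitch], True)
        try:
            dvs = next(s for s in view.view if s.name == dvs_name)
        finally:
            view.Destroy()
        for pg in dvs.portgroup:        # DistributedVirtualPortgroup objects
            print(pg.name)

    # list_pool_portgroups(si.RetrieveContent())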

[Figure: dedicated vDS for the External Networks]

In the diagram above, we have another vDS for our External Networks. These networks have to be pre-provisioned by the vSphere admin before they can be used/consumed in vCD. An example would be an Internet connection for your organizations: you basically create a new network in vSphere and hook it up to your Internet backbone. Another example would be a network hooked up to a VPN tunnel to a different branch/company/partner.

Please note that you can still use one vDS for both the Network Pools and External Networks, but I would recommend separating them for easier management.
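To make that pre-provisioning step concrete, here is a hedged pyvmomi sketch that creates a VLAN-tagged port group on the External Networks vDS; a vCD administrator would then consume it as an External Network. The port group name, port count, and VLAN ID are all assumptions:

    # Sketch: pre-provision an "Internet" port group on the External
    # Networks vDS for later consumption as a vCD External Network.
    from pyVmomi import vim

    def create_external_portgroup(dvs, name="ext-internet", vlan_id=100):
        vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            vlanId=vlan_id, inherited=False)
        port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            vlan=vlan)
        spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
            name=name, type="earlyBinding", numPorts=128,
            defaultPortConfig=port_cfg)
        return dvs.AddDVPortgroup_Task([spec])  # returns a vCenter task

    # task = create_external_portgroup(external_networks_dvs)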

Provider Virtual Datacenter

[Figure: Provider vDC design]

Our Provider vDC design will be fairly easy. I’m using two examples here to show you how you can map your vSphere clusters to vCloud Director. In the first example, the Gold vDC, we have a 1:1 mapping with the “Gold Cluster” that we created earlier in vSphere. All the resources of this cluster will be dedicated to that Gold Provider vDC. In the second example we have one cluster (containing 4 high-end blades) sliced into two resource pools. We then carve another two Provider vDCs out of these two RPs. We will call the first one the “Silver” PvDC, and the second the “Bronze” PvDC.

Although the general recommendation in server virtualization environments is to use the scale-out approach (illustrated in the Gold Cluster), I have seen some customers who prefer the scale-up approach and use high-end monster servers for it. I’m not going to go into the religious war between the two options; it all comes down to customer requirements and design choices. I just need to point out that resource pools make much more sense in the case of high-end servers, because they normally have so much compute power (CPU & memory) that you need to utilize it more efficiently.

Organization Virtual Datacenters

[Figure: Organization vDCs and their allocation models]

Now that we’ve explained the simple creation of Provider vDCs, it’s time to look at the Organization vDCs. The concept here is somewhat similar; however, we have what we call “Allocation Models” when we carve resources out of the PvDCs into Org vDCs. Let’s have a closer look.

In the “Gold Org vDC” we are using the “Reservation Pool” model to reserve/dedicate compute resources from the Gold Provider vDC. I’m showing another 1:1 mapping here, although you can obviously use the same Gold PvDC to allocate resources to another Org vDC. Think of this example as one of your BUs or departments with a very high SLA and security requirement, where you need to dedicate all the resources to it. In my case, all these resources are consumed by the IT Operations department (or “Organization”, as we will call it later).

In the second example, the Silver Org vDC, we are using a second allocation model, the “Allocation Pool”. In this model we commit only a percentage of the resources available in the Provider vDC. The same thing happens with the “Staging” Org vDC: we commit another percentage of the Silver PvDC, and so forth.

In the third and last example we are using the “Pay-As-You-Go” allocation model. Resources are committed here only when you create the vApps. This makes a lot of sense for development environments, where vApps are created and used for a period of time without much of an SLA to guarantee.
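To make the three models easier to compare side by side, here is a back-of-the-envelope Python sketch. Every number in it (PvDC capacity, percentages, vApp sizes) is made up purely for illustration:

    # Back-of-the-envelope comparison of the three allocation models.
    # All capacities and percentages below are invented for illustration.

    GOLD_PVDC = {"cpu_ghz": 240.0, "mem_gb": 1152.0}    # e.g. the 12-blade pool
    SILVER_PVDC = {"cpu_ghz": 120.0, "mem_gb": 1024.0}  # one RP of the big cluster

    # Reservation Pool: a fixed slice is reserved up front (here, all of it).
    gold_org = dict(GOLD_PVDC)

    # Allocation Pool: only a percentage of the allocated slice is guaranteed.
    def allocation_pool(pvdc, allocation_pct, guarantee_pct):
        allocated = {k: v * allocation_pct for k, v in pvdc.items()}
        guaranteed = {k: v * guarantee_pct for k, v in allocated.items()}
        return allocated, guaranteed

    silver_alloc, silver_guaranteed = allocation_pool(SILVER_PVDC, 0.50, 0.75)
    staging_alloc, staging_guaranteed = allocation_pool(SILVER_PVDC, 0.30, 0.50)

    # Pay-As-You-Go: nothing is committed until a vApp is actually deployed.
    def payg_commit(vapps):
        return {"cpu_ghz": sum(v["cpu_ghz"] for v in vapps),
                "mem_gb": sum(v["mem_gb"] for v in vapps)}

    dev_vapps = [{"cpu_ghz": 4.0, "mem_gb": 16.0},
                 {"cpu_ghz": 2.0, "mem_gb": 8.0}]

    print("Gold (Reservation Pool):", gold_org)
    print("Silver (Allocation Pool):", silver_alloc,
          "guaranteed:", silver_guaranteed)
    print("Dev (PAYG, committed now):", payg_commit(dev_vapps))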

Organizations

[Figure: the IT Ops and IT Dev Organizations]

Last but not least in our cloud environment we have the Organizations. This part is a bit hard to explain, since how enterprises run their internal IT varies quite a bit (remember, we are exploring a private cloud here). With that said, I’ll use a very simple scenario to keep things as clear and simple as possible.

We have two Organizations here: IT Ops and IT Dev. The former acts as our operations department, responsible for the various IT services, while the latter provides the internal development in the form of vApps. For various detailed networking scenarios of the vApp and Organization networks, how they all interact together, and how they map down to the vSphere networking, you can check out my diagram here.

There is one part missing from the diagram: the Catalogs. Massimo Re Ferre’ has written a beautiful article on this subject; you should check it out for a thorough understanding of Catalogs. In our case we can have two catalogs, one published by each Org. The first, from IT Ops, holds the general-purpose vApps/ISOs deployed across the whole enterprise; the second, from IT Dev, holds the internally developed applications (e.g. a SharePoint intranet/website or a custom HR application, etc.).

Conclusion

In this blog post we went through a simple design for a private cloud using VMware vCloud Director. It is impossible to show all the capabilities or design scenarios of this incredibly powerful product, but hopefully you now have an idea of the basic building blocks for starting your own deployment. If there is anything major you think I missed in this article, please drop a comment or an email and I will cover it in future posts.

Source – http://www.hypervizor.com/
