Friday, October 7, 2011

vSphere 5 Host Network Design - 12 NICs Segmented Networks & Highly Resilient

This design is highly resilient to most forms of failure. Throughput will also be very high.

I had a lot of fun building this diagram and it is probably one of my favourites. Keep in mind that once you get to this level of implementation, you should probably be seriously considering a move to 10GbE infrastructure in the not too distant future.

The following design is based around a converged 1GbE networking infrastructure where multiple physical switches are interconnected using high-speed links. All traffic is segmented with VLAN tagging for logical network separation.

It is assumed that each host has four inbuilt NICs and two quad-port PCI-X cards.

All physical switch ports should be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.
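
For reference, here is a minimal sketch of the host-side commands involved. It assumes a standard vSwitch named vSwitch2 with storage VMkernel ports vmk1 and vmk2, which are hypothetical names; in this particular design the storage switch is distributed, so its MTU would actually be set on dvSwitch2 in vCenter, but the per-host VMkernel interface MTU still applies.

```python
# Minimal sketch: build the per-host esxcli commands that enable jumbo frames.
# Names below are assumptions, not taken from the diagram -- substitute your own.
STORAGE_VSWITCH = "vSwitch2"      # hypothetical standard vSwitch for storage
STORAGE_VMKS = ["vmk1", "vmk2"]   # hypothetical storage VMkernel interfaces
MTU = 9000

def jumbo_frame_commands(vswitch, vmks, mtu=MTU):
    """Return the esxcli commands that set the MTU on the vSwitch and its VMkernel ports."""
    cmds = ["esxcli network vswitch standard set --vswitch-name={v} --mtu={m}".format(v=vswitch, m=mtu)]
    cmds += ["esxcli network ip interface set --interface-name={i} --mtu={m}".format(i=vmk, m=mtu)
             for vmk in vmks]
    return cmds

for cmd in jumbo_frame_commands(STORAGE_VSWITCH, STORAGE_VMKS):
    print(cmd)   # run these in the ESXi shell (or via SSH) on every host
```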

Trunking needs to be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer. Management, vMotion and FT all reside on vSwitch0, multiple VM Networking VLANs reside on dvSwitch1 and all storage networks reside on dvSwitch2.


This configuration will allow up to 14 vSphere hosts and 4 storage arrays across 4 x 48 port stacked physical switches (14 hosts x 12 NICs = 168 ports, leaving 24 of the 192 ports for the storage arrays). The switch interconnects would need to be dedicated high-speed uplinks rather than regular Ethernet ports.

If an isolated storage network were used instead, it would be possible to connect up to 24 hosts across the switches, which is a substantial increase in server density.

It is assumed that you will be using a separate management cluster for vCenter and associated database or that your vCenter server and database are located on physical servers.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design really calls for Ent+ licensing, as the failover policies are quite complex and manually reproducing them across multiple hosts almost guarantees a misconfiguration. I would not recommend this design for organizations that do not have access to distributed virtual switches.

* 2 - This vMotion port can be used as shown in the design, or, if you need a greater number of FT-protected VMs, simply change it to an FT port. Make sure to configure load balancing and failover policies so that FT traffic does not interfere with the Management network; a sketch of the failover commands follows these notes.

* 3 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 
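
As referenced in note 2, here is a rough sketch of the failover override that keeps Management and vMotion/FT on different primary uplinks. Portgroup and vmnic names are assumptions, not values from the diagram, and the per-portgroup esxcli failover namespace may not be present on the earliest ESXi 5.0 builds; in that case set the same active/standby override in the vSphere Client or on the dvSwitch portgroups.

```python
# Minimal sketch: per-portgroup failover order on vSwitch0 so that Management
# and vMotion/FT prefer different physical uplinks. Portgroup and vmnic names
# are assumptions -- replace them with the ones from your own build.
FAILOVER_ORDER = {
    "Management Network": {"active": "vmnic0", "standby": "vmnic4"},
    "vMotion":            {"active": "vmnic4", "standby": "vmnic0"},
}

for pg, uplinks in FAILOVER_ORDER.items():
    print('esxcli network vswitch standard portgroup policy failover set '
          '--portgroup-name="{p}" --active-uplinks={a} --standby-uplinks={s}'.format(
              p=pg, a=uplinks["active"], s=uplinks["standby"]))
```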


vSphere 5 - 12 NIC SegmentedNetworks Highly Resilient Design v1.1.jpg



Comments, suggestions and feedback all welcome.

I can be contacted via email for the original Visio document:
logiboy123 at gmail dot com

Thursday, October 6, 2011

vSphere 5 Host Network Design - 10 NICs Segmented Networks

The following design is based around a converged 1GbE networking infrastructure where multiple physical switches are interconnected using high-speed links. All traffic is segmented with VLAN tagging for logical network separation.

All physical switch ports should be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

Trunking needs to be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer. Management, vMotion and FT all reside on vSwitch0, multiple VM Networking VLANs reside on dvSwitch1 and all storage networks reside on dvSwitch2.

This configuration will allow up to 8 vSphere hosts and a single storage array across 2 x 48 port stacked physical switches (8 hosts x 10 NICs = 80 ports, leaving 16 of the 96 ports for the storage array), as long as the switch interconnects are not using Ethernet ports.

An isolated storage network is still my preferred option in almost every environment. Isolating storage to a different physical network would allow up to 12 hosts to be connected across 2 x 48 port switches.

It is assumed that you will be using a separate management cluster for vCenter and associated database or that your vCenter server and database are located on physical servers.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design really calls for Ent+ licensing, as the failover policies are quite complex and manually reproducing them across multiple hosts almost guarantees a misconfiguration. I would not recommend this design for organizations that do not have access to distributed virtual switches.

* 2 - This vMotion port can be used as shown in the design, or, if you need a greater number of FT-protected VMs, simply change it to an FT port. Make sure to configure load balancing policies so that FT traffic does not interfere with the Management network.

* 3 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 
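
If you do settle on Round Robin after checking with the vendor, here is a minimal sketch of the esxcli commands for existing devices. The device identifier and SATP shown are placeholders; list your own devices with "esxcli storage nmp device list" first.

```python
# Minimal sketch: set Round Robin on existing devices with esxcli. The device
# ID below is a placeholder -- pull the real naa identifiers from
# "esxcli storage nmp device list" and confirm the policy with your array
# vendor before changing anything.
DEVICES = ["naa.xxxxxxxxxxxxxxxx"]   # hypothetical device identifiers

for device in DEVICES:
    print("esxcli storage nmp device set --device={d} --psp=VMW_PSP_RR".format(d=device))

# Optionally make Round Robin the default for the array's SATP (shown here for
# the generic active/active SATP) so new LUNs pick it up automatically:
print("esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR")
```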


vSphere 5 - 10 NIC SegmentedNetworks Design v1.0.jpg


Comments, feedback and suggestions all welcome.

Wednesday, October 5, 2011

vSphere 5 Host Network Design - 10 NICs Isolated Storage & Isolated DMZ Including FT

This design should be reasonably simple to implement and maintain, provides isolation for storage and the DMZ, and offers excellent throughput for Fault Tolerance traffic.

For this design the physical switch uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated, but it is still a good idea: it keeps the networking configuration consistent and is not very hard to implement (see the sketch after these notes).

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 
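
As mentioned in note 3, tagging the isolated storage network is cheap to do. A minimal sketch follows, assuming the standard vSwitch fallback from note 1 and illustrative portgroup names and VLAN ID; on the dvSwitch the equivalent VLAN is set on each distributed portgroup in vCenter.

```python
# Minimal sketch: VLAN-tagged storage portgroups on a standard vSwitch. The
# switch, portgroup names and VLAN ID are illustrative assumptions, not values
# from the diagram.
VSWITCH = "vSwitch2"
PORTGROUPS = {"iSCSI-A": 40, "iSCSI-B": 40}   # portgroup name -> VLAN ID

for pg, vlan in PORTGROUPS.items():
    print("esxcli network vswitch standard portgroup add --portgroup-name={p} --vswitch-name={v}".format(p=pg, v=VSWITCH))
    print("esxcli network vswitch standard portgroup set --portgroup-name={p} --vlan-id={i}".format(p=pg, i=vlan))
```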


vSphere 5 - 10 NICs IsolatedStorage and IsolatedDMZ Design v1.0.jpg



Comments, feedback and suggestions welcome.

Tuesday, October 4, 2011

vSphere 5 Host Network Design - 8 NICs Isolated Storage Including FT

The following article contains two designs incorporating isolated storage networking.

For both designs the physical switch uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

In both designs trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.


Simple Design
The first design is slightly simpler to implement and maintain, and it also provides greater throughput for Fault Tolerance traffic.

If your organization is comfortable with segmentation of traffic for a DMZ then you can add this functionality by simply adding the DMZ VLANs to dvSwitch1. However, in this instance I would not recommend it unless you are utilizing distributed virtual switches.

If DMZ implementation requires isolated traffic then a 10 NIC design would be required, where the extra two NICs are assigned to another dvSwitch with uplinks going to the physical DMZ switches.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 
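
One related step worth calling out for note 4: with software iSCSI, Round Robin only has multiple paths to work with if the storage VMkernel ports are bound to the iSCSI adapter. A minimal sketch, with the adapter and vmk names as assumptions:

```python
# Minimal sketch: bind the storage VMkernel ports to the software iSCSI
# adapter so Round Robin has more than one active path per target. The adapter
# and vmk names are assumptions -- check "esxcli iscsi adapter list" and your
# own vmk numbering first.
ISCSI_ADAPTER = "vmhba33"          # hypothetical software iSCSI adapter
STORAGE_VMKS = ["vmk1", "vmk2"]    # hypothetical storage VMkernel ports

for vmk in STORAGE_VMKS:
    print("esxcli iscsi networkportal add --adapter={a} --nic={n}".format(a=ISCSI_ADAPTER, n=vmk))

# Verify the bindings afterwards:
print("esxcli iscsi networkportal list --adapter={a}".format(a=ISCSI_ADAPTER))
```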


vSphere 5 - 8 NICs IsolatedStorage Simple Design v1.0.jpg



Complex Design
The second design is much more complex and requires the use of distributed virtual switching for the management network. This is not recommended if you have vCenter running on a dvSwitch that it manages: doing so creates a circular dependency and is dangerous under certain circumstances, so it is not the recommended approach for any design.

If, however, you have a management cluster or vCenter running on a physical machine, then this design is possibly a good solution for you depending on your requirements. Specifically, if you use very large nodes with a high consolidation ratio, this design enables you to migrate VMs off a host extremely quickly, as throughput for vMotion events is a primary focus of this design.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design is not for you if you do not have this level of licensing. The dvSwitch is required because the dvSwitch0 configuration is too complex to be built using standard virtual switches that must then be reproduced and maintained across multiple hosts. Management and FT traffic are configured not to use each other's primary uplink ports. This ensures true separation of traffic types that would otherwise adversely affect each other.



* 2 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 3 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 


vSphere 5 - 8 NICs IsolatedStorage Complex Design v1.0.jpg



Comments and feedback welcome.

Monday, October 3, 2011

vSphere 5 Host Network Design - 6 NICs Segmented Networks & No FT

The following diagram outlines a design that uses 6 NICs, makes use of logically isolated networking and does not include Fault Tolerance.

Uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be configured to use Jumbo Frames by specifying an MTU of 9000.
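
A quick way to confirm that jumbo frames work end to end is a non-fragmenting vmkping from the ESXi shell; 8972 bytes of payload plus 28 bytes of ICMP/IP headers adds up to the 9000-byte MTU, so the ping only succeeds if every hop passes jumbo frames. The target address below is a placeholder for your array's iSCSI portal.

```python
# Minimal sketch: validate jumbo frames from the ESXi shell with a
# non-fragmenting ping (-d) of a payload sized to the 9000-byte MTU.
STORAGE_TARGETS = ["10.0.40.10"]   # hypothetical array portal addresses

for target in STORAGE_TARGETS:
    print("vmkping -d -s 8972 {ip}".format(ip=target))
```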

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.

The host design presumes that vmnic0 and vmnic1 are inbuilt and vmnic2 through vmnic5 are on an expansion card. The Management and Storage switches have an uplink from both the internal and the expansion-slot card. If the internal card fails, the host and VMs will continue to function; if the expansion card fails, all VMs will lose network connectivity, but the host will retain management and storage connectivity.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging is absolutely required in this design in order to apply a secure segmentation of storage traffic from the rest of the environment.

* 4 - VLAN 30 is reserved in case Fault Tolerance is ever required in the design, hence VLAN 40 is used for storage.

* 5 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use.

 vSphere 5 - 6 NIC SegmentedStorage and NoFT Design v1.1.jpg



I welcome any questions or feedback. If you would like the original Visio document please contact me and I will email it to you.

Sunday, October 2, 2011

vSphere 5 Host Network Design - 6 NICs Isolated Storage & No FT

The following diagram outlines a design that uses 6 NICs, makes use of physically isolated networking for storage and does not include Fault Tolerance.

Uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be configured to use Jumbo Frames by specifying an MTU of 9000.

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.

It is assumed that each host has a dual-port internal NIC and a quad-port expansion card, so vmnic0 and vmnic1 are inbuilt and vmnic2 through vmnic5 are on the PCI-X card.
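
Before patching the host to match the diagram it is worth confirming how ESXi has actually enumerated the NICs, since inbuilt and expansion ports do not always come up in the order you expect. A minimal sketch of that check, mirroring only the inbuilt/expansion split described above:

```python
# Minimal sketch: confirm NIC enumeration on the host matches the assumption
# above (vmnic0/vmnic1 inbuilt, vmnic2-vmnic5 on the PCI-X card) before
# cabling. Compare this expectation with "esxcli network nic list".
EXPECTED = {
    "vmnic0": "inbuilt",
    "vmnic1": "inbuilt",
    "vmnic2": "PCI-X expansion",
    "vmnic3": "PCI-X expansion",
    "vmnic4": "PCI-X expansion",
    "vmnic5": "PCI-X expansion",
}

print("Run 'esxcli network nic list' on the host and check against:")
for nic in sorted(EXPECTED):
    print("  {0}: {1}".format(nic, EXPECTED[nic]))
```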

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 4 - VLAN 30 is reserved in case Fault Tolerance is added to the design at a later stage. Always reserve VLANs that may be used in the future, as this will save you time and effort later on.

* 5 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use.


vSphere 5 - 6 NIC IsolatedStorage & NoFT Design v1.0.jpg


I welcome any questions or feedback. If you would like the original Visio document then you can contact me and I will email it to you.

Saturday, October 1, 2011

VMware vSphere 5 Host Network Designs

*** Updated 30/04/2012 ***

I have now uploaded two 10GbE designs. I might create some more 10GbE designs if they are requested. Certainly I can see myself creating some 2 x NIC 10GbE designs, but essentially they would be the same as the two already uploaded. If you have some unique design constraints please contact me on logiboy123 at gmail dot com.

The following is an anchor page for my vSphere 5 host networking diagrams. The included diagrams are based on 1GbE and 10GbE network infrastructure.

Each of the following links will represent slight variations on the same type of design where the goals are:
  • Manageability - Easy to deploy, administer, maintain and upgrade.
  • Usability - Highly available, scalable and built for performance.
  • Security - Minimizes risk and is easy to secure.
  • Cost - Solutions are good enough to meet requirements and fit within budgets.

The following base designs should be considered a starting point for your particular design requirements. Feel free to use, modify, edit and copy any of the designs. If you have a particular scenario you would like created please contact me and I will see if I can help you with it.

All designs are based on iSCSI storage networking. For fibre networks, simply replace the Ethernet storage switches with the relevant fibre switches; the segmentation designs will not apply, so use the isolated designs as your base.


1GbE NIC Designs

6 NICs isolated storage & no Fault Tolerance

6 NICs segmented storage & no Fault Tolerance

8 NICs isolated storage including Fault Tolerance

10 NICs isolated storage & isolated DMZ including Fault Tolerance

10 NICs segmented networks including DMZ & Fault Tolerance

12 NICs segmented networks including DMZ & Fault Tolerance - Highly Resilient Design


10GbE NIC Designs

4 NICs 10GbE segmented networking vSS Design

4 NICs 10GbE segmented networking vDS Design