Friday, October 7, 2011

vSphere 5 Host Network Design - 12 NICs Segmented Networks & Highly Resilient

This design is highly resilient to most forms of failure, and throughput will also be very high.

I had a lot of fun building this diagram and it is probably one of my favourites. Keep in mind that once you get to this level of implementation you should probably be seriously considering a move to 10GbE infrastructure in the not too distant future.

The following design is based around a converged 1GB networking infrastructure where multiple physical switches are interconnected using high speed links. All traffic is segmented with VLAN tagging for logical network separation.

It is assumed that each host has four inbuilt NICs and 2 x quad port PCI-X cards.

All physical switch ports should be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.
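As a rough illustration of the jumbo frame settings, the following PowerCLI sketch sets the MTU on a storage switch and creates a storage VMkernel port with a matching MTU. It assumes an already connected PowerCLI session; the host name, switch name, port group and IP details are all placeholders, and it uses a standard switch, whereas the MTU on a distributed switch is set in its properties via the vSphere Client.

$vmhost = Get-VMHost "esx01.example.local"
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch2"

# Enable jumbo frames on the storage virtual switch
Set-VirtualSwitch -VirtualSwitch $vswitch -Mtu 9000 -Confirm:$false

# Create a storage VMkernel port with a matching 9000 byte MTU
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "iSCSI-1" -IP 10.0.40.11 -SubnetMask 255.255.255.0 -Mtu 9000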

Trunking needs to be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer. Management, vMotion and FT all reside on vSwitch0, multiple VM Networking VLANs reside on dvSwitch1 and all storage networks reside on dvSwitch2.
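For example, the VLAN-tagged port groups on vSwitch0 could be created with PowerCLI along the following lines. This is only a sketch; the VLAN IDs and port group names are placeholders that should match whatever the physical trunks actually carry, and the corresponding VMkernel interfaces still need to be created on each port group.

$vswitch0 = Get-VirtualSwitch -VMHost "esx01.example.local" -Name "vSwitch0"

# One VLAN-tagged port group per traffic type on the trunked uplinks
New-VirtualPortGroup -VirtualSwitch $vswitch0 -Name "Management" -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vswitch0 -Name "vMotion" -VLanId 20
New-VirtualPortGroup -VirtualSwitch $vswitch0 -Name "FT" -VLanId 30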


This configuration will allow up to 14 vSphere hosts and 4 storage arrays across 4 x 48 port stacked physical switches. The switch interconnects would need to be high-speed uplinks that do not consume standard Ethernet ports.

If an isolated storage network were used instead, it would be possible to have up to 24 hosts connected across the switches, which is a substantial increase in server density.

It is assumed that you will be using a separate management cluster for vCenter and its associated database, or that your vCenter server and database are located on physical servers.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design really calls for Ent+ licensing as the failover policies are quite complex, and manually configuring them across multiple hosts is almost guaranteed to result in misconfiguration. I would not recommend this design for organizations that do not have access to distributed virtual switches.

* 2 - This vMotion port can be used as shown in the design, or if you need a greater number of FT-protected VMs then simply change this port to an FT port. Make sure to configure load balancing policies so that FT traffic does not interfere with the Management network.

* 3 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 
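As a minimal PowerCLI sketch of note 3, assuming the array vendor supports Round Robin; the host name is a placeholder:

# Set the path selection policy to Round Robin for every disk LUN on the host
Get-VMHost "esx01.example.local" | Get-ScsiLun -LunType disk |
    Where-Object { $_.MultipathPolicy -ne "RoundRobin" } |
    Set-ScsiLun -MultipathPolicy RoundRobin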


vSphere 5 - 12 NIC SegmentedNetworks Highly Resilient Design v1.1.jpg



Comments, suggestions and feedback all welcome.

I can be contacted via email for the original visio document;
logiboy123 at gmail dot com

Thursday, October 6, 2011

vSphere 5 Host Network Design - 10 NICs Segmented Networks

The following design is based around a converged 1GB networking infrastructure where multiple physical switches are interconnected using high speed links. All traffic is segmented with VLAN tagging for logical network separation.

All physical switch ports should be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

Trunking needs to be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer. Management, vMotion and FT all reside on vSwitch0, multiple VM Networking VLANs reside on dvSwitch1 and all storage networks reside on dvSwitch2.

This configuration will allow up to 8 vSphere hosts and a single storage array across 2 x 48 port stacked physical switches as long as the switch interconnects are not using Ethernet ports.

An isolated storage network is still my preferred option in almost every environment. Isolating storage to a different physical network would allow up to 12 hosts to be connected across 2 x 48 port switches.

It is assumed that you will be using a separate management cluster for vCenter and its associated database, or that your vCenter server and database are located on physical servers.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design really calls for Ent+ licensing as the failover policies are quite complex, and manually configuring them across multiple hosts is almost guaranteed to result in misconfiguration. I would not recommend this design for organizations that do not have access to distributed virtual switches.

* 2 - This vMotion port can be used as shown in the design, or if you need a greater number of FT-protected VMs then simply change this port to an FT port. Make sure to configure load balancing policies so that FT traffic does not interfere with the Management network.

* 3 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead (see the sketch after these notes).

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 
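To fall back to the default policy mentioned in note 3, the teaming policy on a standard switch can be set with PowerCLI roughly as follows. The host and switch names are placeholders, and LoadBalanceSrcId corresponds to the default "Route based on originating virtual port ID" policy:

# Use the default "Route based on originating virtual port ID" policy on a standard switch
Get-VirtualSwitch -VMHost "esx01.example.local" -Name "vSwitch0" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId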


vSphere 5 - 10 NIC SegmentedNetworks Design v1.0.jpg


Comments, feedback and suggestions all welcome.

Wednesday, October 5, 2011

vSphere 5 Host Network Design - 10 NICs Isolated Storage & Isolated DMZ Including FT

This design should be reasonably simple to implement and maintain, has isolation for storage and the DMZ and excellent throughput for Fault Tolerance traffic.

For this design the physical switch uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.
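Since Fault Tolerance throughput is one of the goals of this design, here is a rough PowerCLI sketch of creating the vMotion and FT logging VMkernel ports. The host, switch, port group and IP values are placeholders and should be adjusted to wherever these networks land in your implementation:

$vmhost = Get-VMHost "esx01.example.local"
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"

# VMkernel port for vMotion traffic
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vMotion" -IP 10.0.20.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true

# VMkernel port for Fault Tolerance logging
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "FT" -IP 10.0.30.11 -SubnetMask 255.255.255.0 -FaultToleranceLoggingEnabled:$true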


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 


vSphere 5 - 10 NICs IsolatedStorage and IsolatedDMZ Design v1.0.jpg



Comments, feedback and suggestions welcome.

Tuesday, October 4, 2011

vSphere 5 Host Network Design - 8 NICs Isolated Storage Including FT

The following article contains two designs incorporating isolated storage networking.

For both designs the physical switch uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

In both designs trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.


Simple Design
The first design is slightly simpler to implement and maintain, and it also has greater throughput for Fault Tolerance traffic.

If your organization is comfortable with segmentation of traffic for a DMZ then you can add this functionality by simply adding the DMZ VLANs to dvSwitch1. However in this instance I would not recommend this unless you are utilizing distributed virtual switches.

If DMZ implementation requires isolated traffic then a 10 NIC design would be required, where the extra two NICs are assigned to another dvSwitch with uplinks going to the physical DMZ switches.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 


vSphere 5 - 8 NICs IsolatedStorage Simple Design v1.0.jpg



Complex Design
The second design is much more complex and requires the use of distributed virtual switching for the management network. This is not recommended if you have vCenter running on a dvSwitch that it manages; vCenter should not reside on a dvSwitch that it manages because this creates a circular dependency and is dangerous under certain circumstances, so it is not the recommended approach for any design.

If, however, you have a management cluster or vCenter runs on a physical machine, then this design may be a good solution for you depending on your requirements. Specifically, if you use very large nodes with a high consolidation ratio, this design enables you to migrate VMs off a host extremely quickly, as vMotion throughput is a primary focus of the design.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design is not for you if you do not have this level of licensing. The dvSwitch is required because the dvSwitch0 configuration is too complex to be built using standard virtual switches that must then be reproduced and maintained across multiple hosts. Management and FT traffic are configured not to use each other's primary uplink ports, which ensures true separation of traffic types that would otherwise adversely affect each other (see the sketch after these notes).



* 2 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 3 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 
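A minimal PowerCLI sketch of the active/standby idea from note 1 follows. It uses standard-switch port groups as a stand-in, with placeholder host, port group and vmnic names; on the actual dvSwitch0 the equivalent is configured in each distributed port group's teaming and failover settings:

$vmhost = Get-VMHost "esx01.example.local"
$nicA = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0
$nicB = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic4

# Management prefers one uplink and only fails over to the other
Get-VirtualPortGroup -VMHost $vmhost -Name "Management" | Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $nicA -MakeNicStandby $nicB

# FT logging uses the opposite order so the two never share a primary uplink
Get-VirtualPortGroup -VMHost $vmhost -Name "FT" | Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $nicB -MakeNicStandby $nicA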


vSphere 5 - 8 NICs IsolatedStorage Complex Design v1.0.jpg



Comments and feedback welcome.

Monday, October 3, 2011

vSphere 5 Host Network Design - 6 NICs Segmented Networks & No FT

The following diagram outlines a design that uses 6 NICs, makes use of logically isolated networking and does not include Fault Tolerance.

Uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be configured to use Jumbo Frames by specifying an MTU of 9000.

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.

The host design presumes that vmnic0 and vmnic1 are inbuilt and vmnic2 through vmnic5 are on an expansion card. The Management and Storage switches each have an uplink from both the internal NICs and the expansion card. In the event of the internal card failing, the host and VMs will continue to function; in the event of the expansion card failing, all VMs will lose network connectivity, but the host will retain management and storage connectivity.
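As a sketch of how that uplink pairing could be applied with PowerCLI; the vmnic pairings beyond vmnic0/vmnic1 and the switch names are assumptions for illustration only, and a standard switch is used as a stand-in for any dvSwitch in the design:

$vmhost = Get-VMHost "esx01.example.local"

# Management switch: one onboard uplink plus one expansion-card uplink
$mgmtNics = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0, vmnic2
Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" |
    Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic $mgmtNics -Confirm:$false

# Storage switch: likewise pairs an onboard NIC with an expansion-card NIC
$storNics = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic1, vmnic3
Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch2" |
    Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic $storNics -Confirm:$false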


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging is absolutely required in this design in order to apply a secure segmentation of storage traffic from the rest of the environment.

* 4 - VLAN 30 is reserved in case Fault Tolerance is ever required in the design, hence VLAN 40 is used for storage.

* 5 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use.

 vSphere 5 - 6 NIC SegmentedStorage and NoFT Design v1.1.jpg



I welcome any questions or feedback. If you would like the original Visio document please contact me and I will email it to you.

Sunday, October 2, 2011

vSphere 5 Host Network Design - 6 NICs Isolated Storage & No FT

The following diagram outlines a design that uses 6 NICs, makes use of physically isolated networking for storage and does not include Fault Tolerance.

Uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be configured to use Jumbo Frames by specifying an MTU of 9000.

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.

It is assumed that each host has a dual port internal NIC and quad port expansion card. So vmnic0 and vmnic1 are inbuilt and vmnic2 through vmnic5 are PCI-X.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 4 - VLAN 30 is reserved in the event of Fault Tolerance being added to the design at a later stage. Always reserve VLANs that may be used in the future as this will save you time and effort later on.

* 5 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use.


vSphere 5 - 6 NIC IsolatedStorage & NoFT Design v1.0.jpg


I welcome any questions or feedback. If you would like the original Visio document then you can contact me and I will email it to you.

Saturday, October 1, 2011

VMware vSphere 5 Host Network Designs

*** Updated 30/04/2012 ***

I have now uploaded two 10GbE designs. I might create some more 10GbE designs if they are requested. Certainly I can see myself creating some 2 x NIC 10GbE designs, but essentially they would be the same as the two already uploaded. If you have some unique design constraints please contact me on logiboy123 at gmail dot com.

The following is an anchor page for my vSphere 5 host networking diagrams. The included diagrams are based on 1GB and 10GbE network infrastructure.

Each of the following links will represent slight variations on the same type of design where the goals are:
  • Manageability - Easy to deploy, administer, maintain and upgrade.
  • Usability - Highly available, scalable and built for performance.
  • Security - Minimizes risk and is easy to secure.
  • Cost - Solutions are good enough to meet requirements and fit within budgets.

The following base designs should be considered a starting point for your particular design requirements. Feel free to use, modify, edit and copy any of the designs. If you have a particular scenario you would like created please contact me and I will see if I can help you with it.

All designs are based on iSCSI storage networking. For fibre networks, simply replace the relevant Ethernet switches with fibre switches; the segmentation designs will not apply, so use the isolated designs as your base.


1GB NIC Designs

6 NICs isolated storage & no Fault Tolerance

6 NICs segmented storage & no Fault Tolerance

8 NICs isolated storage including Fault Tolerance

10 NICs isolated storage & isolated DMZ including Fault Tolerance

10 NICs segmented networks including DMZ & Fault Tolerance

12 NICs segmented networks including DMZ & Fault Tolerance - Highly Resilient Design


10GbE NIC Designs

4 NICs 10GbE segmented networking vSS Design

4 NICs 10GbE segmented networking vDS Design

Wednesday, August 3, 2011

VMware vSeminar Series 2011 - Wellington

On Tuesday the 2nd of August I attended my first VMware full day event. The VMware vSeminar Series 2011 in Wellington, New Zealand.

There were some great speakers at the event and they covered some of the biggest improvements in vSphere 5.0 as well as other major products: View, vCloud, SRM etc.

The following are just a couple of photos from the event.

This panel represents the major hardware vendors that sponsored the event. At some points it looked like blows might be exchanged, but this was narrowly avoided. Next time it would be good if they wore sumo wrestling outfits for our amusement.
From left to right; HP, IBM, EMC, Dell and NetApp.



Samsung were displaying some pretty interesting Thin client solutions embedded in their monitors. I'm not sure if this is the best approach, but they certainly looked cool.



Mitel were showcasing their phones. What interested me the most was the concept of the software based phone in conjunction with a View client using a Thin desktop. Apparently this would give you a complete and fully featured desktop and phone solution minus all the mess.



Riverbed were showcasing their acceleration products for WAN connections. Getting the theme yet? I'm very interested in pursuing a VMware vCenter and View based career. This will capitalise on my imaging, builds and application packaging experience. Besides which, I actually think thin computing is really cool. The driver should always be that it makes the IT shop cheaper, easier to use and easier to manage. If a solution isn't all three of those, then why bother implementing it? Well, security of course; hello banks and other high-security environments!



This is Nick Evans from Gen-i. Nick is a bit of a guru and just recently passed his VCAP-DCA. If you are a Gen-i customer, you should just say "I want Nick to do all the work thanks".



And finally here is Thomas Curtis. A guru from Fujitsu. If you are a Fuji customer just ask for Thomas!


Well that's my lame coverage of the vSeminar complete. I'm sure I'll get better at this stuff going forward. Sorry to those vendors that I forgot to take photos of.

Tuesday, July 19, 2011

Find VM snapshots & delete VM snapshots using PowerCLI

The following will show all VMs in the connected vCenter environment that have snapshots and will output the VM name, snapshot name and snapshot creation date.

Get-VM | Get-Snapshot | Select VM, Name, Created


The following will remove all snapshots associated with the specified VM.

Get-VM "ServerName" | Get-Snapshot | Remove-Snapshot -confirm:$false -RunAsync
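A handy variation is to report only snapshots older than a given age; the 14-day threshold here is arbitrary, so adjust it to suit.

Get-VM | Get-Snapshot | Where-Object { $_.Created -lt (Get-Date).AddDays(-14) } | Select VM, Name, Created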


This is just for my own personal reference really when I want to quickly check the environment I'm working in.

For a fully featured and robust check of the environment, use virtu-al.net's vCheck script. I recommend checking it out as it is pretty amazing:

http://www.virtu-al.net/featured-scripts/vcheck/

Thursday, July 14, 2011

Configure VMware vMA as an ESXi Syslog Server


VMware vMA as a Syslog Server 

The following is a very detailed and specific walkthrough on setting up a vMA Syslog server for ESXi hosts. This walkthrough draws on several excellent blogs, which are listed in the weblinks section at the end of this document.


Version Information

This guide is relevant for vSphere 4.1 and vMA 4.1. Whilst other versions may be similar there will be minor discrepancies in the implementation not covered here.


Log Collection vs Log Receiving

In this implementation the vMA will be configured to collect the logs from ESXi hosts as opposed to receiving them. Essentially this means all work and configuration is done on the vMA and no changes are required on the hosts. ESXi will still have logs stored locally, but the vMA vilogger utility will copy them to its local store.


Source Media

The installation media required can be found at:
Gnome Partition Editor - http://gparted.sourceforge.net/
I'm using GParted mostly for those users that are less familiar with the command line.


Document Outline

  • Deploy a vMA server
  • Configure Networking
  • Configure Time
  • Add CDROM & Storage
  • Configure & Present Storage
  • Configure vilogger
  • Add Target Servers

Deploy a vMA Server

1.    Use a vSphere Client to connect to a system that is running ESX/ESXi 4.1, ESX/ESXi 4.0, ESX/ESXi 3.5 Update 2 or later, or vCenter Server 4.0.

2.    If connected to a vCenter Server system, select the host to which you want to deploy vMA in the inventory pane.

3.    Select File > Deploy OVF Template.

4.    The Deploy OVF Template wizard appears. Select Deploy from file if you have already downloaded and unzipped the vMA virtual appliance package.

5.    Click Browse, select the OVF, and click Next.

6.    Click Next when the download details are displayed.

7.    Accept the license agreement. (Optional) Specify a name for the virtual machine.

8.    Select a location for the virtual machine when prompted. If you are connected to a vCenter Server system, you can select a folder.

9.    If connected to a vCenter Server system, select the resource pool for the virtual machine. By default, the top-level root resource pool is selected.

10. If prompted, select the datastore to store the virtual machine on and click Next.

11. Select the network mapping and click Next.

12. Review the information and click Finish.


Configure Networking

1.    Power on the newly created server and open a console.

2.    Specify the IP address, default gateway and DNS information.

3.    Specify a hostname for the vMA.

4.    Specify a password for the vi-admin account. This account has root access.


Configure Time

ESXi uses UTC for internal time stamping. In order to avoid timestamp issues the vMA should be set to UTC for time keeping.

To configure UTC, the following steps should be performed:
1.    Remove the localtime file:
sudo rm /etc/localtime

2.    Create a symbolic link to the UTC timezone:
sudo ln -s /usr/share/zoneinfo/UTC /etc/localtime

3.    Edit the NTP configuration file:
sudo nano /etc/ntp.conf
Find the section # Use public servers from the pool.ntp.org project.
Replace the current entries with your preferred NTP servers.

4.    Configure the NTP daemon to start on reboot:
sudo /sbin/chkconfig ntpd on

5.    Restart the NTP daemon:
sudo /sbin/service ntpd restart

6.    Confirm the NTP server connections are up:
sudo ntpq -np


Add CDROM & Storage
A CDROM is required to use the Gnome Partition Editor and the extra storage will be configured to hold the logs for the vMA.

Edit the vMA server with the following settings.
1.    Add a CD-ROM drive.


2.    Add a second hard disk and size it appropriately for your server fleet. A very rough estimate of the amount of log information captured is 500MB per host per day, so, for example, ten hosts would generate roughly 5GB of logs per day.


3.    In the Boot Options menu specify that the VM should boot into the BIOS and configure the CDROM as the primary boot device.


Configure & Present Storage

The following steps will configure the extra storage presented to the vMA as an ext3 partition for use as the Syslog data store.
1.    Start the server and attach the GParted-live ISO to the CDROM drive. Select the default settings option when prompted.

         
2.    Select the option Don’t touch keymap and click OK.


3.    Select the option 33 and push Enter.


4.    Select the option 0 and push Enter or choose your language if not English.


5.    The following screen shows the GParted utility interface.


6.    Select the /dev/sdb hard disk to edit the configuration.


7.    Right click on the unallocated space and select New to create a new partition.


8.    Assign the partition the Label /syslog and select the File system ext3.


9.    Review the configuration and click Apply.


10. Click the Apply button to confirm configuration changes.


11. Once the configuration changes have been applied, click Close and then reboot the server.

Now the newly partitioned disk will need to be assigned a mount point within the OS. The following steps are required to achieve this.

12. Log into the vMA using the vi-admin account.

13. Edit the /etc/fstab file:
sudo nano /etc/fstab

14. Add the following line to the bottom of the file:
/dev/sdb1       /syslog        ext3      defaults        1 2

Use a single tab as a separator for each entry; you will notice that the words will be out of alignment with the rest of the file, but this is not a problem.
For details on the fstab file go to http://en.wikipedia.org/wiki/Fstab which will explain what the file is, how it works and the specifics of each line entry.

15. Press Ctrl+X and then Y to save and close the file.

16. Make a new directory to contain the syslog data:
sudo mkdir /syslog

17. Change the owner of the new directory: 
sudo chown vi-admin:root /syslog

18. Mount everything in /etc/fstab with:
sudo mount -a

There should be no mount errors, and executing sudo df -h should list /dev/sdb1 as being mounted at /syslog


Configure vilogger 

Log into the vMA using the vi-admin account and configure the vilogger utility:

1.    Edit vilogger’s config file:
sudo nano /etc/vmware/vMA/vMA.conf

2.    Change the location entry for <vMALogCollector> to:
<location>/syslog</location>

3.    Restart the vilogger daemon:
sudo service vmware-vilogd restart


Add Target Servers

Servers must be added to the vMA before logs can be collected from them. Follow these steps to add a server, verify it is added and then enable logging. Either FQDNs or IP addresses can be used for the hosts.

1.    The following command will add the host to the vMA:
sudo vifp addserver <HostName.FQDN.com>

Enter the password for root on the host to continue.

2.    Confirm the host has been added correctly with:
 vifp listservers

3.    Configure vilogger to collect the host logs:
vilogger enable --server <HostName.FQDN.com>  --numrotation 20 --maxfilesize 10 --collectionperiod 10

If no --server entry is used then all hosts being managed by the vMA will be added or updated to use the specified settings. So if you need to add a lot of hosts, complete step 1 for each server and then perform steps 2 and 3 once.

4.    Confirm logs are being collected by listing the files in the log directory:
dir /syslog/<HostName.FQDN.com> 

5.    Real time logging can be done by using the following command:
tail -f /syslog/<HostName.FQDN.com>/vpxa.log


vilogger options
The following are the options available when adding a host for log collection.

--server
This specifies the hostname or IP address of the vMA target. If this option is omitted then all vMA targets are added.

--logname
This specifies the log files to capture. The default value is to enable capture of all files.

--collectionperiod
This specifies how often log files are collected. The default value is 10 seconds. Entries between 10 and 3600 are valid.

--maxfilesize
This option sets the maximum size for log files before rollover. The default value is 5MB. Entries between 1 and 1024 are valid.

--numrotation
This option sets the number of log files to keep before the oldest is deleted. The default value is 5. Entries between 1 and 1024 are valid.


Weblinks