NIC teaming provides network redundancy and load balancing. A team is a group of multiple physical NICs presented as a single logical NIC. It can be configured on both a standard vSwitch and a Distributed vSwitch (DvSwitch), and it requires a minimum of two adapters.
Step by Step NIC teaming configuration:
Login to vSphere Web Client
Select the required Host and Cluster
Select the vSwitch on which you need NIC teaming
ESXi Host > Manage > Networking > Virtual Switch
Click on Edit
Select teaming and failover
Choose the appropriate load balancing policy
Choose the network failure detection method (link status only or beacon probing)
Select Yes or No for the third and fourth options (Notify switches and Failback)
Choose the appropriate failover order as per your requirement.
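The settings chosen in these steps can be sanity-checked programmatically before they are applied. Below is a minimal Python sketch that models a teaming policy and enforces the two-adapter minimum; the class and the policy-name strings are illustrative stand-ins for the options in the dialog, not the vSphere API:

```python
# Illustrative model of a vSwitch NIC teaming policy (not the vSphere API).
from dataclasses import dataclass, field

VALID_LB_POLICIES = {
    "route_based_on_originating_port_id",
    "route_based_on_ip_hash",
    "route_based_on_source_mac_hash",
    "use_explicit_failover_order",
}

@dataclass
class TeamingPolicy:
    load_balancing: str
    failure_detection: str          # "link_status_only" or "beacon_probing"
    notify_switches: bool
    failback: bool
    active_nics: list = field(default_factory=list)
    standby_nics: list = field(default_factory=list)

    def validate(self):
        # NIC teaming requires at least two adapters in total.
        if len(self.active_nics) + len(self.standby_nics) < 2:
            raise ValueError("teaming requires a minimum of two adapters")
        if self.load_balancing not in VALID_LB_POLICIES:
            raise ValueError(f"unknown load-balancing policy: {self.load_balancing}")
        if self.failure_detection not in ("link_status_only", "beacon_probing"):
            raise ValueError(f"unknown failure detection: {self.failure_detection}")
        return True

policy = TeamingPolicy(
    load_balancing="route_based_on_originating_port_id",
    failure_detection="link_status_only",
    notify_switches=True,
    failback=True,
    active_nics=["vmnic0"],
    standby_nics=["vmnic1"],
)
print(policy.validate())  # True
```

The failover order from the last step maps to the active/standby NIC lists here.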
vSphere HA is a technique to protect virtual machines against hardware failure of the host machine. Virtual machines are restarted on another host as soon as the ESXi host becomes unresponsive, is isolated, or suffers a hardware failure. It is a reactive mechanism that provides virtual machine availability with only a short interruption of the services running on the VM.
The HA cluster in vSphere 6.5 also becomes proactive, with the help of web client plug-ins from OEM vendors (Cisco, Dell, and HP). The plug-in monitors the network, power, memory, local storage, and fans of the server. Failures can be moderate or severe, and the classification is defined by the vendors, not by VMware.
Below are the proactive responses of HA:
· Quarantine mode for all failures
· Quarantine mode for moderate and Maintenance mode for severe failures
· Maintenance mode for all failures
Note: Once the root cause is resolved, the host is automatically taken out of Quarantine Mode; this is not the case with Maintenance Mode, which must be exited manually.
The vendor plug-in recognizes the partial failure and makes an API call to vCenter; based on that, vCenter triggers one of the configured responses.
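The response logic above can be sketched as a small function: given the configured Proactive HA response and the vendor-reported severity, it returns the remediation. This is an illustrative model (the configuration strings are made up for the example), not VMware code:

```python
# Illustrative mapping of Proactive HA responses (hypothetical helper, not a VMware API).
def proactive_ha_response(config: str, severity: str) -> str:
    """Return the remediation for a partial host failure reported by the
    vendor plug-in. `severity` ('moderate' or 'severe') is classified by
    the OEM vendor, not by VMware."""
    if severity not in ("moderate", "severe"):
        raise ValueError("vendors report failures as 'moderate' or 'severe'")
    if config == "quarantine_all":
        return "quarantine"      # Quarantine mode for all failures
    if config == "mixed":
        # Quarantine for moderate, Maintenance for severe failures.
        return "quarantine" if severity == "moderate" else "maintenance"
    if config == "maintenance_all":
        return "maintenance"     # Maintenance mode for all failures
    raise ValueError(f"unknown response configuration: {config}")

print(proactive_ha_response("mixed", "severe"))  # maintenance
```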
Enhanced Linked Mode with External PSC and No High Availability.
Embedded mode is a good choice for a small infrastructure, but for a bigger environment we have to go with Enhanced Linked Mode. There are a few deployment scenarios we need to know.
With Enhanced Linked Mode we can connect multiple vCenter Servers together using one or more PSCs. This makes it possible to view and search across all the linked vCenter Servers, and to replicate roles, permissions, licenses, policies, and tags between them.
The external PSC must be installed before the vCenter Server or Appliance is deployed. During installation we need to be clear whether we are creating a new vCenter Single Sign-On domain or joining an existing one. If a PSC with vCenter SSO is already installed, we can select to join the existing vCenter Single Sign-On domain; after joining, infrastructure data is replicated between the existing PSC and the new PSC.
This results in fewer resources consumed by combining the services, and more vCenter instances are allowed with centralized management. However, connectivity loss between the PSC and vCenter will cause a major outage of hosted services. Likewise, if the PSC suffers an outage and has no HA, every vCenter Server connected to that PSC will face the outage.
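The linked-mode behavior described above can be illustrated with a toy model: vCenters joined through the same SSO domain see each other, and data replicated in the domain (roles, tags, licenses) becomes visible from every linked vCenter. The classes below are purely illustrative, not VMware software:

```python
# Toy model of Enhanced Linked Mode: one SSO domain, multiple linked
# vCenters, replicated data visible everywhere. Illustrative only.
class SSODomain:
    def __init__(self, name):
        self.name = name
        self.replicated = {"roles": set(), "tags": set(), "licenses": set()}
        self.vcenters = []

    def join_vcenter(self, vc_name):
        # Joining the existing SSO domain links the new vCenter.
        self.vcenters.append(vc_name)

    def add_role(self, role):
        # Data added anywhere in the domain replicates to all linked vCenters.
        self.replicated["roles"].add(role)

    def visible_from(self, vc_name):
        # Any linked vCenter can view/search all others plus replicated data.
        assert vc_name in self.vcenters
        return sorted(self.vcenters), self.replicated

domain = SSODomain("vsphere.local")
domain.join_vcenter("vcsa-01")
domain.join_vcenter("vcsa-02")
domain.add_role("backup-admin")
linked, shared = domain.visible_from("vcsa-02")
print(linked)  # ['vcsa-01', 'vcsa-02']
```

The model also makes the stated risk obvious: a single `SSODomain` object with no HA is a single point of failure for every linked vCenter.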
I never received a confirmation about my registration for vForum Mumbai. In fact, I got the mail below from the vForum team, which is basically as good as having your seat cancelled.
And you know what, this was just a day before vForum. But after learning about on-site registration for the event from a VMware vExpert friend, I managed to reach the venue on the second day.
The sessions started with a bag of goodies, which I unfortunately missed.
The highlight of the event was the parallel sessions for different sets of VMware technologies, covering topics from the basics to deep dives.
There were two tech sessions (Tech Session 1 and Tech Session 2), one Educational Services session, and one Hands-on Labs (HOL), all running in parallel.
I spent my time in the Educational Services sessions; it was indeed a great deep dive, starting with clearing up the basic concepts and then walking through the material more like a day-long workshop. It covered topics such as NSX, NSX-T, vRA, vRO, compute virtualization, OpenStack integration, and VMware Cloud on AWS.
Attendees of the Educational Services sessions will receive a participation certificate from VMware (as informed by the VMware staff).
All in all, it was a great day to learn and explore new features and technologies related to VMware.
My advice to VMware:
· Direct/separate registration for vExperts, like we have for VMworld.
· More stalls for technical interaction, discussion, quizzes, etc.
VIC (vSphere Integrated Containers) is a feature in vSphere that enables VMware administrators to create “container hosts” that are integrated with vSphere. VIC uses existing vSphere constructs, allowing you to build on your existing vSphere investment.
Requirements that you must verify before proceeding with the deployment:
Make sure your vCenter Server appliances and ESXi hosts have their DNS and NTP configured! You do not want any skewed clocks between vCenter and the VIC appliance.
vCenter Server 6.0 or 6.5, managing a cluster of ESXi 6.0 or 6.5 hosts, with VMware vSphere Distributed Resource Scheduler™ (DRS) enabled.
vSphere Integrated Containers Engine requires a vSphere Enterprise Plus license.
All of the ESXi hosts in a cluster require an appropriate license. Deployment fails if your environment includes one or more ESXi hosts that have inadequate licenses.
Have access to shared storage to allow VCHs to use more than one host in the cluster.
Deploy appliance to a vCenter Server that meets the following (minimum) system requirements.
ESXi 6.0 or 6.5 hosts
2 vCPU / 8 GB of memory
80 GB of free space in the target datastore
Firewall: allow outbound TCP traffic to port 2377 on the endpoint VM, for use by the interactive container shell, and allow inbound HTTPS/TCP traffic on port 443, for uploading to and downloading from datastores.
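These requirements can be rolled into a quick pre-flight check before deployment. The sketch below simply encodes the checklist above; the environment dictionary and its field names are hypothetical, chosen for the example:

```python
# Hedged sketch: a pre-deployment checklist for VIC, encoding the
# requirements listed above. Field names are illustrative.
def check_vic_prereqs(env: dict) -> list:
    """Return a list of failed checks (empty means ready to deploy)."""
    problems = []
    if env.get("vcenter_version") not in ("6.0", "6.5"):
        problems.append("vCenter must be 6.0 or 6.5")
    if not env.get("drs_enabled"):
        problems.append("DRS must be enabled on the cluster")
    if env.get("license") != "Enterprise Plus":
        problems.append("VIC engine requires vSphere Enterprise Plus")
    if env.get("vcpus", 0) < 2 or env.get("memory_gb", 0) < 8:
        problems.append("appliance needs at least 2 vCPU / 8 GB memory")
    if env.get("free_datastore_gb", 0) < 80:
        problems.append("target datastore needs 80 GB free")
    if not env.get("shared_storage"):
        problems.append("shared storage needed for multi-host VCHs")
    if 2377 not in env.get("outbound_tcp_ports", []) or 443 not in env.get("inbound_tcp_ports", []):
        problems.append("open TCP 2377 outbound and 443 inbound")
    return problems

env = {"vcenter_version": "6.5", "drs_enabled": True, "license": "Enterprise Plus",
       "vcpus": 2, "memory_gb": 8, "free_datastore_gb": 100, "shared_storage": True,
       "outbound_tcp_ports": [2377], "inbound_tcp_ports": [443]}
print(check_vic_prereqs(env))  # []
```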
Deploy VIC appliance VM
Integrate VIC with vCenter server
Deploy the VCH vApp. (You will need a bridge network and a public network. Think of the bridge network as a separate, isolated network with its own VLAN, from which the containers take their IP addresses. If you only want to use the local registry for images, you may use the management network as the public network.)
Use the Harbor registry to push/pull the images
Deploy container VMs in the VCH
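For the bridge/public network advice in the VCH step, a small validation helper can catch the most common mistake: reusing the same port group or VLAN for both networks. This is an illustrative sketch with assumed field names, not VIC tooling:

```python
# Illustrative check for the VCH networking advice above: the bridge
# network must be a separate, isolated port group on its own VLAN,
# distinct from the public network. Names and fields are hypothetical.
def validate_vch_networks(bridge: dict, public: dict) -> bool:
    if bridge["portgroup"] == public["portgroup"]:
        raise ValueError("bridge network must be a separate, isolated port group")
    if bridge.get("vlan") == public.get("vlan"):
        raise ValueError("bridge network should use its own VLAN")
    return True

bridge = {"portgroup": "vch-bridge", "vlan": 200}
public = {"portgroup": "vm-network", "vlan": 100}  # may also be the management network
print(validate_vch_networks(bridge, public))  # True
```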
Some useful links for downloading and installing vSphere Integrated Containers:
This section covers the best practices for setting up each of the main compute resources (disk storage, I/O, CPUs, and memory) when preparing them to run virtualized Hadoop-based workloads. It should be read as an introduction to these best-practice areas.
The sum of all memory configured in the VMs on a server should not exceed the physical memory of the host server. Reserve about 5-6% of total server memory for ESXi; use the remainder for the virtual machines.
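As a worked example of this rule, assuming a 6% ESXi reservation:

```python
# Worked example of the sizing rule above: reserve ~5-6% of host memory
# for ESXi and cap the total configured VM memory at the remainder.
def vm_memory_budget_gb(host_memory_gb: float, esxi_overhead_fraction: float = 0.06) -> float:
    """Memory left for VMs after the ESXi reservation."""
    return host_memory_gb * (1.0 - esxi_overhead_fraction)

budget = vm_memory_budget_gb(512)   # a 512 GB host
print(round(budget, 1))  # 481.3 -> e.g. up to 4 VMs of 120 GB each
```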
The physical CPUs on the vSphere host should not be overcommitted. One viable approach here is that the total number of vCPUs configured across all VMs on a host server is equal to the physical core count on that server. This more conservative approach ensures that no vCPU is waiting for a physical CPU to be available before it can execute. If that type of waiting were to occur, the administrator would see a sustained increase in %Ready time as measured by the vSphere performance tools.
When hyperthreading is enabled at the BIOS level, as is recommended, the total number of vCPUs in all VMs on a host server can be set up to be equal to twice the number of physical cores—that is, equal to the number of logical cores on the server. This “exactly committed” approach is used in demanding situations where the best performance is a requirement. Both the conservative method and the match-to-logical-core method are viable approaches, with the latter being seen as the more aggressive of the two in achieving performance results.
VMs whose vCPU count fits within the number of cores in a CPU socket, and that exclusively use the associated NUMA memory for that socket, have been shown to perform better than larger VMs that span multiple sockets. The recommendation is to limit the vCPUs in any VM to a number that is less than or equal to the number of cores in a CPU socket on the target hardware. This prevents the VM from being spread across multiple CPU sockets and can help it perform more efficiently.
Create 1 or more virtual machines per NUMA node.
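The CPU-sizing guidance above can be condensed into a small calculator covering the three rules: the conservative total (no vCPU ever waits on a physical CPU), the aggressive "exactly committed" total (equal to the logical core count with hyperthreading), and the per-VM NUMA limit. A minimal sketch:

```python
# Sketch of the CPU-sizing rules above: conservative cap = physical core
# count, aggressive cap = logical (hyperthreaded) core count, and a per-VM
# limit of one NUMA node's cores.
def vcpu_caps(sockets: int, cores_per_socket: int, hyperthreading: bool = True):
    physical = sockets * cores_per_socket
    logical = physical * 2 if hyperthreading else physical
    return {"conservative_total": physical,      # no vCPU waits on a pCPU
            "aggressive_total": logical,         # "exactly committed" to logical cores
            "max_vcpus_per_vm": cores_per_socket}  # keep each VM inside one NUMA node

caps = vcpu_caps(sockets=2, cores_per_socket=16)
print(caps)  # {'conservative_total': 32, 'aggressive_total': 64, 'max_vcpus_per_vm': 16}
```

If %Ready time climbs under the aggressive cap, fall back toward the conservative total.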
Limit the number of disks per DataNode to maximize the utilization of each disk – 4 to 6 is a good starting point.
Use eager-zeroed thick VMDKs along with the ext4 filesystem inside the guest.
Use the VMware Paravirtual SCSI (pvscsi) adapter for disk controllers; use all 4 virtual SCSI controllers available in vSphere 6.0.
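A simple round-robin assignment spreads the DataNode disks evenly across the four paravirtual SCSI controllers, as recommended above. A minimal sketch:

```python
# Illustrative round-robin placement of DataNode VMDKs across the four
# virtual SCSI controllers, per the recommendation above.
def assign_disks_to_controllers(num_disks: int, num_controllers: int = 4):
    """Map disk index -> controller index, spreading I/O evenly."""
    return {disk: disk % num_controllers for disk in range(num_disks)}

layout = assign_disks_to_controllers(6)   # 4-6 disks is a good starting point
print(layout)  # {0: 0, 1: 1, 2: 2, 3: 3, 4: 0, 5: 1}
```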
Use dedicated network switches for the Hadoop cluster if possible and ensure that all physical servers are connected to a ToR switch. Use the vmxnet3 network driver; configure virtual switches with MTU=9000 for jumbo frames.
Use a network with a bandwidth of at least 10 Gbps to connect servers running virtualized Hadoop workloads.
When configuring ESXi host networking, consider the traffic and loading requirements of the following consumers; each port should be connected to a separate switch to optimize network usage:
The management network
VM port groups
IP storage (NFS, iSCSI, FCoE)
Configure the guest operating system for Hadoop performance including enabling jumbo IP frames, reducing swappiness, and disabling transparent hugepage compaction.
Place Hadoop master roles, ZooKeeper, and journal nodes on three virtual machines for optimum performance and to enable high availability.
Dedicate the worker nodes to run only the HDFS DataNode, YARN NodeManager, and Spark Executor roles.
Use the Hadoop rack awareness feature to place virtual machines belonging to the same physical host in the same rack for optimized HDFS block placement.
Run the Hive Metastore in a separate MySQL database.
Set the YARN cluster container memory and vcores to slightly overcommit both resources.
Adjust the task memory and vcore requirements to optimize the number of mappers and reducers for each application.
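As a worked example of the overcommit advice, the snippet below sizes the two NodeManager settings from the VM's resources. The 1.1 overcommit factor is an assumption chosen for illustration; the property names are the standard YARN NodeManager resource settings:

```python
# Worked example of the YARN sizing advice above: set cluster container
# memory and vcores slightly above the VM's resources. The 1.1 factor
# is an illustrative assumption, not a VMware/Hadoop-mandated value.
def yarn_container_resources(vm_memory_gb: int, vm_vcpus: int,
                             overcommit: float = 1.1):
    return {"yarn.nodemanager.resource.memory-mb": int(vm_memory_gb * 1024 * overcommit),
            "yarn.nodemanager.resource.cpu-vcores": int(vm_vcpus * overcommit)}

cfg = yarn_container_resources(vm_memory_gb=56, vm_vcpus=8)
print(cfg)
```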
The details above are covered more thoroughly in the articles below.
1. Must run rules (Mandatory)
A mandatory rule limits HA, DRS, and the user in such a way that a virtual machine may not be powered on or moved to an ESXi host that does not belong to the associated DRS host group.
2. Should run rules (Preferential)
A preferential rule expresses a preference to DRS to run virtual machines on the hosts specified in the associated DRS host group.
· Must run on hosts in group:
· The VM Group must run on the hosts in this group. If the selected hosts are down, the VMs will be down and will not be restarted on a different host.
· If you have applications with special license agreements, you might have to use this option.
· Should run on hosts in group:
· The VM Group should run on the hosts in the group. However, in case of a vSphere HA event, this rule will be overwritten in order to keep the VMs running.
· Must Not run on hosts in group:
· The VM Group will not run on the specified host group. Under no circumstances will the VMs be moved to that host group; the VMs will stay down rather than be moved there.
· Should Not run on hosts in group:
· The VM Group should not run on the hosts in the group. However, in case of a vSphere HA event, this rule will be overwritten in order to keep the VMs running.
How does HA treat these rules? vSphere HA obeys mandatory rules when placing virtual machines after a host failover:
· It can only place virtual machines on the ESX hosts that are specified in the DRS host group.
· DRS does not communicate the existence of preferential rules to HA, so HA is not aware of them. HA therefore cannot prevent placing a virtual machine on an ESXi host that is not part of the DRS host group, violating the affinity rule; DRS will correct the violation during its next invocation.
How does DRS treat preferential rules? During a DRS invocation, DRS runs the algorithm with preferential rules treated as mandatory and evaluates the result. If the result violates cluster constraints, such as over-reserving a host or over-utilizing a host to 100% CPU or memory, the preferential rules are dropped and the algorithm is run again.
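The interplay described above can be modeled in a few lines: HA honors only mandatory ("must") rules, while DRS additionally tries preferential ("should") rules and falls back when they cannot be satisfied. This is a purely illustrative sketch, not VMware logic:

```python
# Toy model of the behavior above: HA placement honors mandatory rules
# only; DRS also tries preferential rules and drops them when they
# cannot be satisfied. Purely illustrative.
def place_vm(candidate_hosts, must_hosts=None, should_hosts=None, actor="HA"):
    """Return the hosts a VM may be placed on after a failover (HA) or
    during a DRS invocation."""
    allowed = [h for h in candidate_hosts if must_hosts is None or h in must_hosts]
    if actor == "DRS" and should_hosts:
        preferred = [h for h in allowed if h in should_hosts]
        if preferred:              # honor the preferential rule if possible...
            return preferred
    return allowed                 # ...otherwise drop it (HA ignores it entirely)

hosts = ["esx01", "esx02", "esx03"]
# HA after a failover: only the mandatory rule constrains placement.
print(place_vm(hosts, must_hosts=["esx01", "esx02"]))        # ['esx01', 'esx02']
# DRS honors the preferential rule when no constraint is violated.
print(place_vm(hosts, should_hosts=["esx03"], actor="DRS"))  # ['esx03']
```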
Ultimately it depends on your requirements for service and infrastructure availability, but the most important thing is to have a clear understanding of these rules before implementing them.