
Sometimes an issue seems very difficult, or I would say different, but the solution or workaround for it is much simpler. We faced a similar issue in our production environment with the error mentioned below.


“Deprecated VMFS Volumes found on host. Please consider upgrading volume(s) to the latest version”. 




To work around this issue, restart the management agents on the impacted hosts to clear the warning using the command below.
services.sh restart
Note: Restarting the management agents does not impact the virtual machine power state, and the ESXi host does not require a reboot.
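
If you would like to confirm the actual VMFS version before clearing the warning, or prefer restarting the individual agents rather than all services, a minimal sketch is shown below (the datastore name is a placeholder; adjust it for your environment):

# List datastores with their filesystem type and version (e.g. VMFS-5, VMFS-6)
esxcli storage filesystem list

# Show detailed VMFS metadata for a specific datastore (name is a placeholder)
vmkfstools -Ph /vmfs/volumes/Datastore01

# Alternative to services.sh restart: restart only hostd and vpxa
/etc/init.d/hostd restart
/etc/init.d/vpxa restart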


Hope this was informative.

Keep Learning..... Keep Sharing..... Keep Growing.....




With regards,
Sayed

Network redundancy and load balancing can be achieved with NIC teaming. A NIC team is a group of multiple physical NICs presented as a single logical NIC. It can be configured on both a standard vSwitch and a Distributed vSwitch (DvSwitch), and it requires a minimum of two adapters.


Step by Step NIC teaming configuration:



  1. Log in to the vSphere Web Client
  2. Select the required host and cluster
  3. Select the vSwitch on which you need NIC teaming
  4. Navigate to ESXi Host > Manage > Networking > Virtual Switch
  5. Click Edit
  6. Select Teaming and failover
  7. Choose the appropriate load balancing policy
  8. Choose the network failure detection method
  9. Select Yes or No for the 3rd and 4th options (Notify switches and Failback)
  10. Choose the appropriate failover order as per your requirement

Click OK and save, and you are done. The same teaming policy can also be applied from the command line; see the sketch below.
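
As a minimal command-line sketch (not from the Web Client steps above), assuming a standard vSwitch named vSwitch0 with uplinks vmnic0 and vmnic1 (all placeholder names), esxcli can apply a similar teaming and failover policy:

# Add a second uplink so the team has at least two adapters
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Set load balancing, failure detection and the active uplink (failover) order
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=portid \
    --failure-detection=link \
    --active-uplinks=vmnic0,vmnic1

# Verify the resulting policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0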


Keep Learning..... Keep Sharing..... Keep Growing.....

Hope this was informative.


With regards,
Sayed




vSphere HA is a technique to protect virtual machines against hardware failure of the host. Virtual machines are restarted on another host as soon as an ESXi host becomes unresponsive, isolated, or hits a hardware failure. It is a reactive mechanism that provides virtual machine availability with only a short interruption of the services running on the VMs.



The HA cluster in vSphere 6.5 can also be proactive, with the help of web client plug-ins from OEM vendors (Cisco, Dell and HP). The plug-in monitors the server's network, power, memory, local storage and fans. Failures can be moderate or severe, and these categories are defined by the vendors, not by VMware.

Below are the proactive responses of HA:

• Quarantine mode for all failures
• Quarantine mode for moderate and Maintenance mode for severe failures
• Maintenance mode for all failures



Note: Once the root cause is resolved, the host is taken out of quarantine mode automatically, but that is not the case with maintenance mode; exiting maintenance mode is a manual task.

The vendor's plug-in recognizes the partial failure and makes an API call to vCenter; based on that, vCenter triggers one of the configured responses.


Hope this was informative for you :)



With regards,
Sayed


Enhanced Linked Mode with External PSC and No High Availability

Embedded mode is a good choice for a small infrastructure, but for environments with a bigger infrastructure we have to go with Enhanced Linked Mode. There are a few deployment scenarios we need to know.

With Enhanced Linked Mode we can connect multiple vCenter Servers together using one or more PSCs. This makes it possible to view and search across all the linked vCenter Servers, and it also replicates roles, permissions, licenses, policies and tags.

The external PSC must be installed before the vCenter Server or Appliance is deployed. While installing, we need to be clear whether we are creating a new vCenter Single Sign-On domain or joining an existing one. If a PSC with vCenter SSO is already installed, we can choose to join the existing vCenter Single Sign-On domain; after joining, infrastructure data is replicated between the existing PSC and the new PSC.

This results in fewer resources consumed by combining the services, and more vCenter instances are allowed with centralized management. However, a connectivity loss between the PSC and vCenter will cause a major outage of the hosted services. Also, if a PSC that has no HA suffers an outage, every vCenter Server connected to that PSC will face the outage.
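
As a hedged illustration (not part of the original post), once a second external PSC has joined the SSO domain, you can check replication topology from the appliance shell with the vdcrepadmin tool; the hostname and credentials below are placeholders:

# List all PSC nodes known to the SSO domain
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showservers \
    -h psc01.example.local -u administrator -w '<sso-admin-password>'

# Show the replication partners of a given PSC
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartners \
    -h psc01.example.local -u administrator -w '<sso-admin-password>'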


Keep Learning..... Keep Sharing..... Keep Growing.....


With regards
Sayed


My experience at VMware vForum Mumbai 2018


I had not received any confirmation of my registration for vForum Mumbai. In fact, I got the mail below from the vForum team, which is basically as good as having your seat cancelled.



And you know what, this was just a day before vForum. But after learning about on-site registration for the event from a VMware vExpert friend, I managed to reach the venue on the second day.




The sessions started with a bag of goodies, which I unfortunately missed. :(


The highlight of the event was the parallel sessions for different sets of VMware technologies, covering topics ranging from the basics to deep dives.

There were two Tech Sessions (Tech Session 1 and Tech Session 2), one Educational Services session and one Hands-on Labs (HOL) session, all running in parallel.

I used my time for the "Educational Services sessions"; it was indeed a great session for a deep dive, starting with clearing up basic concepts and then a walkthrough, more like a day-long workshop. It covered topics such as NSX, NSX-T, vRA, vRO, compute virtualization, OpenStack integration and VMware Cloud on AWS.

For attending the "Educational Services sessions" we will receive a participation certificate from VMware (as informed by the VMware staff).





I would like to say it was a great day to learn and explore new features and technologies related to VMware.

My advice to VMware:


• Direct / separate registration for vExperts, like we have for VMworld.
• More stalls for technical interaction, discussion, quizzes, etc.
• Paper presentations by vExperts.


Thank You VMware and Best of Luck :)



vForum Mumbai 2018 - YouTube




If you find this article useful please do.....


Like!!!        Share!!!     &       Comment!!!


With regards,
Sayed







This post covers a quick overview of vSphere Integrated Containers (VIC) and a simple deployment plan. For more detail about VIC, please refer to the link below:

VIC is a feature in vSphere that enables VMware administrators to create "container hosts" that are integrated with vSphere. VIC uses existing vSphere constructs, allowing you to build on your existing vSphere investment.

Requirements that you must verify before proceeding with the deployment:

  • Make sure your vCenter Server appliances and ESXi hosts have their DNS and NTP configured! You do not want any skewed clocks between vCenter and the VIC appliance.
  • vCenter Server 6.0 or 6.5, managing a cluster of ESXi 6.0 or 6.5 hosts, with VMware vSphere Distributed Resource Scheduler™ (DRS) enabled.
  • vSphere Integrated Containers Engine requires a vSphere Enterprise Plus license.
  • All of the ESXi hosts in a cluster require an appropriate license. Deployment fails if your environment includes one or more ESXi hosts that have inadequate licenses.
  • Have access to shared storage to allow VCHs to use more than one host in the cluster.
  • Deploy appliance to a vCenter Server that meets the following (minimum) system requirements.
  • ESXi 6.0 or 6.5 hosts
  • 2 vCPU / 8 GB of memory
  • 80 GB of free space in the target datastore
  • DHCP server
  • Firewall: Allow outbound TCP traffic to port 2377 on the endpoint VM, for use by the interactive container shell. Allow inbound HTTPS/TCP traffic on port 443, for uploading to and downloading from datastores.

Deployment plan:
  • Deploy VIC appliance VM
  • Integrate VIC with vCenter server
  • Deploy the VCH vApp (you will need a bridge network and a public network; think of the bridge network as a separate, isolated network on its own VLAN from which the containers take their IP addresses, and if you only want to use the local registry for images you may use the management network as the public network). A hedged command-line sketch follows this list.
  • Use the Harbor registry to push/pull the images
  • Deploy Containers VM’s in the VCH
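
A minimal sketch of deploying a VCH with the vic-machine CLI, assuming a cluster named Cluster01, a datastore named datastore1 and port groups named vic-bridge and VM-Network (all placeholders); exact flags may vary between VIC releases:

# Deploy a Virtual Container Host (VCH) against vCenter
./vic-machine-linux create \
    --target vcenter.example.local \
    --user administrator@vsphere.local \
    --compute-resource Cluster01 \
    --image-store datastore1 \
    --bridge-network vic-bridge \
    --public-network VM-Network \
    --name vch01 \
    --thumbprint <vcenter-tls-thumbprint> \
    --no-tls

# With TLS disabled (lab use only), point a Docker client at the VCH endpoint
docker -H <vch-endpoint-ip>:2375 info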

Some useful links for downloading and installing vSphere Integrated Containers:


Note: Deploying VIC 1.2.1 to a standalone ESXi host is no longer supported; you must have vCenter Server installed.






This post summarizes the best practices for setting up each of the main compute resources (disk storage, I/O, CPUs, and memory) when preparing them to run virtualized Hadoop-based workloads. It should be read as an introduction to these best-practice areas.

  • The sum of all the memory size configured in the VMs on a server should not exceed the size of physical memory on the host server. Reserve about 5-6% of total server memory for ESXi; use the remainder for the virtual machines.
  • The physical CPUs on the vSphere host should not be overcommitted. One viable approach here is that the total number of vCPUs configured across all VMs on a host server is equal to the physical core count on that server. This more conservative approach ensures that no vCPU is waiting for a physical CPU to be available before it can execute. If that type of waiting were to occur, the administrator would see a sustained increase in %Ready time as measured by the vSphere performance tools. 
  • When hyperthreading is enabled at the BIOS level, as is recommended, the total number of vCPUs in all VMs on a host server can be set up to be equal to twice the number of physical cores—that is, equal to the number of logical cores on the server. This “exactly committed” approach is used in demanding situations where the best performance is a requirement. Both the conservative method and the match-to-logical-core method are viable approaches, with the latter being seen as the more aggressive of the two in achieving performance results.
  • VMs whose vCPU count fits within the number of cores in a CPU socket, and that exclusively use the associated NUMA memory for that socket, have been shown to perform better than larger VMs that span multiple sockets. The recommendation is to limit the vCPUs in any VM to a number that is less than or equal to the number of cores in a CPU socket on the target hardware. This prevents the VM from being spread across multiple CPU sockets and can help it perform more efficiently
  • Create 1 or more virtual machines per NUMA node.
  • Limit the number of disks per DataNode to maximize the utilization of each disk – 4 to 6 is a good starting point.
  • Use eager-zeroed thick Disk VMDKs along with the ext4 filesystem inside the guest.
  • Use the VMware Paravirtual SCSI (pvscsi) adapter for disk controllers; use all 4 virtual SCSI controllers available in vSphere 6.0.
  • Use dedicated network switches for the Hadoop cluster if possible and ensure that all physical servers are connected to a ToR switch. Use the vmxnet3 network driver; configure virtual switches with MTU=9000 for jumbo frames.
  • Use a network that has the bandwidth of at least 10 GB per second to connect servers running virtualized Hadoop workloads.
  • When configuring ESXi host networking, consider the traffic and loading requirements of the following consumers, each port should be connected to a separate switch for optimizing network usage.
  1. The management network
  2. VM port groups
  3. IP storage (NFS, iSCSI, FCoE)
  4. vSphere vMotion
  5. Fault tolerance
  • Configure the guest operating system for Hadoop performance, including enabling jumbo IP frames, reducing swappiness, and disabling transparent hugepage compaction (see the sketch after the reference link below).
  • Place Hadoop master roles, ZooKeeper, and journal nodes on three virtual machines for optimum performance and to enable high availability.
  • Dedicate the worker nodes to run only the HDFS DataNode, YARN NodeManager, and Spark Executor roles.
  • Use the Hadoop rack awareness feature to place virtual machines belonging to the same physical host in the same rack for optimized HDFS block placement.
  • Run the Hive Metastore in a separate MySQL database.
  • Set the Yarn cluster container memory and vcores to slightly overcommit both resources.
  • Adjust the task memory and vcore requirement to optimize the number of maps and reduces for each application. 
The points above are covered in more detail in the paper below.


https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/bigdata-perf-vsphere6.pdf
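
As a minimal, hedged sketch of the guest OS tuning mentioned above (the interface name eth0 and the exact values are assumptions; check your distribution's documentation):

# Enable jumbo frames on the guest NIC (the vSwitch and physical switches must also use MTU 9000)
ip link set dev eth0 mtu 9000

# Reduce swappiness so the kernel avoids swapping Hadoop JVM heaps
sysctl -w vm.swappiness=1

# Disable transparent hugepage compaction (defrag)
echo never > /sys/kernel/mm/transparent_hugepage/defrag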




Yes, agreed, sometimes very basic things are the most confusing.
We all run so fast that sometimes we forget the basics, or, when we are new to the technology, we don't take the time to learn it properly.




VMware vMotion
vMotion is the live migration of a VM from one host to another without service interruption (no downtime with vMotion; at most a single ping response may be lost).






VMware vSphere HA
HA restarts a VM on another available host in the HA cluster, for example when a host stops working or has a hardware or network issue (there is downtime until the VM is restarted on an available host).



Hope this was informative and has cleared up the doubts.


If you like it do share it….. Be Social…..



Keep Learning….. Keep Sharing….. Keep Growing…..



With regards,
Sayed




VMware DRS Rules


1.    Must run rules (Mandatory)
A mandatory rule constrains HA, DRS and the user in such a way that a virtual machine may not be powered on or moved to an ESX host that does not belong to the associated DRS host group.

2.    Should run rules (Preferential)
A preferential rule expresses a preference for DRS to run virtual machines on the hosts specified in the associated DRS host group.

• Must run on hosts in group: The VM group must run on the hosts in this group. If the selected hosts are down, the VMs will stay down and will not be restarted on a different host. If you have applications with special license agreements, you might have to use this option.

• Should run on hosts in group: The VM group should run on the hosts in this group. However, in case of a vSphere HA event, this rule will be overridden in order to keep the VMs running.

• Must not run on hosts in group: The VM group will not run on the specified host group. Under no circumstances will the VMs be moved to this host group; the VMs will rather stay down than be moved there.

• Should not run on hosts in group: The VM group should not run on the hosts in this group. However, in case of a vSphere HA event, this rule will be overridden in order to keep the VMs running.



How does HA treat these rules?
vSphere HA obeys mandatory rules when placing virtual machines after a host failure:
• It can only place virtual machines on the ESX hosts that are specified in the DRS host group.
• DRS does not communicate the existence of preferential rules to HA, so HA is not aware of these rules. HA cannot avoid placing a virtual machine on an ESX host that is not part of the DRS host group, thereby violating the affinity rule; DRS will correct this violation during its next invocation.

How does DRS treat preferential rules?
During a DRS invocation, DRS runs the algorithm with the preferential rules treated as mandatory and evaluates the result. If the result contains violations of cluster constraints, such as over-reserving a host or over-utilizing a host to 100% CPU or memory utilization, the preferential rules are dropped and the algorithm is run again.

Finally, it depends on your requirements for service and infrastructure availability, but the most important thing is to have a clear understanding of these rules before implementing them.

Hope this was informative…..

Keep Learning….. Keep Sharing….. Keep Growing…..


With regards,
Sayed

