Saturday, December 13, 2014

VMware: Key Differences

1. What is the difference between the vSphere ESX and ESXi architectures?

VMware ESX and ESXi are both bare metal hypervisor architectures that install directly on the server hardware.
Although neither architecture relies on an OS for resource management, the vSphere ESX architecture relied on a Linux operating system, called the Console OS (COS) or service console, to perform two management functions: executing scripts and installing third-party agents for hardware monitoring, backup, or systems management.
In the vSphere ESXi architecture, the service console has been removed. The smaller code base of vSphere ESXi represents a smaller “attack surface” and less code to patch, improving reliability and security.


2. What is the difference between clone and template in VMware?

Clone
  • A clone is a copy of a virtual machine.
  • You cannot convert a cloned virtual machine back.
  • A clone of a virtual machine can be created while the virtual machine is powered on.
  • Cloning can be done in two ways: a full clone or a linked clone.
  • A full clone is an independent copy of a virtual machine that shares nothing with the parent virtual machine after the cloning operation. Ongoing operation of a full clone is entirely separate from the parent virtual machine.
  • A linked clone is a copy of a virtual machine that shares virtual disks with the parent virtual machine in an ongoing manner. This conserves disk space, and allows multiple virtual machines to use the same software installation.
  • Cloning a virtual machine can save time if you are deploying many similar virtual machines. You can create, configure, and install software on a single virtual machine, and then clone it multiple times, rather than creating and configuring each virtual machine individually.

Template
  • A template is a master copy or a baseline image of a virtual machine that can be used to create many clones.
  • Templates cannot be powered on or edited, and are more difficult to alter than an ordinary virtual machine.
  • You can convert a template back to a virtual machine to update the base image with the latest released patches and updates, or to install or upgrade software, and then convert it back to a template for future deployments of virtual machines with the latest patches.
  • Convert to Template cannot be performed while the virtual machine is powered on; only Clone to Template can be performed on a powered-on virtual machine.
  • A template offers a more secure way of preserving a virtual machine configuration that you want to deploy many times.
  • When you clone a virtual machine or deploy a virtual machine from a template, the resulting cloned virtual machine is independent of the original virtual machine or template.
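At the command line, the disk-level portion of a full clone can be sketched with vmkfstools. This is only an illustration: the datastore paths and VM names below are hypothetical, and a complete VM clone through vCenter additionally copies the .vmx configuration and registers the new virtual machine.

```shell
# Hypothetical paths - adjust to your own datastore and VM names.
SRC=/vmfs/volumes/datastore1/web01/web01.vmdk
DST=/vmfs/volumes/datastore1/web02/web02.vmdk

# Copy a virtual disk; -d picks the target format
# (thin here, so the copy only consumes space for written blocks).
vmkfstools -i "$SRC" -d thin "$DST"
```

Cloning through the vSphere Client handles the rest (VM configuration, MAC address generation, and registration) for you.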


3. What is the difference between Thick provision Lazy Zeroed, Thick provision Eager Zeroed and Thin provision?

Thick Provision Lazy Zeroed
  • Creates a virtual disk in a default thick format.
  • Space required for the virtual disk is allocated when the virtual disk is created.
  • Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine.
  • Using the default flat virtual disk format does not zero out or eliminate the possibility of recovering deleted files or restoring old data that might be present on this allocated space.
  • You cannot convert a flat disk to a thin disk.

Thick Provision Eager Zeroed
  • A type of thick virtual disk that supports clustering features such as Fault Tolerance.
  • Space required for the virtual disk is allocated at creation time.
  • In contrast to the flat format, the data remaining on the physical device is zeroed out when the virtual disk is created.
  • It might take much longer to create disks in this format than to create other types of disks.

Thin Provision

  • It provides on-demand allocation of blocks of data.
  • Not all of the space allocated when the virtual disk is created is consumed on the datastore; only the space holding written data is used, and the file grows as the amount of data on the disk increases.
  • With thin provisioning, storage capacity utilization efficiency can be automatically driven up towards 100% with very little administrative overhead.
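The three formats above map directly to the vmkfstools disk-type options. A minimal sketch, assuming an ESXi shell and a hypothetical datastore path:

```shell
# Hypothetical VM directory - adjust to your environment.
VMDIR=/vmfs/volumes/datastore1/testvm

# Thick provision lazy zeroed (the default "zeroedthick" format):
vmkfstools -c 10G -d zeroedthick "$VMDIR/lazy.vmdk"

# Thick provision eager zeroed (required for Fault Tolerance):
vmkfstools -c 10G -d eagerzeroedthick "$VMDIR/eager.vmdk"

# Thin provision (blocks allocated and zeroed on first write):
vmkfstools -c 10G -d thin "$VMDIR/thin.vmdk"

# A thin disk can later be inflated to eager-zeroed thick:
vmkfstools --inflatedisk "$VMDIR/thin.vmdk"
```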

4. Comparison and Difference between VMFS 3 and VMFS 5

New Unified 1MB File Block Size
Earlier versions of VMFS used 1, 2, 4 or 8MB file blocks. These larger blocks were needed to create large files (>256GB). These different file block sizes are no longer needed to create large files on VMFS-5. Very large files can now be created on VMFS-5 using the new unified 1MB file blocks. Earlier versions of VMFS will still have to use larger file blocks to create large files.

Large Single Extent Volumes
In earlier versions of VMFS, the largest single extent was 2TB - 512 bytes. An extent is a partition on which one can place a VMFS. To create a 64TB VMFS-5, one needed to create 32 x 2TB extents/partitions and join them together. With VMFS-5, this limit for a single extent/partition has been increased to 64TB.

Smaller Sub-Blocks
VMFS-5 introduces smaller sub-blocks. Sub-blocks are now 8KB rather than 64KB as used in the earlier versions. With VMFS-5, small files (< 8KB, but > 1KB) in size will consume only 8KB rather than 64KB. This will reduce the amount of disk space stranded by small files. Also, there are many more sub-blocks in VMFS-5 than there were in VMFS-3 (32,000 on VMFS-5 compared to approximately 4,000 on VMFS-3).

Small File Support
VMFS-5 introduces support for very small files. For files less than or equal to 1KB, VMFS-5 uses the file descriptor location in the metadata for storage rather than file blocks. When these files grow beyond 1KB, they will then start to use the new 8KB sub-blocks.

Increased File Count
VMFS-5 introduces support for greater than 120,000 files, a four-fold increase when compared to the number of files supported on VMFS-3, which was approximately 30,000.

GPT
VMFS-5 now uses a GPT partition table rather than the MBR table used by earlier versions of VMFS, extending the maximum partition size to 64TB, which was limited to 2TB in earlier versions of VMFS.


Limitation of upgrading filesystem from VMFS-3 to VMFS-5
While a VMFS-3 volume that is upgraded to VMFS-5 provides most of the capabilities of a newly created VMFS-5 volume, there are some differences.

No Uniform Block Size
VMFS-5 upgraded from VMFS-3 continues to use the previous file-block size, which may be larger than the unified 1MB file-block size.

No New Sub-Block Size
VMFS-5 upgraded from VMFS-3 continues to use 64KB sub-blocks and not the new 8KB sub-blocks. This can also lead to stranded/unused disk space. The upgraded VMFS-5 also continues to use the original number of sub-blocks from the VMFS-3.

No Increase to the Maximum Number of Files per Datastore
VMFS-5 upgraded from VMFS-3 continues to have a file limit of 30,720 rather than the new file limit of > 100,000 for a newly created VMFS-5.

Uses MBR
VMFS-5 upgraded from VMFS-3 continues to use MBR (Master Boot Record) partition type; when the VMFS-5 volume has grown beyond 2TB, it automatically and seamlessly switches from MBR to GPT (GUID Partition Table) with no impact on the running virtual machines.

Starts on Sector 128
VMFS-5 upgraded from VMFS-3 continues to have its partition starting on sector 128. Newly created VMFS-5 partitions will have their partition starting at sector 2048.
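From the ESXi 5 shell you can check a datastore's current VMFS version and block size and perform the in-place upgrade; a sketch, assuming a datastore named datastore1 (the upgrade can also be done from the vSphere Client, and the exact esxcli syntax may vary by release):

```shell
# Show the filesystem version and block size of the datastore
# (an upgraded volume will still report its old VMFS-3 block size).
vmkfstools -Ph /vmfs/volumes/datastore1

# Upgrade the volume in place from VMFS-3 to VMFS-5
# (non-disruptive, but one-way - there is no downgrade).
esxcli storage vmfs upgrade -l datastore1
```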

5. Difference between VMware ESX and ESXi

What is VMware ESX ?

ESX (Elastic Sky X) is VMware's enterprise server virtualization platform. In ESX, the VMkernel is the virtualization kernel, managed by a console operating system, also called the service console. The service console is Linux-based, and its main purpose is to provide a management interface for the host; many management agents and third-party software agents are installed on it to provide functionality such as hardware management and monitoring of the ESX hypervisor.

What is VMware ESXi ?

ESXi (Elastic Sky X Integrated) is also VMware's enterprise server virtualization platform. In ESXi, the service console is removed; all VMware agents and third-party agents, such as management and monitoring agents, run directly on the VMkernel. ESXi is an ultra-thin architecture that is highly reliable, and its small code base makes it more secure, with less code to patch. ESXi uses the Direct Console User Interface (DCUI) instead of a service console for managing the ESXi server, and ESXi installs much more quickly than ESX.

Difference between ESX and ESXi

ESX 4.1 is the last available version of the ESX server; from vSphere 5 onward, only ESXi is available. This comparison is based on the VMware article.


Capability | ESX | ESXi
--- | --- | ---
Service Console | Present | Removed
Troubleshooting performed via | Service Console | ESXi Shell
Active Directory authentication | Enabled | Enabled
Secure syslog | Not supported | Supported
Management network | Service Console interface | VMkernel interface
Jumbo frames | Supported | Supported
Hardware monitoring | 3rd-party agents installed in the Service Console | Via CIM providers
Boot from SAN | Supported | Supported
Software patches and updates | Needed, similar to a Linux operating system | Few patches because of the small footprint; more secure
vSphere web access | Only experimental | Full management capability via the vSphere Web Client
Lockdown Mode | Not present | Present; Lockdown Mode prevents remote users from logging in to the host
Scripted installation | Supported | Supported
vMA support | Yes | Yes
Major administration command line | esxcfg- | esxcli
Rapid deployment via Auto Deploy | Not supported | Supported
Custom image creation | Not supported | Supported
VMkernel network used for | vMotion, Fault Tolerance, storage connectivity | Management network, vMotion, Fault Tolerance, storage connectivity, iSCSI port binding
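The esxcfg-/esxcli row above looks like this in practice; a few equivalent commands, assuming an ESXi 5.x shell:

```shell
# List physical NICs:
esxcfg-nics -l                        # old ESX style
esxcli network nic list               # ESXi 5.x style

# List standard vSwitches:
esxcfg-vswitch -l
esxcli network vswitch standard list

# List VMkernel interfaces:
esxcfg-vmknic -l
esxcli network ip interface list
```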

6. Difference between ESX 3.5 and ESX 4.0

Here is the post with the differences between ESX 3.5 and ESX 4. I believe this is one of the definite interview questions. I am working toward bringing you the differences between all editions of ESX.


Features | ESX 3.5 | ESX 4
--- | --- | ---
Linked Mode | No Linked Mode support | Linked Mode is introduced in vSphere 4.0
Host Profiles | No Host Profiles | Host Profiles are introduced in vSphere 4.0
Centralized licensing | Requires a dedicated license server | Licenses can be managed within vCenter Server
Performance charts | Yes | Many more enhancements
Events and alarms | Yes | Many more enhancements
Fault Tolerance | Not available | Available from vSphere 4.0
Storage VMotion | Available only with the CLI | Available in the GUI
VMotion | Yes | Yes
Virtual CPUs per host | 192 | 512
Virtual machines per host | 170 | 320
Logical processors per host | 32 | 64
RAM per host | 256 GB | 1 TB
Maximum Service Console memory | 800 MB | 800 MB
DRS | Yes | Yes
VMware Data Recovery | Backup using VCB (VMware Consolidated Backup) | VMware Data Recovery and VCB support
Enhanced VMotion Compatibility (EVC) | No EVC | EVC is introduced in vSphere 4.0
VMware HA admission control | Yes, but without options to reserve failover capacity | Admission control is improved to provide more flexible configuration options to reserve failover capacity
High-availability clustering with Windows Server 2000/2003/2008 | Not available | Available in vSphere 4.0
Serial Attached SCSI (SAS) | Not available | Available to support MSCS on Windows 2008
Hosts per storage volume | 32 | 64
Fibre Channel paths to LUN | 32 | 16
NFS datastores | 32 | 64
Hardware iSCSI initiators per host | 2 | 4
Virtual machine hot-add support | No | Yes
Virtual CPUs per virtual machine | 4 | 8
Virtual hardware version | 4 | 7
RAM per virtual machine | 64 GB | 255 GB
Virtual machine swap file size | 65532 MB | 255 GB
VMDirectPath for virtual machines | No | Yes
VMkernel | 32-bit | 64-bit
Service Console | 32-bit | 64-bit
Concurrent remote console sessions | 10 | 40
Virtual disk thin provisioning | No thin provisioning | Thin provisioning introduced in vSphere 4.0
VMware Paravirtualized SCSI (PVSCSI) | Not available | High-performance storage adapters that offer greater throughput and lower CPU utilization for virtual machines
Hot extend for virtual disks | Only via vCLI using vmkfstools | Available via the GUI
Hot-plug support for virtual devices | No | Yes
VMXNET Generation 3 | Not available | Yes
vNetwork Distributed Switch | Not available | Available from vSphere with an Enterprise Plus license
Private VLAN support | Not available | Available with the dvSwitch
Network VMotion | Not available | Available with the dvSwitch
3rd-party distributed switch support | Not available | Cisco Nexus 1000V can be used with the dvSwitch
IPv6 support | Yes | Yes
NICs per VM | 4 | 10
Standard vSwitches per host | 127 | 248
Virtual NICs per standard vSwitch | 1016 | 4088
8-way SMP | No, only 4-way | Yes
Update Manager | Yes | Yes
DPM | Experimental | Fully supported with IPMI and iLO remote power-on
License types | VMware Infrastructure Foundation, Standard, Enterprise | vSphere Essentials, Essentials Plus, Standard, Advanced, Enterprise, Enterprise Plus

 

7. Difference between vSphere 4.1 and vSphere 5


I have received a lot of email requests to post the differences between vSphere 4.1 and vSphere 5.0. Here is the post for those requests; I believe these could be definite interview questions, and this post helps you quickly review the feature differences between these two vSphere releases.

Features | vSphere 4.1 | vSphere 5.0
--- | --- | ---
Hypervisor | ESX & ESXi | ESXi only
vMA | Yes, vMA 4.1 | Yes, vMA 5
HA agent | AAM (Automatic Availability Manager) | FDM (Fault Domain Manager)
HA host approach | Primary & secondary | Master & slave
HA failure detection | Management network | Management network and storage communication
HA log file | /etc/opt/vmware/AAM | /etc/opt/vmware/FDM
HA dependent on DNS | Yes | No
Host UEFI boot support | No | Yes; boot systems from hard drives, CD/DVD drives, or USB media
Storage DRS | Not available | Yes
VM affinity & anti-affinity | Available | Available
VMDK affinity & anti-affinity | Not available | Available
Profile-driven storage | Not available | Available
VMFS version | VMFS-3 | VMFS-5
vSphere Storage Appliance | Not available | Available
iSCSI port binding | Can only be done via the CLI, using ESXCLI | Configure dependent hardware iSCSI and software iSCSI adapters, along with the network configuration and port binding, in a single dialog box using the vSphere Client
Storage I/O Control | Fibre Channel | Fibre Channel & NFS
Storage vMotion snapshot support | A VM with snapshots cannot be migrated using Storage vMotion | A VM with snapshots can be migrated using Storage vMotion
Swap to SSD | No | Yes
Network I/O Control | Yes | Yes, with enhancements
ESXi firewall | Not available | Yes
vCenter Linux support | Not available | vCenter virtual appliance
vSphere full client | Yes | Yes
vSphere Web Client | Yes | Yes, with a lot of improvements
VM hardware version | 7 | 8
Virtual CPUs per VM | 8 vCPUs | 32 vCPUs
Virtual machine RAM | 255 GB | 1 TB of vRAM
VM swap file size | 255 GB | 1 TB
Support for client-connected USB devices | Not available | Yes
Non-hardware-accelerated 3D graphics support | Not available | Yes
UEFI virtual BIOS | Not available | Yes
VMware Tools version | 4.1 | 5
Multicore vCPUs | Not available | Yes, configured in the VM settings
Mac OS guest support | Not available | Apple Mac OS X Server 10.6
Smart card reader support for VMs | Not available | Yes
Auto Deploy | Not available | Yes
Image Builder | Not available | Yes
VMs per host | 320 | 512
Max logical CPUs per host | 160 | 160
RAM per host | 1 TB | 2 TB
Max RAM for Service Console | 800 MB | Not applicable (no Service Console)
LUNs per server | 256 | 256
Metro vMotion | Round-trip latencies of up to 5 milliseconds | Round-trip latencies of up to 10 milliseconds; this provides better performance over long-latency networks
Storage vMotion | Moves VM files using dirty block tracking | Moves VM files using I/O mirroring, with better enhancements
Virtual Distributed Switch | Yes | Yes, with more enhancements, such as a deeper view into virtual machine traffic through NetFlow and enhanced monitoring and troubleshooting through SPAN and LLDP
USB 3.0 support | No | Yes
Hosts per vCenter | 1000 | 1000
Powered-on virtual machines per vCenter Server | 10000 | 10000
VMkernel | 64-bit | 64-bit
Service Console | 64-bit | Not applicable (no Service Console)
Licensing | vSphere Essentials, Essentials Plus, Standard, Advanced, Enterprise, Enterprise Plus | vSphere Essentials, Essentials Plus, Standard, Enterprise, Enterprise Plus

8. Difference Between VMFS 3 and VMFS 5

This post explains the major differences between VMFS 3 and VMFS 5. VMFS 5 is available as part of vSphere 5 and introduces a lot of performance enhancements. A newly installed ESXi 5 host is formatted with VMFS 5, but if you upgrade ESX 4.0 or 4.1 to ESXi 5, the datastore version remains VMFS 3. You can upgrade VMFS 3 to VMFS 5 via the vSphere Client once the ESXi upgrade is complete. This post covers the major differences between VMFS 3 and VMFS 5.


Capability | VMFS 3 | VMFS 5
--- | --- | ---
Maximum single extent size | 2 TB less 512 bytes | 64 TB
Partition style | MBR (Master Boot Record) | GPT (GUID Partition Table)
Available block sizes | 1 MB / 2 MB / 4 MB / 8 MB | 1 MB only
Maximum size of RDM in virtual compatibility | 2 TB less 512 bytes | 2 TB less 512 bytes
Maximum size of RDM in physical compatibility | 2 TB less 512 bytes | 64 TB
Supported host versions | ESX/ESXi 3.x, 4.x & 5.x | Only ESXi 5 is supported
Spanned volume size | 64 TB (32 extents with a maximum extent size of 2 TB) | 64 TB (32 extents with any size combination)
Upgrade path | VMFS 3 to VMFS 5 | Latest version; no upgrade available yet
File limit | 30,000 | 100,000
Sub-block size | 64 KB | 8 KB
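To see which VMFS version a given datastore is running, a quick check from the ESXi shell (the datastore name is hypothetical):

```shell
# Prints the VMFS version (e.g. VMFS-3.xx or VMFS-5.xx),
# block size, capacity and free space of the volume:
vmkfstools -Ph /vmfs/volumes/datastore1

# Alternatively, list all mounted filesystems with their types:
esxcli storage filesystem list
```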

9. Difference between Upgraded VMFS 5 and Newly Created VMFS 5

This post explains the major differences between a VMFS 5 datastore upgraded from VMFS 3 and a newly created VMFS 5 datastore. A newly installed ESXi 5 host is formatted with VMFS 5, but if you upgrade ESX 4.0 or 4.1 to ESXi 5, the datastore version remains VMFS 3; you can upgrade VMFS 3 to VMFS 5 via the vSphere Client once the ESXi upgrade is complete. Even though the upgraded datastore reports version VMFS 5, there are many technical differences between an upgraded VMFS 5 and a newly created VMFS 5. This post covers the major ones.
 

Capabilities | Upgraded VMFS 5 | Newly Created VMFS 5
--- | --- | ---
Max datastore size | 64 TB | 64 TB
Maximum size of RDM in physical compatibility | 64 TB | 64 TB
Block size | Continues to use the previous block size, which may be 1 MB/2 MB/4 MB/8 MB | 1 MB block size only
Sub-block size | 64 KB sub-blocks | 8 KB sub-blocks
File limit | 30,000 | 100,000
Partition style | MBR (Master Boot Record) | GPT (GUID Partition Table)
Partition sector | Partition starts on sector 128 | Partition starts on sector 2,048
Maximum size of RDM in virtual compatibility | 2 TB less 512 bytes | 2 TB less 512 bytes
Max size of file | 2 TB less 512 bytes | 2 TB less 512 bytes
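Because an upgraded volume keeps its old block size and MBR partition style, those two attributes are the easiest way to tell an upgraded VMFS 5 from a fresh one. A sketch, with a hypothetical datastore name and device identifier:

```shell
# Block size: a freshly created VMFS-5 reports 1 MB;
# an upgraded one may still report 2, 4 or 8 MB.
vmkfstools -Ph /vmfs/volumes/datastore1

# Partition table and starting sector: "gpt" starting at 2048
# indicates a fresh VMFS-5; "msdos" (MBR) indicates an upgrade.
# The naa.* device name below is hypothetical.
partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c3a0b
```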

10. Difference between vpxa, vpxd and hostd.

hostd is an application that runs in the Service Console and is responsible for managing most of the operations on the ESX machine. It knows about all the VMs that are registered on that host, the LUNs/VMFS volumes visible to the host, what the VMs are doing, and so on. Almost all commands or operations come down from vCenter through it: powering on a VM, VM vMotion, VM creation, etc.
vpxa also runs in the Service Console and talks to vCenter. I believe it acts as an intermediary between vCenter and hostd. I think it also does some housekeeping on the ESX host, but not as much as hostd.

VMware hostd and vpxa on ESXi 5.x

 

HOSTD
The vmware-hostd management service is the main communication channel between ESX/ESXi hosts and the VMkernel. If vmware-hostd fails, the ESX/ESXi host disconnects from vCenter Server/VirtualCenter and cannot be managed, even if you try to connect to the host directly. It knows about all the VMs registered on that host, the LUNs/VMFS volumes visible to the host, what the VMs are doing, and so on. Almost all commands or operations come down from vCenter through it: powering on a VM, VM vMotion, VM creation, etc.
Restart the management agent with: /etc/init.d/hostd restart
VPXA
It acts as an intermediary between vCenter and hostd. The vCenter Server Agent, also referred to as vpxa or the vmware-vpxa service, is what allows vCenter Server to connect to an ESX host. Specifically, vpxa is the communication conduit to hostd, which in turn communicates with the ESX kernel. Restart the vpxa service with:
/etc/init.d/vpxa restart
Note: If you have SSH enabled on your ESXi server, these services can also be restarted from an SSH session; restarting them does not impact the SSH session itself.
VPXD is the vCenter Server service. If this service is stopped, you will not be able to connect to vCenter Server via the vSphere Client.
VPXA is the vCenter Server agent, sometimes described as a "mini vCenter server", installed on each ESX host that is managed by vCenter Server. Whatever management actions we perform through vCenter Server (such as increasing/decreasing RAM and disk, making changes to a cluster, or performing vMotion), this agent receives the information from vCenter Server and passes it to the kernel of the ESX host.
HOSTD is the agent of the ESX server: vpxa passes the information to hostd, and hostd passes it on to the ESX server.
In ESX, you have only hostd and (if you use vCenter) vpxa.
These are daemons (services) for remote management:
  • hostd is used for remote management via the VI Client (VIC)
  • vpxa is used by vCenter (the vpxd part of vCenter) for remote management
hostd is the daemon for direct VIC connections (when you use the Virtual Infrastructure Client to connect to your ESX host).
Also,
  • vpxa is the vCenter agent (ESX side)
  • vpxd is the vCenter daemon (vCenter side)
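On an ESXi 5.x host, the two host-side agents above can be restarted from the shell (or via DCUI > Troubleshooting Options > Restart Management Agents); a sketch:

```shell
# Restart hostd (the host management daemon):
/etc/init.d/hostd restart

# Restart vpxa (the vCenter agent on the host):
/etc/init.d/vpxa restart

# Or restart all management agents at once:
services.sh restart
```

Note that vpxd runs on the vCenter Server itself, not on the host, so it is restarted there (as the vCenter Server service), not from the ESXi shell.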


 


11. Difference between Standard Switch and Distributed Switch

In this post, I have explained the major differences between the standard and distributed switch.



Features | Standard Switch | Distributed Switch
--- | --- | ---
Management | Needs to be managed at each individual host level | Provides centralized management and monitoring of the network configuration of all the ESXi hosts that are associated with the dvSwitch
Licensing | Available in all licensing editions | Only available with the Enterprise Plus licensing edition
Creation & configuration | Created and configured at the ESX/ESXi host level | Created and configured at the vCenter Server level
Layer 2 switch | Yes, can forward Layer 2 frames | Yes, can forward Layer 2 frames
VLAN segmentation | Yes | Yes
802.1Q tagging | Can use and understand 802.1Q VLAN tagging | Can use and understand 802.1Q VLAN tagging
NIC teaming | Yes, can utilize multiple uplinks to form a NIC team | Yes, can utilize multiple uplinks to form a NIC team
Outbound traffic shaping | Can be achieved with the standard switch | Can be achieved with the distributed switch
Inbound traffic shaping | Not available on standard switches | Only possible with the distributed switch
VM port blocking | Not available on standard switches | Only possible with the distributed switch
Private VLAN | Not available | PVLANs can be created as part of the dvSwitch; three types of PVLAN (promiscuous, community, and isolated)
Load-based teaming | Not available | Can be achieved with the distributed switch
Network vMotion | Not available | Can be achieved with the distributed switch
Per-port policy settings | Policies can be applied at the switch and port-group level | Policies can be applied at the switch, port-group, and even per-port level
NetFlow | Not available | Yes
Port mirroring | Not available | Yes
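From an ESXi 5.x shell, both switch types can be inspected with esxcli; a sketch:

```shell
# Standard vSwitches are defined per host:
esxcli network vswitch standard list

# Distributed switches the host participates in
# (the switch itself is defined at the vCenter level):
esxcli network vswitch dvs vmware list
```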
