[Network diagram: firewalls at the top; Hosts 1-4 each connect through two 10GbE switches (iSCSI MPIO and MGMT) to the two Synology units joined by an HA link, and through two 1GbE switches for all VM traffic.]
Below is a play-by-play of how I installed the oVirt 4.2 Hosted Engine. I hope this write-up saves someone some time, as it took a while to get it all working as advertised.
My Environment
As shown above, I have the following hardware:
- 4 Host Servers
- 128GB RAM
- 650GB SAS RAID 10 Local Storage
- 4x 10GbE NICs
- 8x 1GbE NICs
- Dual Xeon 1U IBM
- 2x 10GbE Smart Switches
- Redundant clones with VLANs for iSCSI MPIO and MGMT
- 2x 1GbE Smart Switches
- Redundant clones for all VM traffic
- 2 Synology RS3617xs+ in HA mode
- 4x 10GbE
- iSCSI with MPIO
- NFS4 shares
Step 1 – Install OS on first bare metal server
- Boot to installation CD (CentOS7)
- I used simple guided install options to create partitions
- Choose a single NIC to use as your management NIC. This can be changed at the end.
- Make sure to assign your FQDN hostname
- Once the OS is installed, reboot and log in via ssh (ssh root@<ipaddress>) to the IP address chosen during install. (I recommend setting up ssh login using keys.)
- Once logged in to the host via ssh, do the following:
yum -y update
shutdown -r 0
# after the reboot, ssh back in:
ssh root@<ipaddress>
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
yum install ovirt-hosted-engine-setup
yum install ovirt-engine-appliance
yum install screen
shutdown -r 0
- After the server reboots:
ssh root@<ipaddress>
nmtui
- Set up only the minimum number of NICs (an nmcli sketch of this step follows below):
- You will need your 1 management NIC
- You will also possibly need iSCSI NICs
- In my environment, I need 2 iSCSI NICs on VLAN 2 to talk to my SAN
- Once the NICs are configured, exit to the command line
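- Instead of nmtui, the same minimal management NIC setup can also be done with nmcli. This is only a sketch: the interface name (eno1), connection name (mgmt), and addresses are placeholders for your own values.
# create a static-IP connection for the management NIC (placeholder values)
nmcli con add type ethernet ifname eno1 con-name mgmt \
    ipv4.method manual ipv4.addresses 192.168.1.11/24 \
    ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
# bring it up
nmcli con up mgmt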
- I had to enable MTU 9000 (jumbo frames) on both my NICs and on my VLANs, and I also had to run these commands for each NIC:
ip link set dev <ethername> mtu 9000
service network restart
nmcli device status   # should show you your NICs
- You may need to "tweak the configs" by editing your profiles in /etc/sysconfig/network-scripts/ifcfg-*
- Remember, any changes will require a "service network restart"
- My 2 iSCSI NICs look like this:
TYPE=Ethernet
BOOTPROTO=none
NAME=<ethername>
UUID=<uuidhere>
DEVICE=<ethername-same-as-above>
ONBOOT=yes
MTU=9000
- My 2 iSCSI VLANs look like this:
DEVICE=<ethername-same-as-primary-nic>.2
VLAN=yes
ONBOOT=yes
IPADDR=<ip address of NIC>
NETMASK=255.255.255.0
BOOTPROTO=none
MTU=9000
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
- Test MTU 9000 (jumbo frames):
ping -M do -s 8972 <ip address>
- ** DO NOT CONTINUE UNTIL YOU HAVE CONFIRMED THAT JUMBO FRAMES ARE WORKING AS ABOVE (IF REQUIRED) **
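- With more than one storage target to check, a small loop saves retyping. A sketch, where the two IPs are placeholders for your SAN portal addresses:
# test jumbo frames against each iSCSI portal (placeholder IPs)
for ip in 10.0.2.10 10.0.2.11; do
    ping -M do -s 8972 -c 3 "$ip" && echo "jumbo frames OK to $ip"
done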
vi /etc/hosts
- Confirm that BOTH the host and the engine FQDNs and IP addresses are in this file and are accurate! It doesn't matter if DNS is all correct; the hosts file is used to resolve the names locally, and DNS alone is not good enough.
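- As a sketch, the two entries look something like this (the IPs and FQDNs are placeholders, not my real values):
192.168.1.11   host1.example.lan    host1
192.168.1.50   engine.example.lan   engine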
hostname -I
- Confirm all required IP addresses (for the install only) are listed. You can add your bonds etc. at the end. I have 12 NICs on each host and only set up 1 MGMT and 2 iSCSI, just enough to get started. Any time I tried to add bonds prior to install, I had tons of failed-install issues.
- Now we are ready
- !! Make sure your NICs are working and that you can ping everything that is required: SAN, NFS, etc.
- Type the following at the command line on the host:
screen hosted-engine --deploy
- Answer all of the questions carefully. I don't cover my answers here, but as long as all of the above was done correctly, the answers to the deploy questions were quite simple.
- Once you see the confirmation config, confirm the following:
- Bridge interface = the management NIC <ethername>
- Engine FQDN = the name that you added to your hosts file for the management URL for the GUI
- Bridge name = ovirtmgmt
- Host address = the FQDN of the host1 server as added to the hosts file
- Confirm your storage is properly defined: iSCSI LUNs, IPs, NFS, etc.
- If all looks good, proceed with install
- Remote into the host using ssh and confirm a few things (ssh root@<ipaddress>):
- iSCSI – For my Synology units, I had to edit the multipath configuration:
vi /etc/multipath.conf
- Add the following to the bottom of the devices section:
device {
    vendor "SYNOLOGY"
    product ".*"
    path_grouping_policy multibus
    failback immediate
    path_selector "round-robin 0"
    rr_min_io 100
}
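- For context, device entries live inside the devices { } section of /etc/multipath.conf, so the relevant part of the file ends up looking roughly like this (a sketch; the rest of the file is left as generated):
devices {
    # ...any existing device entries stay as they are...
    device {
        vendor "SYNOLOGY"
        product ".*"
        path_grouping_policy multibus
        failback immediate
        path_selector "round-robin 0"
        rr_min_io 100
    }
}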
vi /etc/ovirt-hosted-engine/hosted-engine.conf
- Remove any unwanted MPIO paths and ports
service multipathd restart
multipath -ll
iscsiadm -m session --op show
- These should show your current paths as active for each target.
- Using the oVirt Web GUI
- The first thing you have to do is create a new data storage domain to store your VMs.
- I then had to create some network profiles by going to Network – Networks and adding the following:
- iSCSI-1 (used for MPIO)
- VLAN 2
- Uncheck VM Network
- MTU 9000
- Cluster – Uncheck Required
- iSCSI-2 (used for MPIO)
- VLAN 2
- Uncheck VM Network
- MTU 9000
- Cluster – Uncheck Required
- VMNIC1 (used for VM traffic)
- Check VM Network
- Cluster – Uncheck Required
- VMNIC2 (used for VM traffic)
- Check VM Network
- Cluster – Uncheck Required
- Then go to Compute – Hosts, click on your host, then click the Network Interfaces tab and the "Setup Host Networks" button.
- Now you should create all of your network bonds by dragging and dropping your NICs onto one another. You can also edit any NIC settings that are required.
- I did the following:
- Added a second NIC to the ovirtmgmt interface as an Active-Backup bond
- Bonded 2 sets of NICs for use by VMs
- Dragged/dropped VMNIC1 and VMNIC2 onto their respective bonds
- Dragged/dropped the iSCSI-1 and iSCSI-2 profiles onto the 2 NICs being used for iSCSI traffic
- Made sure to assign IP addresses to the iSCSI-1 and iSCSI-2 profiles
- Click OK
- Now we need to create an iSCSI MPIO bond
- go to Compute – Data Centers
- click your Default Data Center
- click the iSCSI Multi Pathing tab
- Click Add
- Name: iscsibond0
- Check both:
- iSCSI-1
- iSCSI-2
- Check all paths to all iSCSI targets to use on these paths
- OK
- Now it's time to create the ISO, EXPORT, and MIGRATE domains
- I created an NFS share on my Synology at /volume1/OVIRT/
- I then created:
- /volume1/OVIRT/ISO
- /volume1/OVIRT/EXPORT
- /volume1/OVIRT/MIGRATE
- Make sure the NFS shares on the NFS server have the following permissions set (a quick sanity check from a host follows this list):
- sudo chown -R 36:36 /volume1/OVIRT/
- sudo chmod 755 /volume1/OVIRT/ -R
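- Before adding the domains in the GUI, it is worth sanity-checking the exports from one of the hosts. A quick sketch, assuming <ipaddress> is the address of the NFS (Synology) server:
showmount -e <ipaddress>                           # the three OVIRT exports should be listed
mount -t nfs <ipaddress>:/volume1/OVIRT/ISO /mnt   # test-mount one of them
ls -ln /mnt                                        # contents should be owned by 36:36 (vdsm:kvm)
umount /mnt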
- Then, in the oVirt GUI, go to Storage – Domains
- New Domain
- Name: ISO
- Function: ISO
- Type: NFS
- Path: <ipaddress>:/volume1/OVIRT/ISO
- OK
- New Domain
- Name: EXPORT
- Function: EXPORT
- Type: NFS
- Path: <ipaddress>:/volume1/OVIRT/EXPORT
- OK
- New Domain
- Name: MIGRATE
- Function: DATA
- Type: NFS
- Path: <ipaddress>:/volume1/OVIRT/MIGRATE
- OK
This is as far as I have gotten as of 01/10/2018. I will continue to add to this as my experience progresses.