VirtWithKVM1


KVM installation and configuration on dom04

Short description of the software and hardware

The KVM host will be named dom04.wn. The operating system will be Scientific Linux 6.1 x86_64. There are 9 ports:

  • iLO3 port (IP 192.168.9.224) -> use this address in your browser to access the iLO3 web pages of the machine;
  • eth0 -> eth3: bnx2 NICs;
  • eth4 -> eth7: igb NICs.

Since the rule is to always take the first available NIC as the public one, and since we will use the bnx2 NICs, eth0 will be the public NIC and eth1 the private one:

eth0 --> public, no IP address;
eth1 --> private, IP 192.168.10.37

The machine is an HP ProLiant DL360 G7 with 2 CPUs, each CPU having 6 hyperthreaded cores (12 cores from the OS point of view) and 72 GB of memory. There are four RAID-1 systems of 500 GB each.

OS installation

  • Installation was done through the iLO Integrated Remote Console
  • Installation media: SL61 DVD ISOs, mounted on the machine via a Virtual DVD Drive (in the Remote Console, menu "Virtual Drives" > sub-menu "CD/DVD" > select option "Virtual Image" > select the *.iso file)
  • Disk space layout: one volume group using the first RAID-1 system (sda); we will use sdb, sdc and sdd as physical disks for virtual machines (see the next section for more details)

Disk space layout in details

During the SL installation process, the wizard will ask you to add the RAID systems on which you want to install the OS. If you add all 4 RAIDs, the installation process will create a volume group that spreads across the 4 RAID systems, and that's not what we want! The volume group for the OS should be contained in the first RAID (physical device /dev/sda), and the other three RAIDs should exist as physical devices (/dev/sdb, /dev/sdc, and /dev/sdd) outside the volume group. This way, the last three RAID systems can be used as physical disks for the virtual machines.

Now, regarding the default partitioning layout: if you let the installation process do the job for you, it will create three logical volumes: swap (virtual memory), home (mounted on /home) and root (mounted on /). The first device (sda) will be split into two physical partitions: sda1, mounted on /boot, is outside the volume group, and sda2 is added to the volume group.
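You can verify the resulting layout from the command line; the exact names depend on the volume group the installer created:

pvs       # should list only /dev/sda2 as a physical volume
lvs       # should show the root and swap logical volumes (plus home, if kept)
fdisk -l  # sdb, sdc and sdd should appear as raw devices, untouched by the installer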

After the installation process is finished, you may want to correct the disk space layout. You can easily do this with the GUI called system-config-lvm:

yum install system-config-lvm
system-config-lvm

(Since it is a graphical tool, don't forget to open your SSH session on dom04 with the option -XC.)
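For example, from your workstation (assuming you log in as root):

ssh -XC root@dom04.wn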

The following screenshot shows what you should get:

Image(system-config-lvm1.png,50%)

In the picture above, you can see that there is no /home: it has been deleted because it is not needed, and the resulting unallocated space has been added to the root logical volume. If you do so, don't forget to remove the line corresponding to /home in /etc/fstab; otherwise, your machine won't be able to reboot (during the reboot, it will try to fsck /home, which will fail, and you'll have to restart the machine in rescue mode to correct /etc/fstab)!
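For reference, the same operation can also be done from the command line. This is only a sketch: the volume group and logical volume names (vg_dom04, lv_home, lv_root) are assumptions that you should adapt to your setup:

umount /home
lvremove /dev/vg_dom04/lv_home                   # destroys the home logical volume
lvextend -l +100%FREE /dev/vg_dom04/lv_root     # give all freed space to root
resize2fs /dev/vg_dom04/lv_root                  # grow the filesystem (ext4 supports online resize)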

Network configuration

Here are the NICs as they are listed in iLO:

iLO3 iLO Dedicated Network Port 9c:8e:99:2e:c6:a6
iSCSI Port 1 9c:8e:99:2d:e1:e5
iSCSI Port 2 9c:8e:99:2d:e1:e7
iSCSI Port 3 9c:8e:99:2d:e1:dd
iSCSI Port 4 9c:8e:99:2d:e1:df
NIC Port 1 9c:8e:99:2d:e1:e4
NIC Port 2 9c:8e:99:2d:e1:e6
NIC Port 3 9c:8e:99:2d:e1:dc
NIC Port 4 9c:8e:99:2d:e1:de

In fact, in the table above, only the four bnx2 ports are visible (each port has two MAC addresses). There is also a PCI expansion card providing four igb ports that are not visible in the iLO web pages. However, if you look at the output of dmesg, you should see blocks of lines like these:

igb 0000:06:00.1: irq 70 for MSI/MSI-X
igb 0000:06:00.1: Intel(R) Gigabit Ethernet Network Connection
igb 0000:06:00.1: eth5: (PCIe:5.0Gb/s:Width x4) f4:ce:46:a6:e5:59
igb 0000:06:00.1: eth5: PBA No: E84069-007
igb 0000:06:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)

Notice the different MAC address range used for the igb interfaces:

eth4 f4:ce:46:a6:e5:58
eth5 f4:ce:46:a6:e5:59
eth6 f4:ce:46:a6:e5:5a
eth7 f4:ce:46:a6:e5:5b
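These addresses can also be read directly from the OS, for example:

for i in eth4 eth5 eth6 eth7; do echo -n "$i "; cat /sys/class/net/$i/address; done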


Two bridges must be created: one for the public network and another for the private network.

Here is the content of the needed network scripts in /etc/sysconfig/network-scripts/:

# cat ifcfg-eth0
----------------
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="9C:8E:99:2D:E1:E4"
NM_CONTROLLED="no"
ONBOOT="yes"
BRIDGE="br0"
----------------

# cat ifcfg-br0
----------------
DEVICE="br0"
TYPE="Bridge"
NM_CONTROLLED="no"
BOOTPROTO="static"
ONBOOT="yes"
----------------

# cat ifcfg-eth1
----------------
DEVICE="eth1"
HWADDR="9C:8E:99:2D:E1:E6"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="static"
BRIDGE="br1"
----------------

# cat ifcfg-br1
----------------
DEVICE="br1"
TYPE="Bridge"
BOOTPROTO="static"
ONBOOT="yes"
IPADDR="192.168.10.37"
NETMASK="255.255.0.0"
----------------

After editing these files, restart the network:

service network restart

Now, let us check that everything is OK:

# brctl show
----------------
bridge name	bridge id		STP enabled	interfaces
br0		8000.9c8e992de1e4	no		eth0
br1		8000.9c8e992de1e6	no		eth1
----------------

# ifconfig
----------------
br0       Link encap:Ethernet  HWaddr 9C:8E:99:2D:E1:E4  
          inet6 addr: fe80::9e8e:99ff:fe2d:e1e4/64 Scope:Link
          ...

br1       Link encap:Ethernet  HWaddr 9C:8E:99:2D:E1:E6  
          inet addr:192.168.10.37  Bcast:192.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::9e8e:99ff:fe2d:e1e6/64 Scope:Link
          ...

eth0      Link encap:Ethernet  HWaddr 9C:8E:99:2D:E1:E4  
          inet6 addr: fe80::9e8e:99ff:fe2d:e1e4/64 Scope:Link
          ...

eth1      Link encap:Ethernet  HWaddr 9C:8E:99:2D:E1:E6  
          inet6 addr: fe80::9e8e:99ff:fe2d:e1e6/64 Scope:Link
          ...
----------------

Now, iptables must be configured to allow forwarding across the bridges:

# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# service iptables save
# service iptables restart

SELinux must be disabled, otherwise you'll get strange error messages when trying to start your virtual machines:

/usr/sbin/setenforce 0
vim /etc/selinux/config
--> you must set the following: SELINUX=disabled

(Note that setenforce 0 only switches the running system to permissive mode; the setting in /etc/selinux/config takes effect at the next reboot.)
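You can check the current SELinux mode at any time with:

getenforce
sestatus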

We still have to add a default route through ccq:

route add default gw 192.168.10.100

and to make this default route permanent, don't forget to add the following line to /etc/sysconfig/network:

GATEWAY=192.168.10.100

As a result, your routing table should look like this:

[root@dom04 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0
link-local      *               255.255.0.0     U     1015   0        0 br0
link-local      *               255.255.0.0     U     1016   0        0 br1
192.168.0.0     *               255.255.0.0     U     0      0        0 br1
default         ccq.wn.iihe.ac. 0.0.0.0         UG    0      0        0 br1

Then configure DNS resolution by adding the following to /etc/resolv.conf:

search wn.iihe.ac.be
nameserver 192.168.10.100
nameserver 193.190.247.140
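A quick sanity check that name resolution works (the host command is provided by the bind-utils package):

host ccq.wn.iihe.ac.be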

We can now make the machine ready for SSH. Simply run :

ssh-keygen -t rsa

for key pair generation. It is also a good idea to add the public keys of ccq, dom02, and dom03 to .ssh/authorized_keys.
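For example, to append a key copied from ccq (the filename here is purely illustrative):

cat /tmp/ccq_id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys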

Don't forget to also add dom04.wn.iihe.ac.be to the private DNS on ccq.

RPM installation

We simply ran:

yum groupinstall "Virtualization*"

This will install four software groups:

  • Virtualization
  • Virtualization Client
  • Virtualization Platform
  • Virtualization Tools
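If you want to see which packages a group pulls in before (or after) installing it, you can run, for example:

yum groupinfo "Virtualization Tools"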

The following is also necessary for X forwarding with SSH:

yum install xorg-x11-xauth

After that, you must log off and then log back in with the option -X (logging off and back in is necessary to create the .Xauthority file).

Configuration of daemons

Start libvirtd:

service libvirtd start
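To make sure libvirtd also starts at boot (standard SysV service management on SL6):

chkconfig libvirtd on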

Doing a "brctl show" or and an "ifconfig", you should see new interface virbr0.

Manage the storage pools

Launch virt-manager, right-click on the virtual host (localhost), select the "Details" option in the menu, and then select the "Storage" tab in the new window. Now, we will add the three RAID systems (sdb, sdc, and sdd) to the storage pools. I will show in detail the steps for the device sdb:

Image(system-config-lvm2.png,25%)

Click on the "+" button at the lower left corner of the previous window, this will launch a wizard. In the first window of the wizard, give a name to the new pool (for example "sdb"). On the next windows, fill in the fields like this :

Image(system-config-lvm3.png,25%)
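If you prefer the command line, a pool can also be created with virsh. The following is only a sketch for a disk-type pool on sdb; the pool type and options must match what you selected in the wizard, and note that pool-build writes a new partition table to the disk:

virsh pool-define-as sdb disk --source-dev /dev/sdb --target /dev
virsh pool-build sdb
virsh pool-start sdb
virsh pool-autostart sdb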

Example of a VM installation from CD/DVD ISO

It should be the same as here

Example of a VM installation from PXE

It should be the same as here
