Set up KVM – make CentOS a Virtualization Host

This article explains how to set up your CentOS Linux host so you can use it as a virtualization host with KVM.

If you have a home server that is powerful enough to host a couple of virtual machines, you don’t have to go all ESXi to turn it into a VM server. Instead you can use your existing CentOS installation with libvirt and KVM to host virtual machines.

To make sure you can run virtualization, you first need to verify that your CPU supports it. Use the following command to find out: we can use egrep to go through /proc/cpuinfo and look for the appearance of either vmx (Intel) or svm (AMD).

> egrep '(vmx|svm)' /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts
...
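
If you just want a quick yes/no answer instead of scrolling through the flag lists, you can count the matching lines (one line per logical core; a result greater than 0 means you are good):

> egrep -c '(vmx|svm)' /proc/cpuinfo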

In the example output above you can see one core being reported as eligible (the vmx flag is present in the flags line). We’re good to go and can install the package group for virtualization:

> sudo yum group install "Virtualization Host"
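
Once the group is installed, make sure the libvirt daemon is enabled and running (on CentOS 7 the service is called libvirtd):

> sudo systemctl enable libvirtd
> sudo systemctl start libvirtd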


To make sure a normal user can administer virtualization, a policy needs to be created with polkit. Please make sure to replace andreas with your username:

> sudo groupadd virt
> sudo usermod -aG virt andreas
> sudo mkdir -p /etc/polkit-1/localauthority/50-local.d/

Now create the file /etc/polkit-1/localauthority/50-local.d/50-org.example.libvirt-access.pkla with the following contents:

[libvirt Admin Access]
Identity=unix-group:virt
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
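
To verify the policy works, log out and back in (so the new group membership takes effect) and try to talk to the system libvirt instance as the unprivileged user, for example:

> virsh -c qemu:///system list --all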

That’s it already. On a different host you can install the virt-manager GUI to administer your VMs remotely. Just use the following command on a separate CentOS workstation:

> sudo yum install virt-manager

After that, start the Virtual Machine Manager from the desktop.
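
Alternatively, virt-manager can connect over SSH. Assuming your server is reachable as your-server (a placeholder hostname, adjust to your setup) and your user is a member of the virt group there, you can connect directly from the workstation’s command line:

> virt-manager --connect qemu+ssh://andreas@your-server/system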

Log in to CentOS via LDAP user (using NSLCD)

This article explains how to set up your Linux host so you can log in to it using a username and password from an LDAP server, making local users and passwords on the Linux host unnecessary.

If you want to administrate more than one Linux installation, you can either memorize a list of different user/password combinations per Linux host or use an identical user (with the same password) across all systems.

Neither option is much fun. A third option is to install an LDAP server somewhere in the network and to authenticate all the Linux hosts against that LDAP server. That way, users that are configured in the LDAP server can be allowed to log in at any Linux host in the network.

A couple of configuration settings are required at the Linux host to make this happen. These are the steps:

First of all, a few packages must be installed that allow the system to become an LDAP client.

> sudo yum -y install openldap-clients openldap nss-pam-ldapd

CentOS (or Red Hat) Linux offers two daemons that provide access to authentication providers: SSSD and NSLCD. In our case we decide to go with the legacy implementation, which is NSLCD. The authconfig tool will help us configure which kind of data store to use for user credentials.

That being said, for legacy LDAP server support we have to use authconfig and pass the enableforcelegacy option. With that option we make sure SSSD is not used implicitly, not even for a potentially supported configuration. The sssd daemon is stopped and the nslcd daemon is started.

> sudo authconfig --enableforcelegacy --update

After issuing the above command you can check the status of the nslcd.service, as it should have been started already.

> systemctl status nslcd.service 
● nslcd.service - Naming services LDAP client daemon.
   Loaded: loaded (/usr/lib/systemd/system/nslcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-08-19 21:13:39 CEST; 3s ago
  Process: 10498 ExecStart=/usr/sbin/nslcd (code=exited, status=0/SUCCESS)
 Main PID: 10499 (nslcd)
   CGroup: /system.slice/nslcd.service
           └─10499 /usr/sbin/nslcd


Next we need the details of our LDAP server and the structure within it.

We need to know the name (or IP address) at which our LDAP server responds. In our example we do not use LDAPS with TLS, but plain LDAP (on port 389). For the example let’s assume that the LDAP server responds at your.domain.com.

In addition we need to know the base DN, which is the starting point from which any LDAP query will be executed. In our example let’s assume the value dc=domain,dc=com.

With these two pieces of information we can use the authconfig tool to make the configuration settings for us.

> sudo authconfig --enableldap --enableldapauth --ldapserver="your.domain.com" --ldapbasedn="dc=domain,dc=com" --update
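
The relevant lines that end up in /etc/nslcd.conf should look roughly like this (values taken from our example; check the file to be sure):

uri ldap://your.domain.com/
base dc=domain,dc=com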


The command above has altered the configuration in two files: /etc/openldap/ldap.conf and /etc/nslcd.conf. While the configuration in /etc/openldap/ldap.conf is sufficient for what we want to do, we still have to add some more configuration to /etc/nslcd.conf.

Please use your favorite editor, open up /etc/nslcd.conf and insert the following lines:

binddn uid=readeraccount,ou=people,dc=domain,dc=com
bindpw yourpasswordhere

# own filter for finding passwords
filter passwd (uid=*)


With the above changes we accomplish two things:

  1. we tell NSLCD to use a specific (non-anonymous) user to perform all queries at the LDAP server and
  2. we narrow the search filter for finding a user’s password in the LDAP server.

NSLCD (our LDAP client) starts a session by connecting to the LDAP server. If not stated otherwise, this connection will be unauthenticated (anonymous bind). This is typically not allowed in most production environments, hence the client must provide a bind user and password. We configure an authenticated (Simple) BIND by specifying the user (binddn) and password (bindpw).
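
Since /etc/nslcd.conf now contains the bind password in plain text, it is a good idea to make sure the file is readable by root only:

> sudo chmod 600 /etc/nslcd.conf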

When an LDAP client requests information about a resource, it performs one or more resource queries depending on what it is looking up. Search queries sent to the LDAP server are created using the configured search base, filter, and the desired entry (uid=myuser) being searched for. If the LDAP directory is large, this search may take a significant amount of time. It is a good idea to define a more specific search base for the common maps such as passwd.
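
nslcd supports a per-map search base for exactly this purpose. For example, if all user entries live under ou=people (an assumption; adjust to your directory layout), you could add the following line to /etc/nslcd.conf:

base passwd ou=people,dc=domain,dc=com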

After these changes we have to restart the nslcd.service:

> sudo systemctl restart nslcd.service 


Last but not least we should confirm our setup by attempting to get some user information from LDAP via our Linux console:

> getent passwd john
john:x:1000001:1000000:John Doe:/home/john:
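
If getent does not return anything, you can query the LDAP server directly with ldapsearch (part of the openldap-clients package we installed earlier) to rule out connectivity or filter problems. Using the reader account from our example:

> ldapsearch -x -H ldap://your.domain.com -D "uid=readeraccount,ou=people,dc=domain,dc=com" -W -b "dc=domain,dc=com" "(uid=john)"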


Please note that the home directory is not a local home directory but the home directory set at the LDAP server. Although you could already successfully log in to the Linux host as the user “john”, there will be no home directory for john. Connecting the home directory is not covered in this article.

Creating a RAID with MDADM

Sometimes you just want to quickly create a Linux RAID without caring too much about the fine tuning. One such use case might be running performance tests on a number of possible RAID setups without having to go through the motions manually every time. The following script does exactly that.

#!/bin/bash
# Devices that become members of the array
DEVICES="/dev/sdb /dev/sdc /dev/sdd /dev/sde"
NO_OF_DEVICES=4
LEVEL="10"
# Number of data-bearing devices (used for the XFS stripe width)
NO_OF_DATA_DEVICES=2
ARRAY=/dev/md0
CHUNKS="512"
MOUNT=/tmpraid

echo "==============================="
echo "* RAID $LEVEL - Chunk $CHUNKS"
echo "* about to unmount existing file system at $MOUNT"
umount $MOUNT >> /dev/null 2>&1
echo "* about to stop array $ARRAY"
mdadm -S $ARRAY >> /dev/null 2>&1
echo "* about to zero all MDADM devices $DEVICES"
mdadm --zero-superblock $DEVICES >> /dev/null 2>&1
echo "* about to create new array $ARRAY"
yes | mdadm --create $ARRAY --assume-clean --level $LEVEL --chunk $CHUNKS --name myRaid${LEVEL}-${CHUNKS}k --raid-devices $NO_OF_DEVICES $DEVICES >> /dev/null 2>&1
echo "* about to create xfs file system on array $ARRAY"
mkfs.xfs $ARRAY -f -d su=${CHUNKS}k -d sw=$NO_OF_DATA_DEVICES >> /dev/null 2>&1
echo "* about to mount file system at $MOUNT"
mkdir -p $MOUNT
mount $ARRAY $MOUNT
echo "==============================="
mdadm --query --detail $ARRAY

Running the script requires superuser privileges. At the top of the script there are a number of variables that can be used for configuration. Important to note here is that, depending on the RAID level, the number of data devices will vary. In the example above, the RAID 10 configuration has 4 devices. Given how RAID 10 works (half the devices are data devices), there are 2 data devices in that case. In a RAID 5 setup, however, there is one less data device than the total number of devices. When setting up a RAID 5 with 4 devices, the configuration ought to look like the following.

NO_OF_DEVICES=4
LEVEL="5"
NO_OF_DATA_DEVICES=3
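
After a run, you can sanity-check that the array and the file system stripe geometry match what you configured (paths taken from the script’s defaults):

> cat /proc/mdstat
> xfs_info /tmpraid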