Startup Guide for KVM on CentOS 6

I recently made the leap from CentOS 5.6 to CentOS 6 on my primary KVM host, and had to change how I set up the host to begin hosting virtual machines. Below is a start-to-finish guide to get you hosting VMs using KVM. These instructions are very specific to CentOS 6.

For this I assume you have set up your server using the “Minimal” option when installing CentOS 6. You must also have virtualization features (Intel VT-x or AMD-V) enabled for your CPU; this is done in your host’s BIOS.

Optionally, you can skip the first section, Installing KVM, if you checked all four “Virtualization” software groups during the install.

Installing KVM

If you chose the “Minimal” option during the CentOS 6 install, then this step is necessary. To get the full set of tools there are four software groups to install:

  • Virtualization
  • Virtualization Client
  • Virtualization Platform
  • Virtualization Tools

To install them, run:

yum groupinstall "Virtualization*"

The dejavu-lgc-sans-fonts package is also necessary, or all the fonts in virt-manager will show as squares:

yum install dejavu-lgc-sans-fonts

Once the install is finished, verify that the KVM kernel module is loaded:

lsmod | grep kvm

You should see either kvm_intel or kvm_amd depending on your host’s CPU manufacturer.
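If neither module appears, one quick check is to confirm that the CPU actually advertises hardware virtualization; a non-zero count from the command below means VT-x (vmx) or AMD-V (svm) is present and the feature most likely just needs to be enabled in the BIOS.

# a non-zero result means the CPU advertises VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo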

At this point I chose to reboot the server. This allows services to be started and the udev rules for KVM to be applied. It also allows dbus to create the machine-id file; otherwise you may see something like the following when running virt-manager:

# virt-manager
Xlib:  extension "RANDR" missing on display "localhost:10.0".
process 1869: D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/var/lib/dbus/machine-id": No such file or directory
See the manual page for dbus-uuidgen to correct this issue.
D-Bus not built with -rdynamic so unable to print a backtrace
Aborted

If you receive that D-Bus error and would prefer not to reboot, run this command to generate the necessary machine-id file:

dbus-uuidgen > /var/lib/dbus/machine-id

Final configuration steps

The server I run KVM on is headless, but I still like using virt-manager, so we must install the necessary tools to do X11 forwarding through SSH.

yum install xorg-x11-xauth

# If you plan to use VNC to connect to the virtual machine's console locally
yum install tigervnc

Now when you connect through SSH, be sure to pass the -X flag to enable X11 forwarding, for example:
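This is a sketch assuming the host is reachable as kvmhost (a placeholder hostname):

# -X enables X11 forwarding; virt-manager then displays on your local desktop
ssh -X user@kvmhost
virt-manager &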

Optional: Using an alternate location for VM images with SELinux

With SELinux enabled, special steps must be taken to change the default VM store from /var/lib/libvirt/images. On my particular server I chose to keep all images and ISOs for VMs under /vmstore. The steps below give your new store the correct security context for SELinux.

# this package is necessary to run semanage
yum install policycoreutils-python

semanage fcontext -a -t virt_image_t "/vmstore(/.*)?"
restorecon -R /vmstore
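To verify that the new context took effect, something like the following can be used; the exact labels shown will depend on your policy version.

# show the SELinux context on the directory itself
ls -dZ /vmstore

# confirm the rule was recorded by semanage
semanage fcontext -l | grep vmstore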

To activate this store you must open virt-manager, select your host, then go to Edit -> Host Details. Under the Storage tab you can add your new storage pool.
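If you prefer to stay on the command line, the same storage pool can be defined with virsh. This is a sketch assuming a directory-backed pool named vmstore (the name is arbitrary):

# define, build, start, and autostart a directory-backed storage pool
virsh pool-define-as vmstore dir --target /vmstore
virsh pool-build vmstore
virsh pool-start vmstore
virsh pool-autostart vmstore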

Optional: Network Bridging for Virtual Machines

If you wish for your virtual machines to be accessible remotely then you must use network bridging to share your host’s network interface with the virtual machines. The setup requires linking one of your host’s physical interfaces with a bridge device. First copy your physical interface’s ifcfg file to create the new bridge device, named br0.

cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br0

Modify ifcfg-br0 to carry the IP information from ifcfg-eth0, and remove, or comment out, that information in ifcfg-eth0. Below are examples of ifcfg-eth0 and ifcfg-br0; the important lines are BRIDGE=br0 in ifcfg-eth0 and the TYPE=Bridge and IP settings in ifcfg-br0.

/etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
HWADDR=00:18:8B:58:07:3B
ONBOOT=yes
BRIDGE=br0

/etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.1.0.3
NETMASK=255.255.255.0

Once those two files are configured, restart the network service:

service network restart
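Assuming bridge-utils is installed (libvirt normally pulls it in as a dependency), you can confirm that eth0 is now attached to the bridge:

# br0 should be listed with eth0 as an attached interface
brctl show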

Optional: Managing libvirt with standard user account

Beginning with CentOS 6, access to managing libvirt is handled by PolicyKit. It is always good practice to do your daily administration tasks as a user other than root, and with PolicyKit you can grant access to libvirt functions to a standard account.

First we create the config file that defines the access controls. The file must have the .pkla extension; by convention the filename begins with a numeric priority.

vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla

Here’s an example of the file I used to give access to a single user. Be sure to put your desired username in place of username on the Identity line.

[libvirt Management Access]
Identity=unix-user:username
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

  • You can optionally replace Identity=unix-user:username with Identity=unix-group:groupname to allow access to a group of users.

Finally, restart the libvirtd daemon to apply your changes:

/etc/init.d/libvirtd restart
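To confirm the change took effect, connect to the system instance of libvirt as the standard user; if the PolicyKit rule is working, this should list the defined virtual machines without prompting for the root password.

# run this as the unprivileged user, not as root
virsh -c qemu:///system list --all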

Creating the first virtual machine

You are now ready to create your virtual machines.

Create the virtual disk

With the version of virt-manager shipped with CentOS 6 you cannot create qcow2 images from within the GUI. If you wish to create your new VM with a qcow2 format virtual disk you must do so from the command line, or see the next section for RPMs to upgrade virt-manager.

Update: Through some testing I’ve found that performance can be greatly improved if preallocation is set when creating a qcow2 image.

# With preallocation
qemu-img create -f qcow2 -o preallocation=metadata CentOS-6.0-x86_64-Template.qcow2 20G

# Without preallocation
qemu-img create -f qcow2 CentOS-6.0-x86_64-Template.qcow2 20G

  • NOTE: Replace the filename “CentOS-6.0-x86_64-Template” with your desired name, and also replace “20G” with the desired max size of the virtual disk.
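You can inspect the resulting image, including its virtual size and how much space is actually allocated on disk, with qemu-img:

qemu-img info CentOS-6.0-x86_64-Template.qcow2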

Now, when creating your virtual machine in virt-manager, select the option to use an existing virtual disk.
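If you would rather skip the GUI for this step entirely, the virtual machine can also be created from the command line with virt-install. The sketch below assumes the qcow2 disk created above is sitting in /vmstore, the br0 bridge from the earlier section is in place, and a placeholder install ISO path; adjust the names, sizes, and paths for your environment.

# all names, sizes, and paths below are examples
virt-install \
  --name centos6-template \
  --ram 1024 \
  --vcpus 1 \
  --disk path=/vmstore/CentOS-6.0-x86_64-Template.qcow2,format=qcow2 \
  --cdrom /vmstore/CentOS-6.0-x86_64-bin-DVD1.iso \
  --network bridge=br0 \
  --vnc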

Source: itscblog.tamu.edu


Fedora 16 Arrives, Cloud Friendly

Cloud and virtualization technologies now feature in latest version release

The Red Hat sponsored Fedora Project has announced availability of version 16 of the free and open-source operating system distribution. Fedora 16 now features “Aeolus Conductor” as a feature enhancement designed to create and manage cloud instances across a variety of cloud types. OpenStack tools have also been included to help configure and run cloud compute and storage infrastructures; new too is HekaFS, a technology based on GlusterFS to enable cloud-ready distributed parallel filesystems. Lastly in the cloud category, the Pacemaker-cloud application service has now been incorporated to improve availability.

Red Hat reminds us that Fedora Project developers collaborate closely with upstream free software project teams to try and provide the best experience for users; that access and integration of new features can hopefully lead to wider and further innovation. The Fedora Project aims to release a new version of its free operating system approximately every 6 months. According to Red Hat, this rapid development cycle encourages collaboration and the inclusion of the latest, most cutting-edge open-source features available. Fedora is built by community members from across the globe, and the Fedora Project’s transparent and open collaboration process has attracted more than 24,000 registered contributors.

In terms of virtualization technologies, Fedora 16 adds SPICE USB sharing and audio volume messaging for virtualized desktops; the Virtual Machines Lock Manager protects users from starting the same virtual machine twice or adding the same disk to two different virtual machines; and Virt-manager Guest Inspection allows read-only browsing of guest filesystems and the Windows Registry.

“The open source community sets a new bar for technical excellence in the creation of this release,” said Fedora Project leader Jared Smith. “Fedora 16 combines the newest advancements in open source virtualized and cloud computing environments with significant under-the-hood improvements, all while continuing to improve the operating system’s usability. The Fedora Project’s commitment to advancing free and open source software is absolutely reflected in what the community delivered in Fedora 16.”

Fedora 13 Expands Linux Virtualization

Virtualization technology has long found a home in Red Hat’s Fedora community Linux distribution. Ever since Fedora 4 emerged in 2005, virtualization technologies have continued to advance in the distro and that remains the case with the upcoming Fedora 13 release set for later this month.

Unlike Fedora’s early virtualization features, which all leveraged the Xen open source technology, more recent Fedora releases have relied on KVM. New KVM performance and scalability features for virtualization will debut in Fedora 13 that will help to push the envelope for large-scale virtualization deployments.

“If you look at Linux virtualization features, Fedora has always been the vanguard for virtualization,” Fedora Project Leader Paul Frields told InternetNews. “We were putting out KVM before anyone else and we were interested in KVM as it seemed like a much more upstream-friendly feature. Although Xen was definitely a virtualization focus for a few years, Xen had some drawbacks.”

Frields noted that from Fedora’s perspective, Xen had become a drain on resources for developers since it took a lot of work to get Xen to work together with the Linux kernel for a Fedora distribution release. He added that, in his view, the code base for Xen didn’t track exactly with the upstream Linux kernel and as a result, there was a mismatch.

“KVM changed all of that because of the fact that it is part of the upstream Linux kernel,” Frields said. “It has allowed us to focus our resources to devote more time in advancing the usability of virtualization.”

Among the new KVM features that will debut in Fedora 13 are KVM Stable PCI Addresses and Virt Shared Network Interface technologies. Having stable PCI addresses will enable virtual guests to retain PCI address space on a host machine. The shared network interface technology enables virtual machines to use the same physical network interface cards (NICs) as the underlying operating system.

Frields explained that those two new features will make it easier for administrators to automate their work.

“If you’re trying to automate the creation of machines and the way that they share particular bus connections on a host machine, you want to be able to definitely connect it to a particular bus,” Frields said. “When you can predict that, you can take advantage of a greater scale of automation.”

Another new virtualization feature debuting in Fedora 13 is x2apic support, which is about delivering improved performance. The x2apic technology is intended to lower the CPU requirement for Advanced Programmable Interrupt Controller (APIC) access, which is used for program timers.

While Fedora is including the new advanced features for scaling virtualization, Frields doesn’t necessarily expect that Fedora will be the platform used for large-scale deployments.

“Fedora is a way for people to have a bit of a crystal ball where they can look into the future of Red Hat Enterprise Linux,” Frields said.

Red Hat recently released the first beta for Red Hat Enterprise Linux 6 (RHEL 6). As is the case in Fedora, RHEL 6 no longer includes Xen, but instead leverages KVM as the key virtualization technology for Linux. Features that first debuted in Fedora releases are now finding a home in RHEL 6.

“When people look at RHEL 6, they will be seeing the very recent past and present of Fedora,” Frields said. “The RHEL roadmap is always oriented towards long-term stability while Fedora will move on and forge new paths and will help define Red Hat Enterprise Linux 7 at some point in the future.”

Sean Michael Kerner is a senior editor at InternetNews.com, the news service of Internet.com, the network for technology professionals.

Source: earthweb.com