Monthly Archives: July 2009

Some WHO guidance on the H1N1 flu pandemic

Influenza A (H1N1) is spreading widely and is at risk of becoming a global pandemic. In response, the World Health Organization has issued guidance to help people reduce their chances of being infected with this dangerous flu virus.

Everyone should review the guidance and take precautions to avoid catching influenza A (H1N1)!


Contacts Organizer Zimlet: 5 ways to organize your contacts

Power Zimlet #3

If you have hundreds or even thousands of contacts, perhaps spread across multiple address books, and want to organize them, this one is for you. A large number of contacts brings organization, maintenance, syncing and other issues with it. For example, say you want to move all your company’s contacts into one address book so you can share that address book with someone, or file all of them in “(Company) First Name Last Name” format so it’s easy to sort and differentiate them. You will immediately see there is no easy way to do that.

And that’s where this Zimlet comes in. It’s a very powerful and flexible Zimlet that provides 5 different ways (and several combinations) to help organize your contacts. It also works across multiple address books (simply use the Ctrl or Shift key to select multiple folders).

1. Move or Cleanup:
– Move all contacts with the xyz domain in ALL address book folders into the xyz address book.

For example, say you want to move all Gmail contacts to a folder called ‘gmail friends’. Assuming you already have an address book folder named ‘gmail friends’, here is how you would do that:


  1. Select the “Contact’s email contains” menu
  2. Enter “” in the next field
  3. Select all the folders using the Shift or Ctrl key from the “in folder(s):” menu
  4. Select the ‘Move Contacts to:’ radio button
  5. Select the folder ‘gmail friends’
  6. Press Organize

Other use cases:
– Move some contacts in ALL address books to Trash
– Move ALL contacts in some address books to Trash
– Move ALL contacts in ALL address books to Trash

2. Merge:

– Move all contacts in multiple address books (say AB2, AB3 & AB4) to a single address book (AB1)

3. “Sort and Store” aka “file-As”:

– Zimbra sorts contacts by last name by default, but many people want to sort by company. One way to achieve this is by filing contacts as “(Company) Firstname Lastname”, “Company Lastname, FirstName” or “Company”.

– You can use the File-as action to file all your contacts in a specific format for a consistent appearance.

4. Tag:

– Tag all contacts whose email contains a given domain with a tag of your choice (say: zimbra folks)

5. Contacts with phone number (for mobile sync): This is one of the special actions I added to help mobile users move all contacts that have a phone number into one folder, which in turn makes it easier to place phone calls.
e.g. move all contacts with phone numbers to a “has phone number” address book. Now sync it to your mobile phone and you can be sure that every contact in that folder has a phone number.

Contacts Organizer

1. For more details and to download: visit the Gallery
2. Please make sure to take a backup of all your address books before using this (from Preferences > Address Book > Export)

Fedora 9 End Of Life (EOL)

With the release of Fedora 11 now past us, it’s time to remind
folks that, per the release policy, maintenance for the N-2 Fedora
release ends one month after Fedora N comes out.

In this case, since Fedora 11 just came out, that means that the end
of life for Fedora 9 will be 2009-07-10. After this point, no new
updates, including security updates, will be available for Fedora 9.
We strongly urge folks to upgrade to Fedora 11 in order to experience
the best and latest that Fedora has to offer.

So without delay, go get your copy of Leonidas (Fedora 11) today and enjoy the latest version of Fedora!

Zimbra Collaboration Suite 5.0.18 has been released

We are pleased to announce: Version 5.0.18 of the Zimbra Collaboration Suite.

Key Enhancements:
21489 – CalDAV: support for calendar delegation (iCal 4/Snow Leopard/zimbraPrefAppleIcalDelegationEnabled)
33583 – Reduced duration of maintenance mode in backups (Just during MySQL & Lucene, not LDAP & first round of blobs.)
19702 – Trash/junk lifetime based on time message was moved instead of received (zimbraMailPurgeUseChangeDateForTrash)
06112 – ZCO: out of office assistant
29489 – ZCO: run local rules automatically (If Outlook & Zimbra filters act on the same message, the Outlook rule takes priority. Recommend disabling ‘Clear Categories on Mail’ under Tools > Rules & Alerts)

Notable Fixes:
21750 – Resource & location reservation across domains
30217 – Prevent Outlook from sending mail on behalf of another person within the organization
38112 – Creating appointments in iCal does not send the invite
38711 – Calendar view going blank
38288 – PST migration Outlook ’07 SP2 crash
36923 – Tagging of appointments in ZCO causes failure in blackberries
39002/3, 39027, & 37540 – No need to re-apply the July critical security patch.

(Further details on PMweb.)

Save data, save time, save yourself from anguish: Take a backup before upgrading. Backup and Restore Articles – Zimbra :: Wiki

5.0.18 Network Edition Release Notes

5.0.18 Network Edition Downloads

5.0.18 Open Source Edition: Release Notes & Downloads

Subscribe to the blog for the latest news. Hope you enjoy this release!
-The Zimbra Team

Source: Zimbra Forums

List of top open source BPM / workflow solutions

Every organization has its own distinct business processes, which differentiate it from its competitors.

Some companies have predefined processes, while in others the processes are defined by the employees themselves. Imagine what would happen if each customer support representative had their own way of managing a customer. Without a proper process in place, calls from customers can go unanswered or be transferred endlessly.

Lately, with the help of advanced web-based solutions, business processes and workflows can be managed through business process management (BPM) solutions. These solutions can be used to easily create applications that automate processes such as:

  • Change management
  • Quality control
  • Customer service
  • Claims management
  • Complaint management
  • Procurement

There are many BPM / Workflow solutions out there. The following are three open source BPM / Workflow solutions for you to evaluate before trying the proprietary ones.


Intalio is an open source business process platform built around the standards-based Eclipse STP BPMN modeler and Apache ODE BPEL engine, both originally contributed by Intalio.

Intalio Enterprise provides all the components required for the design, deployment, and management of the most complex business processes, including:

  • BRE
  • BAM
  • Portal
  • ESB
  • ECM

Intalio is available in several editions, but what we’re most interested in is Intalio’s free community edition. This edition is made up of two components: Intalio Designer and Intalio Server.

Intalio Designer allows one to model business-level processes, which can then be deployed to Intalio Server. Intalio Designer is currently the only tool on the market that can turn any BPMN model into fully executable BPEL processes without writing any code.

Intalio Server is a high-performance process engine that can support the most complex business processes, deployed within mission-critical environments.

If your organization is planning to automate business processes, Intalio should be on your list for consideration.


ProcessMaker is an open source business process management (BPM) and workflow software designed for small and mid-sized businesses (SMBs).

ProcessMaker is a user-friendly solution to manage workflows effectively and efficiently.

Business users and process experts with no programming experience can design and run workflows, increase transparency, radically reduce paperwork, and automate processes across systems, including human resources, finance, and operations.

With ProcessMaker you can easily create workflow maps, design custom forms, extract data from external data sources and many more key features to optimize workflow management and business operations.

One key advantage of ProcessMaker is its online library, which provides many process templates for you to download and begin editing. The learning curve is also reduced, since you start from a template that is already built and tested. Some sample process templates include:

  • Credit card application
  • Expense report process
  • City district zoning request

Updated 12 April 2009: Read my initial review of ProcessMaker


Unlike Intalio and ProcessMaker, CuteFlow is a web-based open source document circulation and workflow system.

Users are able to define “documents” which are sent step by step to every station/user in a list. Imagine the scenario where a particular document is sent to various parties for review and approval before it’s classified as a final document for submission purposes.

CuteFlow helps to automate the document circulation process within your office’s internal environment.

All operations, like starting a workflow, tracking, workflow definition or status observation, can be done within a comfortable and easy-to-use web interface.

Some key features of CuteFlow include:

  • Free and Open Source!
  • Web-based user interface
  • Integration of workflow documents in e-mail messages
  • Unlimited number of senders, fields, slots, receivers, …
  • Workflows can attach data and files
  • Flexible user management with substitutes

So there you go: some open source BPM / workflow solutions for your organization to begin automating business processes. One caveat about automating a business process is to make sure that the process is already mature. To find out why, try reading Process maturity level as a principle factor for process automation (First Pillar). I agree with many of the points stated there. Just think about it: if a process is rather new and not fully tested, it’s bound to have some loopholes or problems in it. If there are real problems in an immature process, automating it only speeds those problems up. So do beware: while technology helps to speed things up, speeding up problems and loopholes can be disastrous.

Source: WarePrise

Recovering your Linux server with a Knoppix rescue disk

Among the many positive aspects of working with Linux, one is the excellent recovery methods. If your server doesn’t boot properly, you can still access everything on it using a recovery disk. Here you will learn how to do this using Knoppix. This article doesn’t focus on a particular version of Knoppix, and the procedure will work on almost all Linux distributions.

Booting your server using a Knoppix rescue CD is easy. Just put the disk in your server’s optical drive and restart the server; the Knoppix operating system then starts loading automatically. But it doesn’t immediately give you access to the files on your hard drive. You have to mount all file systems on your server yourself, assuming you can still mount them. The procedure described in this article helps you fix boot problems that are not caused by file system errors. If your server’s file systems have errors that prevent them from being mounted, the procedure will still help you get started, but additional steps may be required.

Mounting the Linux file systems

To access the root file system on your server using a Knoppix rescue CD, you’ll have to mount it. The same is true for the other file systems on your server. When using a rescue system, you’ll have to mount the root file system on a temporary directory. Most distributions have a directory /mnt which exists for this purpose, so it’s a good idea to mount your file system there. But there is a potential problem: most utilities assume that your configuration files are in a very specific directory; if your distribution is looking for /boot/grub/menu.lst, for instance, the tools may be incapable of understanding that it is in /mnt/boot/grub/menu.lst instead. Therefore, you need to make sure that everything mounted on /mnt is presented to the operating system as if it were mounted directly in the / directory. The following procedure shows you how to do that.

  1. Boot your computer, using the Knoppix CD. You’ll see the Knoppix welcome screen next. From here, press Enter to start loading Knoppix.
  2. While loading, Knoppix will wait a while to show you all available languages. If you don’t select anything, English is started automatically. Once completely started, you’ll get access to the Knoppix desktop.
  3. To restore access to your server, you’ll need to open a terminal window from Knoppix. By default, after opening a terminal window you’ll get the access permissions of an ordinary user. To be able to repair your server, you need root permissions. You’ll get them using the sudo su command.
  4. Now use the mount command. Its output shows you that currently no file systems from your hard disk are mounted at all; everything you see lives in a RAM drive.
  5. In case you don’t know exactly how storage in your server is organized, you’ll need to check which partitions and disks are used. The fdisk -l command gives a good start for that. This command shows you all disks that are available on your server (even if they are LUNs offered by a SAN), and it will show you which partitions exist on these disks. The disk names typically start with /dev/sd (although other names may be used) and are followed by a letter. The first disk is /dev/sda, the second disk is /dev/sdb and so on. On the disks, you’ll find partitions that are numbered as well. For instance, /dev/sda1 is the first partition on the first disk in your server. Use fdisk -l to show the current disk layout of your server; here is an example of what a typical layout may look like:
    ilulissat:/ # fdisk -l
    Disk /dev/sda: 8589 MB, 8589934592 bytes
    255 heads, 63 sectors/track, 1044 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14          30      136552+  82  Linux swap / Solaris
    /dev/sda3              31         553     4200997+  83  Linux
  6. Now it’s time to find out what exactly you are seeing. If it looks like the example above, it’s not too hard to find out which partition holds the root file system. You can see that there are two partitions using partition type 83 (which means they contain a Linux file system). One of them, however, is only 13 cylinders, and as each cylinder is only about 8 MB, it’s too small to contain a root file system. The second partition uses partition type 82, so it contains a swap file system. Therefore, the only partition that can possibly contain the root file system is /dev/sda3.
  7. Now that you know which partition contains the root file system, it’s time to mount it. As mentioned before, it’s a good idea to do that on the /mnt directory; Knoppix doesn’t use it for anything useful anyway. So in this case, the command to use would be mount /dev/sda3 /mnt.
  8. A quick check should show you at this point that you have correctly mounted the root directory. Before you activate the chroot environment, you’ll need access to some system directories as well. The most important of them are /proc and /dev. These directories are normally created automatically when booting. That means they do exist in your Knoppix root directory, but once you’ve changed /mnt to become your new root directory, you’ll find them empty. As you really need /proc and /dev to fix your problems, mount them before doing anything else. The next two commands mount them:
    mount -o bind /dev /mnt/dev
    mount -t proc proc /mnt/proc
  9. Once you are at this point, your entire operating system is accessible from /mnt. You can verify this now by changing into the directory (use cd /mnt). At this point your prompt looks like root@Knoppix:/mnt#. Now use the command chroot . to make the current directory (.) your new root directory. This brings you to the real root of everything that is installed on your server’s hard drive.
  10. As Linux servers tend to use more than one partition, you may have to mount other partitions as well before you can really fix all problems. If, for instance, the directory /usr is on another partition, you won’t be able to do anything before you have made that accessible as well. The only task to perform at this moment is to find out which file system is mounted where exactly. There is an easy answer to that question: /etc/fstab. In this file you’ll see exactly what is mounted when your server boots normally. So check the contents of /etc/fstab and perform all the mounts defined there manually. Or make it easy on yourself and use mount -a; this command automatically mounts all file systems that haven’t been mounted yet.
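The ten steps above can be condensed into a short command sequence. This is only a sketch, assuming the root file system is on /dev/sda3 as in the example above; adjust the device names to match your own fdisk -l output.

```shell
# Run inside the Knoppix live session, in a terminal window.
sudo su                        # become root

fdisk -l                       # identify disks and partitions; we assume
                               # here that the root file system is /dev/sda3

mount /dev/sda3 /mnt           # mount the root file system on /mnt

mount -o bind /dev /mnt/dev    # make the live system's device files
mount -t proc proc /mnt/proc   # and process information visible under /mnt

cd /mnt
chroot .                       # make the current directory the new root

mount -a                       # inside the chroot: mount everything else
                               # listed in /etc/fstab
```

Note that the commands from `sudo su` through `chroot .` require a running Knoppix session and real hardware, so treat this as a checklist rather than a script to paste blindly.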

Now you’ll have full access to all utilities on your server’s hard drive and, more importantly, to all files. It’s time to analyze what went wrong and restore access. But make sure you have a backup before you change anything at this point!

To fix any problems on your computer, you have to make sure to restore full access to your system. You can do this by mounting all file systems on your computer, and after that by making them accessible by using the chroot command. This way you are ensured that all tools see the server’s file system as it really is, and that will make it a lot easier for you to restore access.

ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Vugt is also a technical consultant for high-availability (HA) clustering and performance optimization, as well as an expert on SLED 10 administration.

Fedora: A Hat with a History

Fedora is a giant among giants, in the shadow of the giant from which it was born. But every giant is born of humble beginnings.

So to understand the giant, you first have to understand where it came from. Let me take you through a short history of Fedora, show you where it all began, and cover some of the interesting, if not curious, steps that it took to become what it is today.

To start with the very deepest roots, we need to look at the kernel that makes Fedora what it is: the Linux kernel, first introduced in 1991 by a then college student named Linus Torvalds.

A short time later, the first Linux distros began appearing, starting with MCC Interim, which gave rise to SLS, which in turn was the grandparent of Slackware, itself the parent of numerous other major distros, including SUSE Linux, and the uber-geek distro of choice for many senior Linux users.

Red Hat, the parent of Fedora, was a little late to the game, as it didn’t get its official start until 1994. But just like Slackware, it quickly became the parent of numerous other distributions, including Caldera, Mandrake (later renamed Mandriva), Red Flag, and even CentOS.

The original beta for Red Hat was released on July 29th, 1994. The official 1.0 release came out in May of 1995, and was, interestingly enough, codenamed Mother’s Day. Whether or not it was actually released on Mother’s Day is a subject of fierce debate, but it is still an undeniably memorable achievement. The biggest reason for this is that Red Hat 1.0 actually beat Microsoft’s Windows 95 to market by almost 4 months, which is quite an achievement, considering initial development of Red Hat started after that of Windows 95.

After that, development was fast and furious, with four major versions, and a number of sub versions all being released within an amazing two and a half years. After that, the pace of development slowed into a more predictable, and methodical release schedule, with four more versions appearing gradually over the next five and a half years.

It was shortly after this period of relative quiet that the Fedora project was born. In March of 2003, Red Hat released version 9 of their Linux distribution, and shortly after began a move to migrate Red Hat development from an internal, closed development (not closed source) system to an open development system. This gave rise to the separation of Red Hat Linux into Fedora Core and Red Hat Enterprise Linux (RHEL).

From mid 2003 to late 2006, Fedora Core continued forward, separate yet still intimately connected to RHEL, with Fedora becoming more the stepchild of RHEL and the preferred distribution for SMBs and home users, while RHEL remained the undisputed choice for large enterprises. Then in late 2006 / early 2007, a decision was made to shorten the name from Fedora Core to just plain old Fedora. Version 7 was released with the new, abbreviated name.

There have been three versions since the new name was adopted (7, 8 and 9), with the latest having been released in May of 2008. Fedora 10 is already in the pipeline, due to be released in 2009, and it will carry on a project that has been over fifteen years in the making.

But despite its longevity, no distro is complete without a good desktop manager to go with it. For the people at Red Hat, and eventually Fedora, the choice was clear: the Gnome desktop would be just what the doctor ordered, and they adopted it as their desktop manager of choice. However, this didn’t occur until 1998, and even then it was only offered as an installation choice.

For anyone using Red Hat at the time, there were few choices in desktop environments, with KDE leading the way, and few reasons to want one, given that most people who used Red Hat didn’t need a graphical desktop. That’s because most users in 1998 were systems admins, and most Red Hat machines were servers.

But that didn’t stop the adoption of Gnome as Red Hat’s default desktop. Fresh off the development presses, and still wet behind the ears, Gnome was first offered in Red Hat 5.1 as a preview release, but not officially added as a standard element of the distro until 1999 with the release of Red Hat 6.0.

Again, not many people were using Red Hat as a desktop OS at the time; however, the number had increased enough to warrant the inclusion of Gnome in the distribution. So why Gnome, and not the (at that time) more advanced KDE, or even one of the other window managers, such as AfterStep or TWM? The answer lies in the polish and feature set of each window manager.

While good, TWM, AfterStep and the others were either too spartan or lacked the polish that came with Gnome and KDE. Sure, Gnome and KDE weren’t perfect by any current standard, but they were a lot further along than most other window managers or desktop environments.

Once the choice was made, Gnome continued to grow, gaining a big boost in developers, support, and even acceptance in the Linux world because of it, growing into the giant it is today. And all along the way, Red Hat, and later Fedora, have stayed with Gnome, never wavering in their dedication to it.

One interesting fact about the relationship between Red Hat and Gnome is that while Red Hat used and preferred Gnome as the primary desktop environment for their distribution, they also included KDE, offered as an alternative for those who wanted Red Hat but didn’t want the Gnome desktop. They still encouraged users to stick with Gnome, but they didn’t want to shut anyone out, and thus included both.

When the Fedora project was born in 2003, Gnome 2.2 was already available and firmly entrenched in Red Hat, which saved the Fedora developers the interesting experience of moving between major versions of a window manager, even though that will come eventually in the not-too-distant future.

Overall, I myself prefer the KDE desktop environment. However, I’m also a firm believer in choosing what you want and tweaking a distribution to meet your needs, so I’m quite intrigued and pleased to see Red Hat, and now Fedora, so openly supporting choices like this in their distribution. It’s one thing to say you support Open Source, and something else to actually show it through your actions.

Because Open Source is about choice and freedom, and the Fedora project (and Red Hat) have embraced both, in big and small ways. That is why I believe that Fedora is a great distribution with a great future, and is most certainly a hat with a history.

Another interesting tidbit of history that spawned from Red Hat is the RPM package system. RPM originally stood for Red Hat Package Manager; although it’s no longer called that, that is the source of its name. RPM is both a package management system under Red Hat (and later Fedora) and a software package format.

Essentially, the RPM file format is somewhat of a cross between an archive file, similar to Zip or Tar, and an installer file: it contains multiple files packaged together as a single file (with the extension .rpm), as well as the information required to install and configure the included files.

The RPM package manager is a command line tool designed to take RPM files, unpack them, and then follow the included instructions to install everything to its proper location. This works in much the same way as other package systems, like deb, ports and others.

But the RPM package manager doesn’t just install software; it can also uninstall, reinstall, verify, query, and update software packages on the system. RPM is also the standard package management system of a wide variety of other distributions, such as SUSE, CentOS, Mandriva, and more, making it a core package management standard in the Linux world. It’s also part of the Linux Standard Base, making it easy for other distributions to include at will.
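To give a feel for that day-to-day usage, here are the common rpm invocations for the operations just mentioned; the package names are placeholders.

```shell
rpm -ivh foo-1.0-1.i386.rpm    # install, verbose, with a hash-mark progress bar
rpm -Uvh foo-1.1-1.i386.rpm    # upgrade (installs the package if not yet present)
rpm -e foo                     # erase (uninstall) the package
rpm -q foo                     # query: is it installed, and which version?
rpm -qa                        # list all installed packages
rpm -V foo                     # verify installed files against the RPM database
```

All of these require an RPM-based system, and all but the queries require root privileges.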

But RPM wasn’t always the default package manager for Red Hat, and later Fedora. Back in its early days, prior to version 2.0, Red Hat used a system called RPP. It was a command line tool providing simple installation, powerful query features, and package verification, all necessary tools for the then-emerging Red Hat.

But despite all these great features, RPP was doomed to fail from the beginning, because it was designed too tightly around Red Hat, leading to issues that forced Red Hat to ship numerous versions of the same packaging system in each release just to deal with them. This relative inflexibility, along with some other issues, eventually killed RPP shortly after version 1.0 of Red Hat.

To solve the problems of RPP, a new package manager called PM was created, taking the best of RPP, PMS (package management system), and several others, and bundling them together. That system didn’t produce good results either.

That’s where RPM came in. With the experience of two failed package management systems behind them, Red Hat developers were able to create RPM, which launched with version 2.0. While not yet perfect, it was a far cry better than RPP, and a welcome change.

But as with any change, good or bad, some standards need to be created to ensure that consistency is maintained throughout the development process, as everything that is done to a package management system affects everyone from those on top, all the way down to the end user, upstream and downstream developers, and distributors.

So to focus the development of RPM in the right directions, the following five goals were adopted:

• Must work on different processor architectures
• Simplified building of packages
• Easy installation and uninstallation
• All work has to start with the original source code
• Easy verification that packages installed correctly

This simple set of five goals focused RPM’s development and quickly solved Red Hat’s package management woes. By 1998, RPM was the official package management system for Red Hat Linux.

RPM proved to be an effective package management system for the next several years. But during that time, RPM lost its way, along with Red Hat. As employees began to slip away, Red Hat was forced to take a less direct role, more of a management role, in RPM’s development, allowing the project to drift off in directions they really didn’t want.

It wasn’t until Fedora became a reality that they decided to wrest back control of RPM and retake command of its core development. But to do this, they had to take several steps back and fork RPM from the then-current 4.4.2 codebase. While not the best move, as the code was several versions behind the latest work, it gave them a starting point as close to their ideal design as possible. It was also the base code used by both Red Hat and Novell at the time.

And forking a project to improve the end product and fix development woes isn’t all that bad a thing. Take, for example, the X project. At version 4.4, Xorg split away from XFree86, the then-primary X window system, and took development in a direction that was not only requested, but needed as well.

Since the XFree86 developers dug in their heels and refused to move in the directions requested by the community, Xorg took over as the primary X window system, and XFree86 was tossed to the curb. As a result of that inflexibility and refusal to listen to the community, it’s now a dead project, and Xorg is screaming forward as the X window system of choice.

However, not all splits have to be detrimental to both parties. Take KDE and Gnome. KDE was the parent project of Gnome, and originally the only true desktop environment in the entire FOSS world. But arguments and debates caused some developers to split off and form Gnome, which in the grander scheme of things has been one of the best things ever to happen to either KDE or the FOSS world as a whole.

And sometimes forks don’t work out. Take Compiz, for example. It at one time split into Beryl and Compiz over opposing ideologies and developer quarrels centered on which features should be included in the eye-candy-focused window manager. While both did fine by themselves, they eventually saw the need to merge, and shortly after doing so began a fairly rapid downhill spiral towards death.

Sure, they’re still alive today, but unless something happens soon to change their course, especially with KDE4’s KWin, and eventually Gnome’s own answer to Compiz coming in the next version, they might soon find themselves a footnote in history.

But so far, only good things have come of this fork of Red Hat’s RPM system. Not only did they regain control of it, but they’ve made some much-needed improvements to RPM that I, and many others like me, saw were needed. During the years when Red Hat didn’t have control over it (or at least direct control), I saw RPM going downhill as a package management system, so much so that it drove me away from having anything to do with Red Hat at all.

And I know I wasn’t the only one. I retreated to other distributions with old tried-and-true systems like source building, ports, Debian packages and others, partially because of that. There were other reasons, but that was one of the primary ones. This is mostly because RPM became difficult to work with, unreliable, unpredictable at times, and downright painful to mess with.

However, if handed a system with RPM in it today, I’d have absolutely no issues working with it. RPM has really improved a lot since Red Hat forked it, and should continue to improve over time.

Another big change over time is the number and types of diagnostic tools. During the early days of Red Hat, performance was everything, and squeezing every ounce of processing power out of the limited number of processor cycles available at the time was key to success.

But as time went along, and resources got more plentiful, you began to see the number of actively running resource monitors slowly dropping off. The pictures in this article will show you the gradual decline of these on the desktop in any form of prominence. Eventually, by the time you get to the Fedora days, process and resource monitoring has been shoved to the fringes where it’s used only by those wishing to still squeeze every last ounce of performance out of their machines.

Another gradual, but intriguing change has been in the area of applications. When Red Hat first started out, a lot of applications were console based, including common productivity tools such as word processors and web browsers.

But not everything was text only. Some things, like audio players such as SteelWav, dominated the desktop. These provided the end user with a reasonable user experience without requiring them to become a geek master of the command line.

Same with Mozilla (the original Mozilla, not the modern version) for web browsing. These were great tools in their time, but even they weren’t impervious to the march of progress, as they were later replaced with XMMS and Netscape respectively.

Visual themes were also fairly spartan at first, relying on the older blocky style layouts and designs for buttons, windows and more. Over time however, they evolved into more and more advanced designs, throwing away the old blocky window designs for smoother, more elegant designs.

And Red Hat did not shy away from including these eye candy improvements in their various versions as they became available. This is likely because a user is most comfortable in an environment that is pleasing to the eye. It reduces fatigue and thus makes the experience more enjoyable, which in turn translates into increased loyalty and productivity.

But the thing is, despite all the amazing things Red Hat has put into their distribution, and later into Fedora, it’s all still about choice. You can stick with the designs given to you by default in Fedora (and RHEL), or you can change them to something more pleasing. That’s the joy of Open Source: it’s about choice.

Say you don’t like something? Then change it! We’ve seen lots of change in Red Hat and Fedora over the years, in visual looks, feature sets, support and more, as the community has spoken and Red Hat has listened.

That is why I believe that Fedora is a great distribution with a great future, and is most certainly a hat with a history.

(Author’s Note: This was originally written for Linux+ magazine, for their February issue, in celebration of Fedora.)