Zimbra Collaboration Suite 5.0.18 has been released

We are pleased to announce version 5.0.18 of the Zimbra Collaboration Suite.

Key Enhancements:
21489 – CalDAV: support for calendar delegation (iCal 4/Snow Leopard/zimbraPrefAppleIcalDelegationEnabled)
33583 – Reduced duration of maintenance mode in backups (Just during MySQL & Lucene, not LDAP & first round of blobs.)
19702 – Trash/junk lifetime based on time message was moved instead of received (zimbraMailPurgeUseChangeDateForTrash)
06112 – ZCO: out of office assistant
29489 – ZCO: run local rules automatically (If Outlook & Zimbra filter act on same message, Outlook rule priority. Recommend disabling ‘Clear Categories on Mail’ under Tools > Rules & Alerts)

Notable Fixes:
21750 – Resource & location reservation across domains
30217 – Prevent Outlook from sending mail on behalf of another person within the organization
38112 – Creating appointments in iCal does not send the invite
38711 – Calendar view going blank
38288 – PST migration Outlook ’07 SP2 crash
36923 – Tagging of appointments in ZCO causes failure in blackberries
39002/3, 39027, & 37540 – No need to re-apply the July critical security patch.

(Further details on PMweb.)

Save data, save time, save yourself from anguish: Take a backup before upgrading. Backup and Restore Articles – Zimbra :: Wiki

5.0.18 Network Edition Release Notes

5.0.18 Network Edition Downloads

5.0.18 Open Source Edition: Release Notes & Downloads

Subscribe to the blog for the latest news. Hope you enjoy this release!
-The Zimbra Team

Source: Zimbra Forums

List of top open source BPM / workflow solutions

Every organization has its own distinct business processes, which differentiate it from its competitors.

Some companies have predefined processes, while in others the processes are defined by the employees themselves. Imagine what would happen if each customer support representative had their own way of managing a customer. Without a proper process in place, calls from customers can go unanswered or be transferred endlessly.

Lately, with the help of advanced web-based solutions, business processes and workflows can be managed through business process management (BPM) solutions. These solutions can be used to easily create applications that automate processes such as:

  • Change management
  • Quality control
  • Customer service
  • Claims management
  • Complaint management
  • Procurement

There are many BPM / Workflow solutions out there. The following are three open source BPM / Workflow solutions for you to evaluate before trying the proprietary ones.

Intalio

Intalio is an open source business process platform built around the standards-based Eclipse STP BPMN modeler and Apache ODE BPEL engine, both originally contributed by Intalio.

Intalio Enterprise provides all the components required for the design, deployment, and management of the most complex business processes, including:

  • BRE (business rules engine)
  • BAM (business activity monitoring)
  • Portal
  • ESB (enterprise service bus)
  • ECM (enterprise content management)

Intalio is available in several editions, but what we’re most interested in is Intalio’s free community edition. This edition consists of two components: Intalio Designer and Intalio Server.

Intalio Designer allows you to model business-level processes, which can then be deployed to Intalio Server. Intalio claims that Designer is the only tool currently on the market that can turn any BPMN model into fully executable BPEL processes without writing any code.

Intalio Server is a high-performance process engine that can support the most complex business processes, deployed within mission-critical environments.

If your organization is planning to automate business processes, Intalio should be on your shortlist.

ProcessMaker

ProcessMaker is an open source business process management (BPM) and workflow software designed for small and mid-sized businesses (SMBs).

ProcessMaker is a user-friendly solution for managing workflow effectively and efficiently.

Business users and process experts with no programming experience can design and run workflows that increase transparency, radically reduce paperwork, and automate processes across systems, including human resources, finance, and operations.

With ProcessMaker you can easily create workflow maps, design custom forms, extract data from external data sources, and use many more key features to optimize workflow management and business operations.

One key advantage of ProcessMaker is its online library, which provides many process templates for you to download and begin editing. The learning curve is also reduced, since you start from a template that is already built and tested. Some sample process templates include:

  • Credit card application
  • Expense report process
  • City district zoning request

Updated 12 April 2009: Read my initial review of ProcessMaker

CuteFlow

Unlike Intalio and ProcessMaker, CuteFlow is a web-based open source document circulation and workflow system.

Users can define “documents” which are sent step by step to every station/user in a list. Imagine the scenario where a particular document is sent to various parties for review and approval before it’s classified as a final document for submission.

CuteFlow helps to automate the document circulation process within your office.

All operations, such as starting a workflow, tracking, workflow definition, or status observation, can be done within a comfortable and easy-to-use web interface.

Some key features of CuteFlow include:

  • Free and open source!
  • Web-based user interface
  • Integration of workflow documents in e-mail messages
  • Unlimited number of senders, fields, slots, and receivers
  • Workflows can attach data and files
  • Flexible user management with substitutes

So there you go: some open source BPM / workflow solutions your organization can use to begin automating business processes. One caveat about automating a business process: make sure the process is already mature. To find out why, try reading Process maturity level as a principle factor for process automation (First Pillar). I agree with many of the points stated there. Just think about it: if a process is new and not fully tested, it’s bound to have loopholes or problems in it. If there are real problems in an immature process, automating it will only accelerate them. So beware: while technology helps to speed things up, speeding up problems and loopholes can be disastrous.

Source: WarePrise

Recovering your Linux server with a Knoppix rescue disk

Among the many positive aspects of working with Linux is its excellent recovery options. If your server doesn’t boot properly, you can still access everything on it using a recovery disk. Here you will learn how to do this using Knoppix. This article doesn’t focus on a particular version of Knoppix, and the procedure works on almost all Linux distributions.

Booting your server from a Knoppix rescue CD is easy: just put the disk in your server’s optical drive and restart the server, and the Knoppix operating system loads automatically. But that doesn’t immediately give you access to the files on your hard drive. You have to mount all of your server’s file systems yourself, assuming you can still mount them. The procedure described in this article helps you fix boot problems that are not caused by file system errors. If your server’s file systems have errors that prevent them from being mounted, this procedure is still a useful starting point, but additional repair steps will be required.

Mounting the Linux file systems

To access the root file system on your server from a Knoppix rescue CD, you’ll have to mount it, and the same goes for the other file systems on your server. When using a rescue system, you mount the root file system on a temporary directory. Most distributions have a directory /mnt which exists for this purpose, so it’s a good idea to mount your file system there. But there is a potential problem: most utilities assume that your configuration files are in a very specific directory. If your distribution is looking for /boot/grub/menu.lst, for instance, the tools may be incapable of understanding that it is in /mnt/boot/grub/menu.lst instead. Therefore, you need to make sure that everything mounted on /mnt is presented to the operating system as if it were mounted directly on the / directory. The following procedure shows you how to do that.

  1. Boot your computer, using the Knoppix CD. You’ll see the Knoppix welcome screen next. From here, press Enter to start loading Knoppix.
  2. While loading, Knoppix will wait a while to show you all available languages. If you don’t select anything, English is started automatically. Once completely started, you’ll get access to the Knoppix desktop.
  3. To restore access to your server, you’ll need to open a terminal window from Knoppix. By default, after opening a terminal window you’ll get the access permissions of an ordinary user. To be able to repair your server, you need root permissions. You’ll get them using the sudo su command.
  4. Now use the mount command. It shows that no file systems from your hard drive are currently mounted; everything you see lives in a RAM drive.
  5. In case you don’t know exactly how storage on your server is organized, you’ll need to check which partitions and disks are used. The fdisk -l command is a good start: it shows all disks available on your server (even if they are LUNs offered by a SAN), and it shows which partitions exist on those disks. The disk names typically start with /dev/sd (although other names may be used), followed by a letter: the first disk is /dev/sda, the second disk is /dev/sdb, and so on. On the disks, you’ll find partitions, which are numbered as well. For instance, /dev/sda1 is the first partition on the first disk in your server. Here is an example of what a typical disk layout may look like:
    ilulissat:/ # fdisk -l
    Disk /dev/sda: 8589 MB, 8589934592 bytes
    255 heads, 63 sectors/track, 1044 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14          30      136552+  82  Linux swap / Solaris
    /dev/sda3              31         553     4200997+  83  Linux
  6. Now it’s time to find out what exactly you are seeing. If it looks like the example above, it’s not too hard to find the root file system. You can see that two partitions use partition type 83 (which means they contain a Linux file system). One of them, however, spans only 13 cylinders, and as each cylinder here is only about 8 MB, it is far too small (roughly 100 MB) to contain a root file system. The second partition uses partition type 82, so it contains a swap file system. Therefore, the only partition that can possibly contain the root file system is /dev/sda3.
  7. Now that you know which partition contains the root file system, it’s time to mount it. As mentioned before, it’s a good idea to do that on the /mnt directory; Knoppix doesn’t use it for anything useful anyway. So in this case, the command to use would be mount /dev/sda3 /mnt.
  8. A quick check should show you at this point that you have correctly mounted the root directory. Before you activate the chroot environment, you’ll also need access to some system directories, the most important of which are /proc and /dev. These directories are normally created automatically when booting. That means they do exist in your Knoppix root directory, but once you’ve made /mnt your new root directory, you’ll find them empty. As you really need /proc and /dev to fix your problems, mount them before doing anything else. The next two commands mount them.
    mount -o bind /dev /mnt/dev
    mount -t proc proc /mnt/proc
  9. At this point, your entire operating system is accessible from /mnt. You can verify this by changing into the directory (use cd /mnt); your prompt will now look like root@Knoppix:/mnt#. Now use the command chroot . to make the current directory (.) your new root directory. This brings you to the real root of everything installed on your server’s hard drive.
  10. As Linux servers tend to use more than one partition, you may have to mount other partitions as well before you can really fix all problems. If, for instance, the directory /usr is on another partition, you won’t be able to do much before making it accessible too. The only task left is to find out which file system is mounted where, and there is an easy answer to that question: /etc/fstab. This file lists exactly what is mounted when your server boots normally. So check the contents of /etc/fstab and perform all mounts defined in there manually, or make it easy on yourself and use mount -a, which automatically mounts all file systems that haven’t been mounted yet.
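The whole procedure above can be condensed into a small shell sketch. This script is not part of Knoppix or of any distribution; the rescue_mount helper, the RESCUE_DRY_RUN switch, and the /dev/sda3 partition are illustrative assumptions that you must adapt to your own disk layout:

```shell
#!/bin/sh
# rescue_mount: mount a server's root file system under /mnt, bind the
# system directories, and mount everything else listed in its /etc/fstab.
# The root partition passed as $1 must match your own layout (step 6).
rescue_mount() {
    root_part=$1
    # With RESCUE_DRY_RUN set, print the commands instead of running them.
    run=${RESCUE_DRY_RUN:+echo}
    $run mount "$root_part" /mnt          # step 7: mount the root file system
    $run mount -o bind /dev /mnt/dev      # step 8: make device files visible
    $run mount -t proc proc /mnt/proc     # step 8: make kernel state visible
    # steps 9-10: chroot into the mounted system and mount the rest
    $run chroot /mnt /bin/sh -c 'mount -a'
}

# Example (dry run, so nothing is actually mounted):
RESCUE_DRY_RUN=1 rescue_mount /dev/sda3
```

Running it with RESCUE_DRY_RUN set prints the commands so you can review them; run it again as root without that variable to perform the mounts for real.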

Now you’ll have full access to all utilities on your server’s hard drive and, more importantly, to all files. It’s time to analyze what went wrong and restore access. But make sure you have a backup before you start changing anything!

To fix any problems on your computer, you have to restore full access to your system. You do this by mounting all of its file systems and then making them accessible with the chroot command. This ensures that all tools see the server’s file system as it really is, which makes it a lot easier for you to restore access.

ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Vugt is also a technical consultant for high-availability (HA) clustering and performance optimization, as well as an expert on SLED 10 administration.

Fedora: A Hat with a History

Fedora is a giant among giants, in the shadow of a giant from which it was born. But every giant is born of humble beginnings.

So to understand the giant, you first have to understand where it came from. Let me take you through a short history of Fedora, show you where it all began, and point out some of the interesting, if not curious, steps it took to become what it is today.

To start with the very deepest roots, we need to look at the kernel that makes Fedora what it is: the Linux kernel, first released in 1991 by a then-college student named Linus Torvalds.

A short time later, the first Linux distros began appearing, starting with MCC Interim, which gave rise to SLS, which in turn was the grandparent of Slackware, itself the parent of numerous other major distros, including SUSE Linux, and still the über-geek distro of choice for many senior Linux users.

Red Hat, the parent of Fedora, was a little late to the game, as it didn’t get its official start until 1994. But just like Slackware, it quickly became the parent of numerous other distributions, including Caldera, Mandrake (later renamed Mandriva), Red Flag, and even CentOS.

The original beta of Red Hat was released on July 29th, 1994. The official 1.0 release came out in May of 1995 and was, interestingly enough, codenamed Mother’s Day. Whether or not it was actually released on Mother’s Day is a subject of fierce debate, but it is still an undeniably memorable achievement. The biggest reason is that Red Hat 1.0 beat Microsoft’s Windows 95 to market by almost four months, which is quite an achievement, considering that initial development of Red Hat started after that of Windows 95.

After that, development was fast and furious, with four major versions, and a number of sub versions all being released within an amazing two and a half years. After that, the pace of development slowed into a more predictable, and methodical release schedule, with four more versions appearing gradually over the next five and a half years.

It was shortly after this period of relative quiet that the Fedora project was born. In March of 2003, Red Hat released version 9 of its Linux distribution, and shortly afterwards began migrating Red Hat development from an internal closed development (not closed source) system to an open development system. This gave rise to the separation of Red Hat Linux into Fedora Core and Red Hat Enterprise Linux (RHEL).

From mid-2003 to late 2006, Fedora Core continued forward, separate yet still intimately connected to RHEL, with Fedora becoming the stepchild of RHEL and the preferred distribution for SMBs and home users, and RHEL remaining the undisputed choice for large enterprises. Then, in late 2006 and early 2007, a decision was made to shorten the name Fedora Core to plain old Fedora. Version 7 was released under the new, abbreviated name.

There have been three versions since the new name was adopted (7, 8, and 9), with the latest released in May of 2008. Fedora 10 is already in the pipeline and due to be released in 2009, continuing a project that has been over fifteen years in the making.

But despite its longevity, no distro is complete without a good desktop environment to go with it. For the people at Red Hat, and eventually Fedora, the choice was clear: the Gnome desktop would be just what the doctor ordered, and they adopted it as their desktop of choice. However, this didn’t occur until 1998, and even then it was only offered as an installation option.

Anyone using Red Hat at the time typically had few choices in desktop environments, with KDE leading the way, and few reasons to want one, given that most people who used Red Hat didn’t need a graphical desktop. That’s because most users in 1998 were systems admins, and most Red Hat machines were servers.

But that didn’t stop the adoption of Gnome as Red Hat’s default desktop. Fresh off the development presses, and still wet behind the ears, Gnome was first offered in Red Hat 5.1 as a preview release, but it was not officially added as a standard element of the distro until 1999, with the release of Red Hat 6.0.

Again, not many people were using Red Hat as a desktop OS at the time; however, the number had increased enough to warrant the inclusion of Gnome in the distribution. So why Gnome, and not the (at that time) more advanced KDE, or even one of the other window managers, such as AfterStep or TWM? The answer lies in the polish and feature set of each window manager.

While good, TWM, AfterStep, and others were either too spartan or lacked the polish that came with Gnome and KDE. Sure, Gnome and KDE weren’t perfect by any current standard, but they were a lot farther along than most other window managers or desktop environments.

Once the choice was made, Gnome continued to grow, actually gaining a big boost in developers, support, and even acceptance among the Linux world because of this, growing into the giant it is today. And all along the way, Red Hat, and later Fedora, have stayed strong with Gnome, not wavering from their dedication to it all the way through.

One interesting fact about the relationship between Red Hat and Gnome is that while Red Hat used and preferred Gnome as the primary desktop environment for its distribution, it also included KDE and offered it as an alternative for those who wanted Red Hat but not the Gnome desktop. Red Hat still encouraged users to stick with Gnome, but it didn’t want to shut anyone out, and thus included both.

When the Fedora project was born in 2003, Gnome 2.2 was already available and firmly entrenched in Red Hat, which spared the Fedora developers the interesting experience of moving between major versions of a desktop environment, even though that will come eventually, in the not too distant future.

Overall, I myself prefer the KDE desktop environment, however, I’m also a firm believer in choosing what you want, and tweaking a distribution your way to meet your needs, therefore I’m quite intrigued and pleased to see Red Hat, and now Fedora, so openly supporting choices such as this in their distribution. Because it’s one thing to say you support Open Source, and something else to actually show it through your actions.

Because Open Source is about choice and freedom, and the Fedora project (and Red Hat) have delivered both, in big and small ways. That is why I believe that Fedora is a great distribution with a great future, and is most certainly a hat with a history.

Another interesting tidbit of history that spawned from Red Hat is the RPM package system. RPM stands for Red Hat Package Manager; although it’s no longer called that, that is the source of its name. RPM is both a package management system under Red Hat (and later Fedora) and a software package format.

Essentially, the RPM file format is somewhat of a cross between an archive file, similar to Zip or Tar, and an installer file, as it contains both multiple files packaged together as a single file (with the extension .rpm), as well as the necessary information required to install and configure the included files.

The RPM package manager is a command line tool designed to take RPM files, unpack them, and then follow the included instructions to install everything to its proper location. This works in much the same way as other package systems, like deb, ports, and others.

But the RPM package manager doesn’t just install software; it can also uninstall, reinstall, verify, query, and update software packages on the system. RPM is also the standard package management system of a wide variety of other distributions, such as SUSE, CentOS, Mandriva, and more, making it a core package management standard in the Linux world. It’s also part of the Linux Standard Base, making it easy for other distributions to adopt.
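As a rough illustration of the operations just listed, here is what the standard rpm command line looks like for each. The package "foo" and the .rpm file names are placeholders, and exact flags can vary between rpm versions; the commands are collected in a function purely so they are not executed as written:

```shell
#!/bin/sh
# Hypothetical rpm invocations for the operations described above.
# "foo" and the .rpm file names are placeholder examples.
rpm_examples() {
    rpm -ivh foo-1.0-1.x86_64.rpm   # install, verbosely, with a progress bar
    rpm -Uvh foo-1.1-1.x86_64.rpm   # update (upgrade) to a newer version
    rpm -e foo                      # uninstall ("erase") the package
    rpm -q foo                      # query the installed version
    rpm -qpl foo-1.0-1.x86_64.rpm   # list the files inside a .rpm archive
    rpm -V foo                      # verify the installed files on disk
}
```

The query and verify modes (-q, -V) are what set RPM apart from a plain archive extractor: the package database remembers what was installed and can check it later.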

But RPM wasn’t always the default package manager for Red Hat and, later, Fedora. Back in its early days, prior to version 2.0, Red Hat used a system called RPP. It was a command line tool providing simple installation, powerful query features, and package verification, all necessary tools for the then-emerging Red Hat.

But despite all these great features, RPP was doomed from the beginning, because it was designed too tightly around Red Hat, leading to issues that forced Red Hat to ship numerous versions of the same packaging system in each release just to deal with them. This relative inflexibility, along with some other issues, eventually killed RPP shortly after version 1.0 of Red Hat.

To solve the problems of RPP, a new package manager called PM was created, taking the best of RPP, PMS (package management system), and several others, and bundling them together. That system failed to produce good results either.

That’s where RPM came in. With the experience of two failed package management systems behind them, Red Hat’s developers were able to create RPM, which launched with version 2.0. While not yet perfect, it was far better than RPP, and a welcome change.

But as with any change, good or bad, standards need to be created to ensure that consistency is maintained throughout the development process, as everything done to a package management system affects everyone: upstream and downstream developers, distributors, and end users.

So to focus the development of RPM in the right directions, the following five goals were adopted:

• Work on different processor architectures
• Simplify the building of packages
• Make installing and uninstalling easy
• Start all work from the original source code
• Make it easy to verify that packages installed correctly

This simple set of five goals solidified RPM’s development and quickly solved Red Hat’s package management woes. By 1998, RPM was the official package management system for Red Hat Linux.

RPM proved to be an effective package management system for the next several years. But during that time RPM lost its way, along with Red Hat. As employees began to slip away, Red Hat was forced to take a less direct role, and more of a management role, in RPM’s development, allowing the project to drift off in directions they really didn’t want.

It wasn’t until Fedora became a reality that they decided to wrest back control of RPM and retake command of its core development. But to do this, they had to take several steps back and fork RPM from the then-current 4.4.2 codebase. While not the best move, as the code was several versions behind the latest work, it gave them a starting point as close to their ideal design as possible. It was also the base code used by both Red Hat and Novell at the time.

And forking a project to improve the end product and fix development woes isn’t all that bad a thing. Take, for example, the X project. At version 4.4, X.Org split away from XFree86, the then-primary X window system, and took development in a direction that was not only requested but needed.

Since the XFree86 developers dug in their heels and refused to move in the directions requested by the community, X.Org took over as the primary X window system, and XFree86 was tossed to the curb. As a result of that inflexibility and refusal to listen to the community, it’s now a dead project, and X.Org is screaming forward as the X windowing system of choice.

However, not all splits have to be detrimental to both parties. Take KDE and Gnome. KDE was originally the only true desktop environment in the entire FOSS world, but arguments and debates arose that caused some developers to split off and form Gnome, which in the grander scheme of things has been one of the best things ever to happen to either KDE or the FOSS world as a whole.

And sometimes forks don’t work out. Take Compiz, for example. It at one time split into Beryl and Compiz over opposing ideologies and developer quarrels centered on which features should be included in the eye-candy-focused window manager. While both did fine on their own, they eventually saw the need to merge, and shortly after doing so began a fairly rapid downhill spiral towards death.

Sure, they’re still alive today, but unless something happens soon to change their course, especially with KDE 4’s KWin, and eventually Gnome’s own answer to Compiz coming in its next version, they might soon find themselves a footnote in history.

But so far, only good things have come of this fork of Red Hat’s RPM system. Not only did they regain control of it, they’ve made some much-needed improvements to RPM that I, and many others like me, saw were needed. During the years when Red Hat didn’t have control over it (or at least direct control), I saw RPM going downhill as a package management system, so much so that it drove me away from having anything to do with Red Hat at all.

And I know I wasn’t the only one. I retreated to other distributions with old, tried-and-true systems like source building, ports, Debian packages, and others, partially because of that. There were other reasons, but that was one of the primary ones, mostly because RPM had become difficult to work with, unreliable, unpredictable at times, and downright painful to mess with.

However, if handed a system with RPM on it today, I’d have absolutely no issues working with it. RPM has improved a lot since Red Hat forked it, and should continue to improve over time.

Another big change over time is the number and types of diagnostic tools. During the early days of Red Hat, performance was everything, and squeezing every ounce of processing power out of the limited number of processor cycles available at the time was key to success.

But as time went along and resources got more plentiful, the number of actively running resource monitors slowly dropped off. The pictures in this article show the gradual decline of their prominence on the desktop. By the time you get to the Fedora days, process and resource monitoring has been shoved to the fringes, used only by those still wishing to squeeze every last ounce of performance out of their machines.

Another gradual, but intriguing change has been in the area of applications. When Red Hat first started out, a lot of applications were console based, including common productivity tools such as word processors and web browsers.

But not everything was text only. Some things, like audio players such as SteelWav, dominated the desktop. These provided the end user with a reasonable user experience without requiring them to become a geek master of the command line.

Same with Mozilla (the original Mozilla, not the modern version) for web browsing. These were great tools in their time, but even they weren’t impervious to the march of progress, as they were later replaced with XMMS and Netscape respectively.

Visual themes were also fairly spartan at first, relying on the older blocky style layouts and designs for buttons, windows and more. Over time however, they evolved into more and more advanced designs, throwing away the old blocky window designs for smoother, more elegant designs.

And Red Hat did not shy away from including these eye candy improvements in its various versions as they became available. This is likely because a user is most comfortable in an environment that is pleasing to the eye: it reduces fatigue and thus makes the experience more enjoyable, which in turn translates into increased loyalty and productivity.

But the thing is, despite all the amazing things Red Hat has put into its distribution, and later into Fedora, it’s still all about choice: you can stick with the default designs in Fedora (and RHEL) or change them to something more pleasing. That’s the joy of Open Source.

Say you don’t like something, then change it! We’ve seen lots of change in Red Hat and Fedora over the years, in terms of visual looks, feature sets, support and more, as the community has spoken and Red Hat has listened.

That is why I believe that Fedora is a great distribution with a great future, and is most certainly a hat with a history.

(Author’s Note: This was originally written for Linux+ magazine, for their February issue, in celebration of Fedora.)

The Customer Comes Second

Whenever a new employee joined, the chairman of a world-leading corporation with annual revenue of over 6 billion USD would personally pour them a cup of tea.

He would personally introduce new hires to the company, to its business philosophy, and to what he expected of them. He would also explain the company's own principles and the benefits it offered each employee.

That leader was Hal Rosenbluth, the man who turned Rosenbluth International from a small family business into a travel corporation with more than 5,300 employees and nearly 1,000 offices across every state of the US and in 53 countries worldwide, with annual sales of more than 6 billion USD and growth of about 1 billion USD a year.

He is also co-author of the book "The Customer Comes Second" (published in Vietnamese as "Khách hàng chưa phải là thượng đế"). The book lays out the secret of Rosenbluth International's success: putting employees first. The philosophy sounds paradoxical, but he applied it successfully, making Rosenbluth International a leading corporation in the US travel industry.

Rosenbluth says he believes absolutely in the importance of happiness in the workplace; it is unquestionably the key ingredient of good service. Customers are, of course, the reason a company exists, but to serve customers best you must value your employees first, because employees are the ones who serve customers, and the highest level of service can only come from the heart. Whichever company wins the hearts of its employees will deliver the finest service.

This echoes the old tale of the one-horse carriage. Think of employees as the horse: if the business owner seats the customers in the carriage and then places the carriage in front of the horse, the customers will not get far. Even if the company plies them with champagne and caviar on board, they will not move an inch as long as the horse stays behind them.

Companies only fool themselves when they believe that "the customer is king." Employees will not put customers first simply because their leaders expect them to. Only when employees see how much they themselves matter will they sincerely share that feeling with others. Building on this philosophy, Hal Rosenbluth created a company with a customer-retention rate of 98%, a record figure.

Businesses have long embraced the motto "the customer is king," yet in practice living up to it is anything but easy. At a recent seminar on the relationship between businesses and customers, Thái Hà Books put forward a view that runs counter to what business owners usually preach: the customer comes second.

The reasoning goes like this: if a company keeps its employees constantly stressed, anxious and frustrated, they will carry those feelings home, creating tension in the family, and then bring the same mood back to work the next day, and the day after that, in a cycle that repeats itself. Gradually, passion and enthusiasm for the job give way to dread, with the pressure to get the work done weighing on their shoulders. Worse still, your employees will then meet customers in that same troubled state. The customers end up on the receiving end of all that irritation, and they can hardly be the kings the business owner wants them to be.

Rosenbluth, the head of Rosenbluth International, created a fun and productive working environment with the right people in the right roles, serving customers wholeheartedly. He views service as the combination of three elements: attitude, art and process. He gives every employee the conditions to perform at their best, constantly encourages the creation and nurturing of ideas, seeks out opportunities, and uses technology to free employees so they can be more creative and focus on taking better care of customers.

For more than a decade, his strategies and ideas have motivated many CEOs, business leaders, managers and practitioners, among them Jeff Greenfield and Scott McNealy. These methods have proven increasingly effective as Rosenbluth International has held its position as the leading travel-services corporation in the US even after the September 11 attacks.

Ubuntu 9.04 vs. Fedora 11 Performance

Fedora 11 was released earlier this week so we have set out to see how its desktop performance compares to that of Ubuntu 9.04, which was released back in April. Using the Phoronix Test Suite we compared these two leading Linux distributions in tasks like code compilation, Apache web server performance, audio/video encoding, multi-processing, ray-tracing, computational biology, various disk tasks, graphics manipulation, encryption, chess AI, image conversion, database, and other tests.

For this testing, the system we used was an Intel Core 2 Duo E8400 clocked at 4.00GHz, an ASUS P5E64 WS Professional motherboard, 2GB of DDR3 memory, a Western Digital 160GB WD1600JS-00M SATA hard drive, and an NVIDIA GeForce 9800GT graphics card. Ubuntu 9.04 ships with the Linux 2.6.28 kernel, GNOME 2.26.1, X Server 1.6.0, GCC 4.3.3, and an EXT3 file-system by default. Fedora 11 was using the Linux 2.6.29 kernel, GNOME 2.26.1, X Server 1.6.2 RC1, xf86-video-nouveau 0.0.10, GCC 4.4.0, and an EXT4 file-system by default. The x86_64 builds of both Fedora 11 and Ubuntu 9.04 were used.

We were using the latest Phoronix Test Suite code for managing our testing process, which will go on to form the 2.0 Sandtorg release. Older versions of our testing software are available in the Fedora and Ubuntu repositories. The test profiles we used included timed PHP compilation, Apache benchmarking, LAME MP3 encoding, Ogg encoding, FFmpeg, GMPbench, Bwfirt, C-Ray, timed MAFFT alignment, Threaded I/O Tester, PostMark, Dbench, GraphicsMagick, OpenSSL, Crafty, Sunflow Rendering System, dcraw, Minion, SQLite, and PostgreSQL pgbench.

When measuring how long it took to build out PHP 5.2.9 on each distribution, it was faster on Ubuntu by about four seconds. Ubuntu 9.04 is using the older GCC 4.3 branch while Fedora 11 is using the newest GCC 4.4 series.

Ubuntu 9.04 soundly beat Fedora 11 in the Apache Benchmark's static web page serving performance, sustaining more than 58% more requests per second than Fedora 11.

When encoding an MP3 file using LAME, the lead was in Fedora's favor, but by just about 4%.

When encoding an Ogg file it was more favorable on Ubuntu, but by a barely significant difference.

The time it took to encode an AVI to NTSC VCD using FFmpeg 0.5 was essentially dead even between the 64-bit versions of Ubuntu 9.04 and Fedora 11.

The GMPbench performance was very close between Ubuntu and Fedora.

With Bwfirt ray-tracing, the performance was essentially the same between Fedora and Ubuntu.

With C-Ray, which is supposed to be a very simple ray-tracing engine, the lead was in Ubuntu’s favor by about 12% — 110 seconds versus 125 seconds for Ubuntu and Fedora, respectively.

The timed MAFFT multiple sequence alignment was very close between Ubuntu 9.04 and Fedora 11.

Fedora 11 did significantly better than Ubuntu 9.04 when it came to 64MB writes with 32 threads using the Threaded I/O Tester to benchmark the Serial ATA disk under Ubuntu and Fedora. Ubuntu 9.04 had a significantly higher latency. This large difference is likely due to performance improvements found in the Linux 2.6.29 kernel, which Fedora uses, as well as the EXT4 file-system.

While the 64MB writes were faster under Fedora, the 64MB reads were better with Ubuntu.

NetApp’s PostMark had more transactions per second under Fedora 11 compared to Ubuntu 9.04 — a 38% difference.

Fedora 11 continued to perform much better at the disk tests when it came to Dbench too with twelve clients. Fedora 11 was nearly four times faster than Ubuntu 9.04!

Turning to the OpenMP-based GraphicsMagick for looking at the image manipulation performance, Ubuntu 9.04 was slightly faster. Fedora 11 had averaged 171 transactions per second while Ubuntu 9.04 was at 179 transactions per second, or about 5% faster, with the HWB color space conversion.
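As a quick back-of-the-envelope check (not part of the original article), the quoted transactions-per-second figures do work out to that rough 5% margin:

```python
# Relative speedup of Ubuntu 9.04 over Fedora 11 in the HWB
# color-space conversion, using the figures quoted above.
fedora_tps = 171   # transactions/sec, Fedora 11
ubuntu_tps = 179   # transactions/sec, Ubuntu 9.04

speedup_pct = (ubuntu_tps - fedora_tps) / fedora_tps * 100
print(f"Ubuntu is {speedup_pct:.1f}% faster")  # → Ubuntu is 4.7% faster
```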

With local adaptive thresholding in GraphicsMagick, the results were identical between the Canonical and Red Hat operating systems.

The OpenSSL performance was also very close between Fedora and Ubuntu.

Crafty, which is an open-source chess engine, performed about the same under the latest stable releases of Ubuntu and Fedora.

The Java-based Sunflow Rendering System also had nearly identical results.

More similar results… This time with dcraw as we measured how long it took to convert several files from RAW format to PPM files.

The Minion constraint solver with the Solitaire benchmark also was close between Ubuntu 9.04 and Fedora 11.

Fedora 11 did much better than Ubuntu 9.04 with the SQLite performance. This large difference is explained by a serious kernel regression we have previously reported on several occasions. After being present in the kernel for several releases, it was finally fixed with the Linux 2.6.29 kernel. Ubuntu 9.04, with its Linux 2.6.28 kernel, is still impacted by this SQLite regression, but the fix should arrive in Ubuntu 9.10 unless the regression reappears. Fedora 11 is also using EXT4 by default, while Canonical is finally moving to this updated file-system with Ubuntu 9.10.

Fedora 11 not only did better with its SQLite database performance, but PostgreSQL ran much faster too under the operating system that’s codenamed Leonidas.

In a number of the benchmarks the results were close, but in a few areas there are some major performance differences. In particular, with the test profiles that stress the system disk, Fedora 11 generally did much better — in part due to the EXT4 file-system and newer Linux kernel. Fedora also did much better with the database tests like SQLite and PostgreSQL. Ubuntu 9.04 though had done a better job with the Apache Benchmark and C-Ray. You can run your own benchmarks and compare these results using the Phoronix Test Suite.

Configuring YUM on Linux

Last time we visited Yellow Dog Updater, Modified (YUM), in 2007, we created a repository and also configured access to repositories in RHEL5. In this, our second look at YUM, we'll configure YUM using its main configuration file, yum.conf, which resides in /etc. We'll also take you through some basic yum commands that should be part of your repertoire.

For any yum newbies, a quick definition and a look at history. YUM is a package manager (an installer and remover) for RPM systems. It is tailor-made to update groups of machines without having to update each specific RPM. The software locates and obtains the correct RPM packages from repositories, freeing you from having to manually find and install new applications and/or updates. The beauty of YUM is in its simplicity: you can use a single command to update all system software. Way back when, RHEL4 used up2date as its package manager; RHEL5 uses YUM, based on version 3, and up2date is actually used as a wrapper around YUM in RHEL5. The product was developed by Seth Vidal (who now works for Red Hat) and a group of volunteer programmers, and is coded in Python. It is now up to version 3.2.23.

yum.conf
The file itself is made up of two sections. The first is the main section and the second is the repository section. You can put your repositories in this file or in separate files with a .repo extension (typically under /etc/yum.repos.d/). You can have more than one repository section in one configuration file, but there can be only one main section. Here is an example of a yum config file.

[main]
cachedir=/var/cache/yum
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
pkgpolicy=newest
distroverpkg=redhat-release
tolerant=1
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
metadata_expire=1800
timeout=10

[myexamplerepo]
name=RHEL 5 $releasever - $basearch
baseurl=http://local/path/to/myyum/repository/
enabled=1
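Alternatively, that same repository section can live in its own file, as mentioned above. A sketch of what such a separate file might look like (the path and file name here are illustrative, not mandated):

```ini
# /etc/yum.repos.d/myexamplerepo.repo  (hypothetical file name)
[myexamplerepo]
name=RHEL 5 $releasever - $basearch
baseurl=http://local/path/to/myyum/repository/
enabled=1
gpgcheck=1
```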

Let’s describe some important fields:

Cachedir: The directory where yum stores its cache and database files.

Keepcache: There are two choices here, 0 and 1. 1 informs yum to keep the cache of headers and packages after a successful install. The default is 1.

Tolerant: There are two choices here, 0 and 1. Setting this as 1 allows yum to be tolerant of errors on the command line. The default is 0.

Gpgcheck: There are two choices here, 0 and 1. 1 enables GPG signature checking on packages in all repositories, including local package installation.

Metadata_expire: The time, in seconds, after which repository metadata expires.

Timeout: The number of seconds yum waits for a connection before timing out.

Exactarch: There are two choices here, 0 and 1. 1 tells YUM to update only the architectures of installed packages. For example, if you have this enabled you won’t be able to install an i386 package to update an i686 package.

Obsoletes: This affects updates only and enables YUM's obsoletes-processing logic. It's particularly useful when doing distribution-level upgrades.

The repository section has the information required to find packages during package installation, updating and dependency installations. The mandatory field descriptions are as follows:

ID: a unique single word string which is the identifier of the repository.

Name: The string that describes the repository.

Baseurl: The url where the actual repository is housed.

Some optional fields include: gpgcheck, gpgkey, exclude and include. The exclude and include fields are similar to the ones used in the main section of the file, but apply only to a specific repository.
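Since yum.conf uses plain INI syntax (and yum itself is written in Python), one quick way to sanity-check an edit before deploying it is to parse the file with Python's standard configparser module. This sketch parses an inline copy of the example above rather than touching /etc; the repository name is the illustrative one from the article:

```python
# Parse a yum-style config and list its repository sections.
import configparser

CONF = """
[main]
cachedir=/var/cache/yum
gpgcheck=1
metadata_expire=1800

[myexamplerepo]
name=RHEL 5 example repo
baseurl=http://local/path/to/myyum/repository/
enabled=1
"""

cfg = configparser.ConfigParser()
cfg.read_string(CONF)

# Everything except [main] is a repository section.
repos = [s for s in cfg.sections() if s != "main"]
print("repositories:", repos)  # → repositories: ['myexamplerepo']

for r in repos:
    # Every repository needs at least a name and a baseurl.
    assert cfg.has_option(r, "name") and cfg.has_option(r, "baseurl")
    print(r, "->", cfg.get(r, "baseurl"))
```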

If you are uneasy about manually editing config files, be extremely careful before doing so. It is better to use GUI software to help you configure YUM than to break a currently running YUM-based system. If you are going to edit these files manually, make sure you first play around with them in a test environment and/or have good backups. It takes just five seconds to issue the following command:

# cp /etc/yum.conf  /etc/yum.conf.old

Please use this command prior to manually editing this file.

yummy commands
YUM has dozens of commands that are part of its system. Try to learn some of the key commands that you’ll be using routinely. In this section we’ll discuss some of these commands.

# yum list

This lists all packages available in the configured repositories as well as all packages installed on the system. There are a variety of options with this command. One option is:

# yum list installed

This is similar to running rpm -qa, which prints a list of all installed packages. By default, yum list without any options lists all packages in all the repositories plus all the packages installed on your system. (Note: "yum list all" and "yum list" give the same output.)

# yum info

Displays information about any package – either installed or available.

# yum search

Searches the available package metadata for information about packages.

# yum clean

The yum clean command allows you to clean up the cached metadata and package files that YUM uses during its normal operations. This can free up a lot of disk space.

# yum groupinfo groupname

This provides you with detailed information for each group including description, mandatory, default and optional packages.

We've focused on using YUM on RHEL, but it should also be noted that SLES 10.1 has added support for YUM repositories in YaST. Many other distributions also provide YUM support, though some do not, so if YUM is really important to you, check your documentation carefully. Finally, as those of you who have already worked with YUM realize, YUM itself is a command-line utility. If you prefer GUI software, there are several GUI utilities that interface with YUM, including pup, pirut (the default Fedora GUI as of version 5), and Yum Extender. YUM may not be rocket science to use, but you will need to take some time to learn it properly. Like any other new software, the more time you spend learning it and playing with it in a sandbox, the less time you will spend fixing it later.

ABOUT THE AUTHOR: Ken Milberg is a systems consultant with two decades of experience working with Unix and Linux systems. He is a SearchEnterpriseLinux.com Ask the Experts advisor and columnist.

Inbox Innovation: Zimbra Adds New Gadgets and Gallery

Zimbra's open source roots have always been of great importance to both the company and the Zimbra Collaboration Suite (ZCS). When we set out to build a new collaboration system over five years ago, we wanted to bring a fresh perspective to the market, and a big part of that was our commitment to being open source. We understood that sharing ideas within the open source community keeps you one step ahead of competitors by iterating faster to give users what they want. A great example of how the community has flourished is the Zimlet development program.

Zimlets are simple but powerful extensions of ZCS that connect users’ email, calendar, and contacts with any number of outside services (for a couple of recent examples see Alfresco and Peru and TripIt).  Zimlet development growth in the community has been strong and steady, and we are excited to continue supporting the community’s work by providing a place where developers can feature the best of their integrations to share with other Zimbra users.   So, today we are launching an updated Zimlet Gallery where you can pick and choose from many handy new ZCS extensions.

At the same time, we also love seeing our Yahoo! friends continue to embrace openness as part of the Yahoo! Open Strategy. In addition to this announcement today, a number of our Yahoo! brethren are extending their platforms to become more open. Today, Yahoo! Mail is introducing applications which enable people to make online payments, access personal photos and more easily send large files directly from their inbox. In addition, My Yahoo! is adding even more third-party applications, driving enhanced personal productivity for users directly from their My Yahoo! start page. You can read more about the Mail and My Yahoo! updates on the Yodel and YDN blogs.

As part of the Zimlet Gallery launch today, we’d like to introduce you to a few new third-party Zimlets, including:

Xythos Zimlet – The Xythos Zimlet allows you to drag and drop email messages and file attachments directly into Xythos’ Enterprise Document Management System.  Secure document management is popular in the enterprise and universities; integration in email is key for ubiquitous adoption.

Processmaker Zimlet – The Processmaker Zimlet helps streamline workflows, like time-off requests, all within Zimbra email (see above).  This Zimlet is already becoming popular and is being deployed at Access America Transport and Ministerio de Vivienda by our Zimbra Partners.

In addition, Zimbra developers have created a handful of new Zimlets, including:

Place Sticky Notes on Email – The new Sticky Notes Zimlet allows you to attach and tag emails with “notes.” One can leave comments, reminders, additional info about the email and more. And Zimbra’s powerful search can search through emails based on the contents of the tags/notes attached to the email.

Email Highlighter – The Colored Emails Zimlet allows you to apply personally assigned colors to emails from specific senders such as a family member, your boss, etc. You can identify senders by color, but you can also create colored emails through tags, making it easier to prioritize any inbox.

Save Email as Documents – With one click, the Email-2-Doc Zimlet lets you save an important email as a Zimbra Document; it will automatically save any attachments as links in the Document as well. The email can then be edited and shared with others.

Some New Features in Fedora 11 – Leonidas

Faster boot

Fedora 11 boots the system in 20 seconds or less, thanks to kernel mode setting (KMS).

Automatic installation of fonts and file (MIME) types

This feature is very useful for users who are new to Linux, or even new to computers. Fedora 11 automatically helps users install applications, fonts, multimedia codecs and clipart with just a few mouse clicks. When you try to play a song, watch a video, open a file of an unknown type, or use a font that is not installed, Fedora 11 pops up a dialog asking whether you want to install the missing piece. This capability comes from enhancements in PackageKit, the cross-distribution package-management front end for Linux.

A more flexible, easier-to-use volume control

Aiming to make audio management more user-friendly, Fedora 11 introduces a completely new volume-control interface.

A thoroughly updated RPM package manager

RPM 4.7 is a major improvement in speed over previous versions, and its memory use has been heavily optimized. In an "install all packages" test, Fedora 10 used nearly 1.5GB of memory, while Fedora 11 needed only slightly more than 300MB.

Improved power management

Laptop and netbook users have another good reason to upgrade and/or switch to Fedora 11: major improvements in power management. With brand-new measurement and monitoring tools, you can view and configure which applications are switched on or off at each power-saving level.

EXT4 file system

Fedora 11 uses the EXT4 file system by default.

Compiling Windows applications

Fedora 11 supports building Windows applications directly, using the MinGW cross-compilation environment.

Other new features include the "Archer" GDB development branch for C++ and Python, an automatic bug-reporting tool, and improved fingerprint-reader support (usable for login in place of a regular password, among other things).

Major applications updated in this release include GNOME 2.26, KDE 4.2, Xfce 4.6, GCC 4.4, NetBeans IDE 6.5, Python 2.6, Thunderbird 3, and Firefox 3.1.