Top Five Linux Deployment Mistakes

The days when Linux was an unknown quantity in a business are largely over — but that doesn’t mean that every organization has tons of experience deploying Linux. Even if your organization has deployed Linux before, there are some common mistakes to be aware of. Here are five things you need to watch for when planning a new Linux deployment.

Too Much, Too Fast

Any deployment should start with a small test deployment if at all possible. Whether you’re using Linux as a server or desktop/workstation system, or both, trying to roll out Linux to the entire organization (unless it’s a very small shop) can be a recipe for trouble.

For mission critical server systems, you need to make sure that you can handle peak loads and ensure uptime. This means doing extensive load testing before you deploy Linux servers to see whether you need heftier hardware, configuration changes, etc.
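As a starting point, a couple of standard tools cover the basics of load testing. The commands below are a sketch, not a full test plan — the hostname, request counts, and resource figures are placeholders you’d tune to your own environment:

```shell
# Hypothetical load test against a STAGING server, not production.
# Sends 10,000 requests, 100 concurrently, and reports latency percentiles.
ab -n 10000 -c 100 http://staging.example.com/

# To stress the machine itself (CPU, memory) rather than a service,
# stress-ng is a common choice:
stress-ng --cpu 4 --vm 2 --vm-bytes 1G --timeout 60s
```

Numbers from a test like this tell you quickly whether you need heftier hardware before real users ever touch the system.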

For user systems, you need to make sure that there are no unpleasant surprises when the systems are put in front of real users who aren’t already Linux experts. A Linux desktop may seem easy as pie to the IT department, but even a minor interface change can befuddle less experienced users.

Start small, and find out what (if anything) needs to be changed before going for a full deployment.

Interoperability Hazards

As much as some might prefer otherwise, it’s a Windows-friendly world out there. The good news is that Linux plays well with others. The bad news is that Windows doesn’t.

Any test deployment should focus not only on the new systems, but on the interaction between the new and the old. Can your Windows users access the Linux systems seamlessly? If you’re deploying Linux desktops, can you get to Windows file shares or make appropriate use of other user-facing services that have traditionally been accessed via Windows?
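A quick, concrete sanity check here is to confirm a Linux desktop can actually see your Windows file shares using Samba’s client tools. The server, share, and user names below are placeholders:

```shell
# List the shares a Windows file server exposes (hypothetical host/user).
smbclient -L //winserver.example.com -U jdoe

# Mount a share so it appears as an ordinary directory
# (requires the cifs-utils package on most distributions).
sudo mount -t cifs //winserver.example.com/projects /mnt/projects \
    -o username=jdoe,domain=EXAMPLE
```

If either step fails in the pilot, better to find out now than after a full rollout.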

Any situation that calls for two or more operating systems to be in the same environment means you need to worry about interoperability. That leads to the next common mistake…

Authentication Silos

The new Linux systems are running great, users love them, management is thrilled — except for the part about having to maintain one set of user credentials for Windows systems under Active Directory, and another set of credentials for Linux.

This goes along with interoperability — make sure that your organization can deliver Single Sign-On (SSO). Whether that means setting up LDAP for the entire organization, or configuring your Linux systems to work with Microsoft’s Active Directory, don’t burden your users with two (or more) sets of credentials to do their job.
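On most modern distributions, realmd and SSSD handle the Active Directory side of this. A minimal sketch, assuming a domain named example.com and a privileged AD account:

```shell
# See what the domain offers and which packages are required to join it.
realm discover example.com

# Join the domain (prompts for the AD account's password).
sudo realm join --user=Administrator example.com

# Verify that AD users now resolve on the Linux box.
id jdoe@example.com
```

Once joined, users log in to their Linux machines with the same credentials they already use on Windows.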

Out with the Old, In with the New

You’ve heard the phrase “if it’s not broken, don’t fix it”? That really applies when it comes to IT projects. One of the biggest mistakes that IT can make when it comes to Linux (or, honestly, any solution) is to let enthusiasm for a new platform or solution lead to unnecessary disruption.

In this case, that means replacing existing infrastructure that works with something new. Many organizations suffer with Microsoft Exchange, for example. If users and management want to kick Exchange to the curb, then do so — but if the users in the organization are by and large happy with Exchange, then leave it in place unless there’s a compelling reason to move to something new.

This doesn’t mean that your organization should be trapped on a legacy platform forever, of course. At some point, support for Windows XP ends. At some point, you have to replace the legacy UNIX systems that go out of support. That’s typically the best time to make a break — but ripping and replacing a well-functioning system just to deploy a Linux/FOSS solution is a bad idea.

Document, Document, Document

It’s not enough to deploy a solid solution that you, or the existing team, understand. You have to plan for vacations, career changes, and other contingencies that mean someone else will have to administer the systems you’ve tended.

This means that when systems are set up, you need to document what you’ve done, how you’ve done it, etc. If you’ve had the experience of trying to maintain legacy systems that will be replaced with Linux, you’ve no doubt faced installations that are baffling because the previous admin or admin team set up a custom system with minimal documentation, or none at all.

Make sure that part of any Linux deployment (or any deployment, really) is adequate documentation of the system from top to bottom. Anything that a replacement admin would need to know to update the system, add users, configure services, etc. should be well-documented. If this is documented elsewhere (e.g. online documentation from the vendor) make sure that you have pointers to the external docs. Better yet, make a local copy in case the online docs change, are moved, or disappear altogether.
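For the local-copy part, wget can mirror a documentation site in one command. The URL below is a placeholder for whatever vendor docs you rely on:

```shell
# Mirror a documentation site for offline reference (hypothetical URL).
# --convert-links rewrites links so the copy browses correctly offline;
# --page-requisites pulls in images and stylesheets;
# --no-parent keeps the crawl from wandering above the starting directory.
wget --mirror --convert-links --page-requisites --no-parent \
    https://docs.example.com/product/
```

Re-run it occasionally so the local copy tracks upstream changes.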


Most of the above should be common sense — yet many organizations make some, if not all, of these mistakes when deploying Linux. With a bit of careful planning, though, you can avoid all of these.

Have other deployment mistakes we should be aware of? Let us know in the comments!