The "No Network is 100% Secure" series
- IT Best Practices -
A White Paper


All rights reserved - may not be copied without permission
Easyrider LAN Pro, NOC Design Consultants


What are "best practices"?: The use and application of so-called "best practices" is a frequently misunderstood concept. Many IT managers use the term but not all understand what the "best practice" methodology is all about. There is general agreement that "best practices" are good, though. Some history:

Back in the early days, IT was like the old wild west... Systems Administrators were often geeky autocrats who set arbitrary rules and procedures (or had none at all) and pretty much ran the computing environment however they saw fit. Sometimes this worked out fairly well. And other times you were tripping over patch cords in the data closet because the network guy had never heard of cable management. This is level zero (chaos mode) in the Information Technology Service Management (ITSM) model.

Many IT best practices have been borrowed from well-defined software development standards and processes. These methodologies have been enhanced to include server, switch/router and user management, to name just a few areas. Common best practices include:

Iterative processing - A repetitive methodology, where changes progress in incremental stages. This helps to maintain a focus on manageable tasks and ensures that earlier stages are successful before the later stages are attempted.

Requirements management - addresses the problem of creeping requirements, a situation in which users or management request additional changes to a service or project that are beyond the scope and/or budget of what was originally planned. To guard against this common phenomenon, requirements management employs strategies such as documentation of requirements, peer reviews, sign-offs, and methodologies such as use case (project goal) descriptions.

Change control - a strategy that seeks to closely monitor changes throughout the iterative process to ensure that unacceptable and/or unexpected changes are not made.

Quality control - a strategy that defines objective measures for assessing quality throughout the process in terms of service delivery functionality, reliability, and performance.

Why best practices are not more widely adopted - The three main barriers to adoption of a best practice are a lack of knowledge about current best practices, a lack of motivation to make the changes involved in their adoption, and a lack of the knowledge and skills required to do so.


Sarbanes-Oxley, HIPAA, etc. compliance

Easyrider LAN Pro offers security and best practice seminars in conjunction with our Partner, Tektal LLC. The question arose: Do compliance frameworks like Sarbanes-Oxley, HIPAA and others establish guidelines for network security that touch upon points addressed in the seminar series? In other words, if a manager sees our seminar invitation and says to themselves, "I know I am secure because we are SOX compliant (having been through an audit), therefore I do not need to attend this seminar," is this in fact true?

The short answer is: In my opinion, there is no such thing as a "100% secure" data center or computing enterprise. Even DoD and DoE sites have to deal with security issues, vulnerabilities and potential exploits on a regular basis. Yet these sites are often extremely well hardened and are usually tightly managed. So my answer would be that any manager who believes that their network is 100% secure is probably more in need of attending our seminar than they realize.

This statement is in no way intended to minimize the value of obtaining any compliance audit certification, including SOX. Sarbanes-Oxley and HIPAA are legislative initiatives created by the US Congress, aimed essentially at protecting patient data and other sensitive or proprietary information, as well as financial disclosures that relate to Securities and Exchange Commission (SEC) matters involving publicly traded company stock values. A good deal of these regulations has to do with ethical standards, policies, procedures, disclosures, reporting requirements and so on rather than with data integrity per se. In fact, none of the eleven Sarbanes-Oxley titles that describe specific mandates and requirements for financial reporting has anything at all to do with computing enterprise security or best practices.

It would certainly be reasonable to assume that a data center that successfully completed one of these audits may well be at least somewhat more secure than a site that didn't. But I hope by now, the alert white paper reader has observed that concepts such as "secure" are relative and arbitrary to say the least. The quest for enterprise security and optimal service delivery performance is a never ending journey. Some IT organizations are well along on the path. Others have barely taken the first steps. However.... If you attend our free seminar and do not feel that you have learned anything of value, we do offer a money back guarantee :)


IT Service Management Best Practices

Goal - To mature the IT methodology so that service delivery is no longer provided in an ad hoc, unpredictable and undocumented manner. Essentially, to move the organization out of chaos mode. Maturity steps:

Reactive mode - IT organization operates in "fire fighting" mode, reacting to outages and asynchronous complaints by users. This is essentially a "best effort" service delivery scheme. There may or may not be some level of problem management processes (i.e. trouble tickets) and reactive (up/down) monitoring in place.

Proactive mode - Mature asset management and change control processes are in place. Significant levels of automation exist and service delivery performance is being actively monitored. Thresholds have been set and trends are being analyzed so that potential problems can be predicted and prevented before they become service interruptions (a minimal illustration of this kind of threshold check appears just after the maturity steps below).

Service mode - IT organization operates as a business, not as overhead. Services are defined, classes of service are established and the cost and pricing for these services is understood and in place. Service delivery quality standards have been documented. Service level agreement (SLA) guarantees and the metrics to monitor compliance are in place. Capacity planning and comprehensive, proactive monitoring are standard.

Value mode - This stage is essentially operating IT as a profit motivated business, with everything (business processes/planning) that this entails.
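
To make the proactive mode idea concrete, here is a minimal sketch of a threshold check written in Python. The /var path and the 85% threshold are arbitrary examples, not recommendations; the point is simply that a scheduled check warns you while there is still time to act, rather than generating an outage ticket after the file system fills up.

    import shutil

    # Arbitrary example values - pick the paths and thresholds that matter to you.
    THRESHOLD_PERCENT = 85
    PATH = "/var"

    usage = shutil.disk_usage(PATH)            # total, used and free bytes
    percent_used = usage.used * 100 / usage.total

    if percent_used >= THRESHOLD_PERCENT:
        print(f"WARNING: {PATH} is {percent_used:.0f}% full - investigate now")
    else:
        print(f"{PATH} is {percent_used:.0f}% full - OK")

Run something like this from cron and feed the warnings into your trouble ticket or paging system, and you have the beginnings of threshold-based, proactive monitoring.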


Easy, inexpensive-to-implement Best Practices

Very few IT organizations are running at level four of the ITSM maturity model, and those that are probably aren't visiting our little web pages. If you're still spending your time running around dealing with flames shooting out of manhole covers, here are a few quick, easy things you can do to tighten up your environment and maybe get a night's sleep without the duty pager going off at 02:00:

Many IT organizations benefit from having their operations and methodologies reviewed by an independent resource such as Easyrider LAN Pro. We have no political axes to grind and no organizational turf to protect. We are not Resellers so we have no financial incentive to "recommend" (i.e. sell you) expensive technology solutions. We are often invited to perform independent data center reviews and audits specifically because our findings will be unbiased and not self serving. And we pretty much always find at least a few situations that are in need of attention. In many cases, we find a LOT of risks, vulnerabilities and potential disasters that are just waiting to happen. If we can increase your uptime by even one "9", you'll probably agree that our consulting services are well worth the money spent. Even enterprises that are being proactively monitored using professional grade software frequently have issues and problems that the NOC Techs don't see, for whatever reasons. So just imagine what we might find in your network if you DON'T have a professionally staffed NOC in place!

Low hanging fruit - The white papers in this series contain lots of recommendations and ideas where your computing environment might benefit. Our white paper suggestions are intentionally easy and inexpensive to implement. There's really not much sense in spending a pile of dough to deploy major project initiatives if you haven't first gotten the easy stuff out of the way, right? High value initiatives don't always have to be high dollar undertakings.

Application management - If you spend any time at all reviewing firewall logs, you'll notice that port scanning is being done relentlessly. Unless your network is locked down tight, a simple Nessus or even an Nmap scan will provide piles of information about your infrastructure and clues on the best way to exploit vulnerabilities. So the first step is to shut down unneeded services and to block any port queries originating from anyone other than authorized users. The second step is to make sure that you don't provide useful hacking information for services that can't be blocked. For example, a port scan will provide the IP addresses for servers supporting SMTP, POP, IMAP, Apache, IIS and so on. And "out of the box", these applications are configured to provide WAY too much information to curious hackers. Configure your apps so that if someone does a telnet smtp.yourcompany.com 25, the software returns something like "Water Pump Meter #5", not the software name, version number and so forth that is responding to the port request. If you tell a hacker that you are running Apache 2.0.48, they automatically know that the server is a *NIX machine and that they should focus any crack attempts on *NIX/Apache vulnerabilities. They may break in with or without this information, but why make it easy for the criminals?
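
A quick way to see what you are currently advertising is to grab your own banners the same way a scanner would. The sketch below is a minimal Python example; the hostname is a placeholder, so point it at your own mail, web or other servers.

    import socket

    # Placeholder target - substitute one of your own servers and ports.
    HOST = "smtp.yourcompany.com"
    PORT = 25

    # Open a TCP connection and print whatever greeting the service sends,
    # which is exactly what a port scanner or curious hacker will see.
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        banner = sock.recv(1024).decode(errors="replace").strip()

    print(f"{HOST}:{PORT} says: {banner}")

If the output names the product and version, that is the information to trim down or replace in the application's configuration.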

Password management - There are many schools of thought here. Fundamentally, the best way to prevent hackers from causing problems is to not give them any access at all to anything, including information about your equipment. In addition to defending against buffer overruns and other application-level exploits, hackers also seek to gain entry into your domain by exploiting weak user passwords.

Some IT organizations require ridiculously (IMO) complex passwords and require that users change their passwords frequently. In my opinion, this is a mistake because it encourages users to write down their passwords... typically taping them to the bottom of their keyboard or putting them on a piece of paper in the top middle drawer of their desk. This will keep out Third World Country Hackers, but not all security threats come from people in faraway places.

A better approach might be to require strong but easy-to-remember passwords. Something like 04July1970, which is my son's birthday (actually, it's not, but it makes a useful example). In this example, I have selected a password that is very difficult to guess but easy to remember. It is highly unlikely that even a sustained brute force password attack will ever be successful. You also might want to require user login names that would be difficult to guess. For obvious reasons, I would not use my own birthday for this type of password. Joseph.Smith would be a reasonably secure login name. Joe or JoeS, probably not so much.
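
As a rough illustration of the kind of rule being described (this is a sketch of my own, not a formal policy; the length threshold and word list are arbitrary), a check along these lines rejects short or trivially guessable strings while still accepting memorable passphrases like 04July1970:

    import re

    # A few obviously weak choices; a real policy would use a proper dictionary.
    COMMON_WORDS = {"password", "letmein", "qwerty", "welcome"}

    def acceptable(password: str, login: str) -> bool:
        """Rough check for a 'strong but memorable' password such as 04July1970."""
        if len(password) < 10:
            return False                        # long passphrases beat short random strings
        if password.lower() in COMMON_WORDS:
            return False                        # trivially guessable
        if login.lower() in password.lower():
            return False                        # don't embed your own login name
        has_letters = re.search(r"[A-Za-z]", password) is not None
        has_digits = re.search(r"\d", password) is not None
        return has_letters and has_digits       # mix at least two character classes

    print(acceptable("04July1970", "Joseph.Smith"))   # True
    print(acceptable("joe123", "joe"))                # False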

I typically do not enforce changing passwords every few weeks. If a password has not been compromised, I see no value in changing it. You do, however, want to have a written procedure for provisioning new users that can be performed in reverse when they leave the company. One would also not want to use the same password for everything. I trust my ISP, but it is just not very smart to use the same password for my ISP account that I use as the root password on my web server. On many Internet web servers, the password you select is readily available for viewing by others. Do not use your company work login password(s) when creating accounts on potentially untrustworthy sites.

I generally review user password strength by running a cracker program against the password file (as root/Administrator). If the cracker program is able to figure out someone's password, I reset it and require the user to change it the next time they log in. Eventually, they will get tired of this and will enter a password that the cracker program cannot guess.
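
For readers who want to try this themselves, here is a bare-bones sketch of the mechanism using Python's Unix-only crypt module (deprecated in recent Python releases). The account names, hashes and word list are all placeholders; a real audit should use a dedicated cracker with a large dictionary.

    import crypt   # Unix-only standard library module; deprecated in newer Pythons

    # Placeholder shadow entries: login name -> full crypt()-style password hash.
    shadow_entries = {
        "jsmith": "$6$examplesalt$................",   # not a real hash
    }

    # A tiny word list; a real audit would read a large dictionary file.
    wordlist = ["password", "letmein", "Summer2009", "jsmith"]

    for login, stored_hash in shadow_entries.items():
        for guess in wordlist:
            # crypt() reuses the salt and algorithm embedded in the stored hash,
            # so an identical result means the guess is the user's password.
            if crypt.crypt(guess, stored_hash) == stored_hash:
                print(f"Weak password for {login} - reset it and force a change at next login")
                break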

I never allow root logins anywhere except at the console. And the console is typically in a locked, limited-access data closet. If nothing else, if something bad happens there is a user audit trail, so I can see who was working as root and may have caused the problem. I am also a strong believer in sudo versus root for most non-SysAdmin users (e.g. the Webmaster).
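
On Linux/OpenSSH systems, a quick sanity check along these lines confirms that sshd is not quietly accepting remote root logins. This is only a sketch; the path and directive shown are the usual OpenSSH defaults, so adjust for your own platform.

    # Sketch: confirm that OpenSSH is refusing remote root logins.
    CONFIG = "/etc/ssh/sshd_config"   # usual default location; adjust as needed

    with open(CONFIG) as f:
        lines = [line.split("#", 1)[0].strip() for line in f]   # drop comments

    root_lines = [l for l in lines if l.lower().startswith("permitrootlogin")]
    if any(l.split()[-1].lower() == "no" for l in root_lines):
        print("PermitRootLogin no - direct root logins over ssh are disabled")
    else:
        print("WARNING: sshd may still be accepting direct root logins")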

In addition, resist all urges to allow applications to run with the root UID. If a Hacker is able to obtain a shell, you want it to be a mortal user shell, not a root shell!
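
A simple periodic audit makes this easy to verify. The sketch below is Linux-specific (it assumes the /proc layout) and simply lists every process whose real UID is 0; anything in that list that is not a core system daemon deserves a hard look.

    import os

    # Linux-specific sketch: list processes running with UID 0 (root).
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                fields = dict(line.split(":\t", 1) for line in f if ":\t" in line)
            real_uid = fields.get("Uid", "").split()[0]   # first value is the real UID
            name = fields.get("Name", "?").strip()
            if real_uid == "0":
                print(f"PID {pid:>6} is running as root: {name}")
        except (OSError, IndexError):
            continue   # process exited or is unreadable; skip it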

Tighten your file system access rights. Only directories and files that need to be world-readable or world-writable should be set as such. You don't see this much any more, but .rhosts, hosts.equiv and similar trust files should not be used. I prefer a "trust no one, trusted by no one" security model.
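
Finding over-permissive files is easy to automate. Here is a minimal sketch (Unix-style permissions assumed; the starting directory is an arbitrary example) that flags anything world-writable:

    import os
    import stat

    ROOT = "/var/www"   # arbitrary starting point - pick the trees you care about

    # Walk the tree and flag any file or directory that is world-writable.
    for dirpath, dirnames, filenames in os.walk(ROOT):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue                      # vanished or unreadable; skip
            if mode & stat.S_IWOTH:
                print(f"World-writable: {path}")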

Subscribing to the CERT bulletin list and implementing kernel and application patches to plug vulnerabilities is a no-brainer. However, I tend to be slow to install patches if they only contain new features or if they only fix bugs that I don't care about. Applying patches as soon as they are released can become a never-ending treadmill that is painful to get off of.

Any more free, easy to implement suggestions? - Lots! But in the interest of keeping this white paper down to a reasonable length we have to pull the plug here. You'll just have to hire us to come in and talk to you about the many other painless things you can do to keep bad people out and to keep services humming along reliably.

Next in the security white paper series:

Firewall White Paper
Virus White Paper
SPAM White Paper
Denial of Service (DoS) White Paper
Trojan Virus Attacks White Paper
Port Scanning White Paper
Shelfware White Paper



Last modified March 25, 2009
Copyright 1990-2009 Easyrider LAN Pro