Presented at ISSS EXPO93, November, 1993, Washington D.C.


The Development of a Proactive Network Security System

Mr. Jeff Humphrey and Dr. Bruce Gabrielson
Kaman Sciences Corp.
Alexandria, VA

in Association with

Naval Research Laboratory

Network Security Issues

ADP Security Groups are quickly becoming a major force in most technical organizations with networked computer systems. The number of connections to on-site LANs (and the number of LANs themselves) is expanding, as is the number of sites with gateways to the Internet. Closely following the growth in network connections is the incidence of cracking and hacking, and the often expensive cost of cleanup.

Two issues drive the need for protecting an organization's network. First, sensitive and proprietary information is often contained in the on-site local area networked computing base. The Computer Security Act requires that personal information be protected from unauthorized access, and Government organizations should be in full compliance with this privacy requirement. In addition, these organizations have issued instructions regarding the protection of their sensitive, but unclassified, information.

Commercial organizations have similar requirements, especially if they wish to stay in business very long. Trade secrets and market edge information are essential if a business is to remain competitive.

The other problem relates to routine data protection. The down time required to bring an organization's network back on-line, or simply to recover data after a virus attack, can run to millions of dollars. Costs can also be high if certain types of data are manipulated to misrepresent the actual information.

With information resources so important, any organization with a networked computer should require that something be done about ADP security protection.

Cost Control in an Expanding Environment

As more ADP computing resources are added, the job of maintaining control grows. However, budgets for security functions are shrinking, and limited personnel are available to address the multitude of problems. Businesses and Government organizations treat ADP security as an overhead cost center, making it a primary target of budget cuts. Although staff are needed, it is increasingly difficult to hire them regardless of a growing workload.

The job of simply identifying intrusions and network weaknesses gets more difficult as sophisticated problems become much less conspicuous. To address this issue, the training and technical background required of security personnel have become increasingly complex.

In an environment with many networks and network types, our own organization has found that we can no longer simply follow guidelines provided by others. Such guidelines cannot tell the ADP security manager for a particular LAN all the problems or fixes possible for his or her machines, or even which known generic network problems are or are not directly applicable. In most cases there are specific problems for specific applications, each requiring a unique fix.

Simple network administration doesn't work anymore: we must now know when a sophisticated attack occurs, what was tampered with, and how to prevent the problem in the future. Facing increasingly complex threats and the possibility of decreased resources, Kaman Sciences, in association with the Naval Research Laboratory, embarked on an ambitious program to create a system that would allow both better control over computing resources and increased efficiency in correcting security problems in multi-networked environments.

Identification of Possible Approaches

Since many security restrictions were already imposed, any approach or change to the existing networks had to meet pre-specified criteria. What we needed was a multi-faceted approach that was both technical, network-management oriented, and also built on our existing formal individual-machine security compliance.

A sampling of the types of approaches considered is provided below:

1. Minimize the number of interconnections allowed by users or even isolate our net. This was considered to be a non-productive approach.

2. Implement COMSEC on all interfaces and stored sensitive data. This could also be a non-productive approach and would increase the users' overhead burden where such protection was not warranted.

3. Allow the use of only Trusted Systems with tightly controlled COMSEC gateways. This approach was expensive and also considered non-productive.

4. Bring in outside experts on interconnections for each existing network type to ensure complete security at each stage of development. In addition to being an expensive solution over the long term, the major problem with this approach was that even experts know only what they have been exposed to, and our conditions keep changing.

5. Develop an in-house methodology for quickly and efficiently testing our various systems for security holes. There are initial up-front costs and continuing upgrade costs with this approach, so a cost/benefit analysis would be required.

6. Buy an existing or customized security protection package for each network, or hire a contractor (higher initial costs plus continuing costs) to develop one. In our case, this approach would require constant upgrading as new conditions evolved.

7. Simply ignoring problems did not satisfy the requirements imposed.

Of the approaches considered, item 5 was determined to be the best for our particular needs. If a successful method was developed, our efficiency would increase, we could control our costs, and we could continue to maintain control over our network security needs by allowing the basic system to evolve as those needs evolved.

Proactive Testing to the Rescue

The question we faced was what methodology could work in an automated or semi-automated network environment to test our networks. When we considered both active and passive security techniques that could be applied to networks, one method appeared most likely to meet our needs. We decided on Proactive Testing because:

1. We needed to find and fix problems in our own systems quickly.

2. The technique could support system growth without additional staffing.

3. It allows emerging technologies to be addressed once the initial shell is in place. Later versions could become more sophisticated as the technology becomes more sophisticated.

4. Implementation would allow judgement criteria to be applied to before-and-after conditions and incidents.

5. The capability could easily be built in to make the system very useful for audits.

Specifics of Proactive/Reactive Approaches

Security software generally falls into one of two categories, reactive[1] or proactive[2]. The most common and easiest method to use is reactive testing, while proactive software is just beginning to emerge as actual applications products.

Reactive computer security is often software set up to monitor traffic and connections, keep audit trails, and generally help `react' to cracking attempts. The goal of reactive security is to give the system administrator enough recorded or real-time information to clean up after a system attack or avoid it altogether. Examples of reactive security services include network sniffers, C2 audit trails, and network daemon[3] connection loggers.
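The connection-logger idea can be sketched in a few lines. This is a minimal stand-in, not any particular product: it listens on one TCP port and records who connected and when, whereas the daemon loggers described here wrapped real network services. The port number and log format are invented for illustration.

```python
# Minimal sketch of a reactive connection logger (illustrative only).
import datetime
import socket

def log_connections(port, max_events, log):
    """Accept up to max_events connections on `port`, recording each one."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", port))
    server.listen(5)
    for _ in range(max_events):
        conn, (host, rport) = server.accept()
        # Record who connected and when -- the who-did-what-when core
        # of reactive security.
        log.append("%s connect from %s" %
                   (datetime.datetime.now().isoformat(), host))
        conn.close()
    server.close()
    return log
```

A real logger would run indefinitely and pass the connection on to the wrapped service; the fixed event count here just keeps the sketch finite.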

Proactive security actively deals with the conditions and environment the computer system is operating in. A proactive approach to local system security would check default setup of system files, possibly try to crack encrypted user passwords, check the setup of user accounts, etc. A second proactive approach checks remote system security. To check remote systems, the proactive software attacks the computer system from the outside (as a system cracker would) in order to guarantee that the system is free of known security holes. This also has the effect of checking the network's intrusion detection capabilities.

Remote proactive security testing is a process whereby computer security personnel attempt to gain access to remote systems under their control. The process is much more involved than simply a sweep for information. The purpose is to uncover known system problems that were left uncorrected or identify new problems that were previously unknown. In using a proactive approach, the security office runs a series of tests against remote machines with the purpose of discovering known holes that have not been corrected.

Proactive testing benefits the system administrator in that it gives him information he may not have discovered by looking at system files. Computer systems look very different from the outside, and a new perspective goes a long way toward clearing up problems, especially when that perspective is the same as the enemy's. An effective proactive testing program also gives the remote system administrator a feeling of security once testing is no longer effective against his or her machines.

The proactive security methodology is in reality an overall program which encompasses many aspects of security in addition to network testing. The intent is to develop and apply a total network security program which will in turn lead to a well protected network. A typical proactive program would include the areas identified in Figure 1.


Figure 1 - Some Major Aspects of a Good Proactive Computer Security Program

Below are listed two typical examples of a proactive test scenario.

1) A system administrator downloads a public domain password cracker for his UNIX system. The goal of the password cracker is to break as many encrypted passwords as it can so that the system administrator can have the machine's users change easily broken ones.
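The dictionary-attack idea behind such crackers can be sketched as follows. Period tools worked against the UNIX crypt(3) DES hash; a SHA-256 hash stands in here so the sketch stays portable, and the account names and wordlist are invented for illustration.

```python
# Sketch of an offline dictionary password check (SHA-256 stands in
# for the crypt(3) hashes real crackers attacked).
import hashlib

def crack(shadow, wordlist):
    """Return {user: guessed_password} for every hash matching a wordlist entry."""
    found = {}
    for user, pw_hash in shadow.items():
        for word in wordlist:
            # Hash each candidate word and compare against the stored hash.
            if hashlib.sha256(word.encode()).hexdigest() == pw_hash:
                found[user] = word
                break
    return found
```

Users whose passwords turn up in `found` are the ones the administrator would ask to choose something stronger.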

2) A system administrator wishes to check his filesystem for possible security holes. He or she downloads a public domain security program which then checks for obvious (but often hard to find visually) file permission mistakes.
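The filesystem check in the second scenario can be sketched as a walk of a directory tree flagging world-writable files, one of the classic permission mistakes such programs hunted for. The choice of root directory is up to the administrator.

```python
# Sketch of a proactive filesystem permission check: find files any
# user on the system may write to.
import os
import stat

def world_writable(root):
    """Return paths under `root` whose mode includes the world-write bit."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IWOTH:  # world-writable bit set
                hits.append(path)
    return sorted(hits)
```

A fuller tool would also check setuid bits, ownership, and directory permissions, but the pattern is the same.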

Characteristics of the Ice-Pick Program

The proactive package developed by Mr. Jeff Humphrey of Kaman Sciences to meet NRL's demanding requirements is called Ice-Pick. The Ice-Pick package is a window driven program that provides a multi-layered approach to network testing. As previously mentioned, the project was initiated to increase staff efficiency in isolating and correcting network vulnerabilities. The original goal of Ice-Pick was to develop an interactive cracker tool that could be used to identify frequently exploited security problems present on well known UNIX based operating systems.

Both passive and active testing is used to collect very specific information about each of the computer systems connected to the network under test. This information is used to determine the vulnerability of a network to a directed attack.

The Ice-Pick interface is written in "X" on a UNIX-based workstation. This interface can be used to map entire networks, machines, gateways, and the links between them. This format allows a quick point-and-shoot method of testing for over 13 different major test categories. Target identification and verification is done through architecturally specific signatures present in information returned from Ice-Pick queries. A typical Ice-Pick type attack scenario is shown in Figure 2.


Figure 2 - Typical Ice-Pick Attack Scenario

System targeting: This is the process by which the program allows a user to gather information about potential targets on the network. There are a number of methods which can be used to perform this function.

Adding a single host: This method of targeting allows the user of the Ice-Pick program to add a single machine to the network mapping interface by either internet number or by internet name.

Adding hosts from a preformatted file: This method allows the user to input potential targets into the interface by reading in a preformatted list of targets. This list can be a file similar to the UNIX system file (/etc/hosts) or can be the output of a command like (nslookup).
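The preformatted-file input can be sketched as a small parser for /etc/hosts-style text: comments and blank lines are dropped, and each remaining line yields an (address, name) target. Aliases beyond the first name are ignored here for simplicity.

```python
# Sketch of reading targets from an /etc/hosts-style file.
def parse_host_file(text):
    """Parse /etc/hosts-style text into a list of (address, name) targets."""
    targets = []
    for line in text.splitlines():
        # Strip trailing comments and surrounding whitespace.
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        fields = line.split()
        if len(fields) >= 2:
            targets.append((fields[0], fields[1]))
    return targets
```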

Zone transfer domain from nameserver: This method of targeting allows the user to enter a host name and a domain name which the program will use to contact a remote nameserver on the network. Once contacted, all known hosts on the remote machine are transferred to the Ice-Pick program and integrated into the mapping interface.

Network scan tool: The Scan Tool is an integrated part of the Ice-Pick interface which allows a user to send ICMP ECHO packets to ranges of addresses on a remote network in order to find machines which are 'alive'. This approach is somewhat time consuming as well as being rather network intensive, so it is normally used as a last resort to find targetable systems. This method is mostly useful for finding hard to locate hosts inside of a Class-C address space.
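The sweep idea can be sketched as follows. The Scan Tool probed with ECHO packets, but raw ICMP sockets require privileges, so a TCP connect probe stands in here, and a range of ports on one host stands in for a range of addresses; the probe-and-record loop is the same.

```python
# Sketch of a sweep: probe each candidate, record the ones that answer.
import socket

def sweep(host, ports, timeout=0.25):
    """Probe each port on `host`; return those that accept a TCP connection."""
    alive = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                alive.append(port)
        except OSError:
            pass  # no answer: closed, filtered, or timed out
    return alive
```

The network-intensive character noted above falls directly out of this loop: every candidate gets its own probe, and silent targets cost a full timeout each.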

There is one more way in which hosts are added to the network mapping interface: by way of the mapping software itself. By default, when a new system or address is added to the mapping interface, its location on the network is automatically verified by a tracing routine embedded in the code. This tracing algorithm often finds new hosts (gateways) which lie between the tester and the target system. These systems are added to the available pool of testable systems automatically and need not be known to the user of the Ice-Pick program.

Test selection is the most important part of any remotely proactive tool. This is the phase of program operation in which the user decides what security holes are going to be tested on the remote system or network, and in which order they will be used. Inside the Ice-Pick interface, tests are selected by using the mouse to move test blocks into a row in the order they are to be executed. The priority is up to the program's user.

The many tests currently available in the Ice-Pick user's arsenal include:

TFTP (Trivial File Transfer Protocol): checks whether the TFTP daemon allows unrestricted file retrieval, such as fetching the system password file.
Mountable Filesystems: checks for NFS filesystems exported so that arbitrary remote hosts can mount them.
X-Display: checks whether the X server accepts connections from any host, exposing keystrokes and screen contents.
REXD (Remote Execution Daemon): checks for a running rexd service, which can execute remote commands with little or no authentication.
Finger: checks whether the finger daemon reveals account and login information useful to an attacker.
Sendmail: checks the mail daemon for well-known vulnerabilities and misconfigured debugging options.

There are also a number of tests which require user participation in their execution. One such test is the 'guess' test, in which the user can add possible 'username/password' combinations to the suite of couples to be guessed at.
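The 'guess' test can be sketched as a loop over the user-supplied couples. The `authenticate` callback here is a hypothetical stand-in for whatever login service the real tool drove; the account data in the usage below is invented.

```python
# Sketch of the user-assisted 'guess' test.
def guess_test(couples, authenticate):
    """Try each (username, password) couple; return the ones that succeed."""
    return [(user, pw) for (user, pw) in couples if authenticate(user, pw)]
```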

Remote system testing is the phase in which the user watches the program as it fires tests off at remote systems and networks. Not much participation is required by the user at this point, but a number of visual cues are sent to the interface to facilitate a 'video game' atmosphere in which the Ice-Pick user can simply watch machines get pounded on. Testing can, of course, be stopped at any time.

Report generation is one of the most important phases. During this phase of program execution, after testing has completed, the user has options as to how he/she wishes to view the testing results. The following options are allowed ...

Simple viewing: The user looks at results with the interface but does not actually create a document describing those results.

Network report generation: The user allows the program to generate a report with statistical data for the network as well as summaries of testing results against all hosts on that network.

System report generation: The user allows the tool to create a separate report for each system on the network (normally e-mailed to system administrators from the person in charge of testing in a real facility).

Is the Program Successful?

With Ice-Pick in hand, the system manager knows how vulnerable his system is. In effect, he can launch an offensive attack against one or more of the computer systems on the network, just as any outside hacker could. The difference is that once the network is penetrated, the result is a report on vulnerabilities rather than the loss of data or the insertion of a virus or other malicious code.

Ice-Pick has quickly evolved from its original form and continues to increase its capabilities as a network based security tool. The addition of new tests to the program is unavoidable as ever more complex problems are identified within existing and emerging operating systems on a regular basis.

Ice-Pick has now become more than just a successful test tool at NRL. The overall test program has allowed site security personnel to gain control over their organization's network security issues by centralizing system security testing, greatly increasing staff efficiency, and reducing the costs of both oversight and recovery.

[1] Reactive Security: Reactive computer security deals with who-did-what-when issues as related to local and remote system activity.

[2] Proactive Security: Proactive computer security addresses the need to check the current setup of a system in order to verify that system is secure. A proactive package would look at default conditions as they currently exist on the system and identify problem areas.

[3] Daemon: A program running in the background (or activated by interrupt) which provides a network based service to remote users.