Presented at IA-COP Information Assurance Workshop, Advanced Information Technology Joint Program Office, McLean, VA, November 2-4, 1999

Enhancing Near Real-time Security Using an Intelligent Agent Approach

Dr. Bruce C. Gabrielson, NCE
Bruce.c.Gabrielson@saic.com

Mr. Leonid Kunin
Leonid.Kunin@saic.com

Center For Information Security Technology
Science Applications International Corporation
Columbia, MD

Abstract

This paper describes an approach that uses intelligent agents to enhance security in large networks, whose security is otherwise difficult to manage in the face of changing technologies. Using an agency composed of agents that address near real-time "snapshots" of configuration changes, remote network testing, and remote automated configuration tools, a full near real-time detection and reaction capability becomes possible. SAIC's Trover represents a higher level agent that may achieve this capability.

Introduction

Science Applications International Corporation (SAIC) is a diversified high-technology research and engineering company based in San Diego, California. SAIC offers a broad range of expertise in technology development and analysis, computer system development and integration, technical support services, and computer hardware and software products.

Significant research efforts have been put forth by SAIC and others to identify viable methods for efficient and efficacious security related to information systems. New and innovative measures are required to enable classified clients to use the Internet and still protect critical data, regulate dissemination, guard against malicious behavior, and ensure the integrity of the system without impeding normal activity.

This paper addresses a near real-time security improvement approach for enterprise-wide computing using intelligent agents. Specifically, the paper addresses intelligent agent applications, information assurance problems, the security agent solution (including data mining and trigger detection), and SAIC's Trover agent.

Intelligent Agent Applications

Intelligent agent based Host Security Engineering Tools are aids to the system integrator, helping to ensure that integrated systems are security hardened more consistently. SAIC's approach to supervising and controlling IA tools employed to protect the enterprise-wide environment involves the use of Trover, a fast discovery and evaluation agent that can serve as the initial framework for various application objects.

Frameworks can provide various ways for clients and servers to communicate. The framework provides the foundation for common services such as data collection, information storage, event management, security, user interface, and task automation to organize and integrate individual probes, monitors, and sensors in the enterprise-computing environment. The framework also provides the application programming interfaces and services necessary to partition and distribute applications.

With good communications, frameworks can integrate Commercial-Off-the-Shelf (COTS) and Government-Off-the-Shelf (GOTS) tools for vulnerability assessment, audit monitoring, intrusion detection, and malicious code detection and eradication, combining them into a coherent set of security services.

What Are Intelligent Agents?

An intelligent agent is a system that can perform a task based on learned intelligence or provided rules. It is able to independently evaluate choices without human interaction. When several agents are put together, they form an agency, capable of a combined range of actions. The agency picks the right agent needed to perform a specific task, using the network itself to do the processing. In other words, the agent represents the user, selecting and completing a required task through rule-based criteria while using and interacting with other programs and data.
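
As a minimal illustration of the agency concept, the following Python sketch shows an agency selecting the right agent for a task using simple rule-based criteria. All class, agent, and task names are hypothetical and are not drawn from any system described in this paper:

```python
# Minimal sketch of an agency dispatching tasks to agents by
# rule-based criteria. All names are hypothetical illustrations.

class Agent:
    def __init__(self, name, handles):
        self.name = name
        self.handles = set(handles)   # task types this agent can perform

    def can_handle(self, task_type):
        return task_type in self.handles

    def perform(self, task_type, payload):
        return f"{self.name} handled {task_type}: {payload}"

class Agency:
    """Holds a pool of agents and picks the right one for each task."""
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, task_type, payload):
        for agent in self.agents:
            if agent.can_handle(task_type):
                return agent.perform(task_type, payload)
        raise LookupError(f"no agent can handle {task_type}")

agency = Agency([
    Agent("scanner", ["port_scan"]),
    Agent("auditor", ["log_review"]),
])
print(agency.dispatch("port_scan", "host 10.0.0.5"))
```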

Agents are a natural extension of complementary, evolving technologies. As those technologies evolve, so too will the corresponding capability of the associated agents to deliver them; data mining, discussed later in this paper, is one such supporting technology.

Agents share a common architecture, shown in Figure 1 below. The knowledge base contains the knowledge that has been generated as well as the rules being followed. Libraries contain information the agent has identified. Application objects are the resources available to the agent (test tools in our case). The adapter serves as the standardized interface for the tools. The views define what the agent is capable of delivering to its user.

Figure 1 - Common Agent Architecture
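
To make the architecture of Figure 1 concrete, here is a hedged Python skeleton mapping each component named above to a class attribute. The structure is an interpretation of the figure, not code from any actual agent:

```python
# Skeleton mirroring the common agent architecture of Figure 1.
# Component names follow the text; the implementation is illustrative only.

class Adapter:
    """Standardized interface wrapped around each heterogeneous tool."""
    def __init__(self, tool):
        self.tool = tool

    def invoke(self, *args):
        return self.tool(*args)

class CommonAgent:
    def __init__(self):
        self.knowledge_base = {"rules": [], "facts": []}  # knowledge + rules
        self.libraries = []            # information the agent has identified
        self.application_objects = {}  # available resources, e.g. test tools
        self.views = {}                # what the agent can deliver to its user

    def register_tool(self, name, tool, adapter=Adapter):
        self.application_objects[name] = adapter(tool)

    def run_tool(self, name, *args):
        result = self.application_objects[name].invoke(*args)
        self.libraries.append(result)  # remember what was found
        return result

agent = CommonAgent()
agent.register_tool("ping", lambda host: f"{host} responds")
print(agent.run_tool("ping", "10.0.0.1"))
```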

Information Assurance Problems

In the real world there are practical problems with information assurance. Technology advances result in equipment and configuration changes that are increasingly difficult to follow in real time. Available test tools that support information assurance are usually intended either to monitor for attacks or to proactively attack network vulnerabilities. These tools normally involve human intervention, are comprehensive only to some degree, take time to run, and can generate large data files. The central problem with current information assurance approaches is getting the needed knowledge to the right individual so that the required range of protection activities can be carried out when attacks, or even initial vulnerabilities, are first detected.

The Security Agent Solution

Any security agent that can solve this information assurance problem set will have needs determined by the job it is doing. The primary needs of a security agent relate to information, reaction, and control:

 

- Control information: threshold determinations, chain of command, IW threat condition, extent of problem/operational condition, course of reporting action in response to an identified threat (immediate vs. controlled), control decisions, and the capability to initiate local/remote reaction tools.

- Control tied to the lowest level in order to provide immediate and direct control.

- Reaction: the agent must be positioned to quickly detect or re-detect subsequent vulnerability information, and either react with vulnerability responses or send requests to a higher level authority.

- Information: standard operating procedures, firewall information, a network map (routers, hubs, etc.), and threat assessments.

Reaction and interpretation represent the most difficult challenge. For the security agent to work, it needs rules to follow that depend on where in the infrastructure the agent will operate and what we want it to do. We want the highest level agents to interpret only what is going on at their own level or at levels below them, so they can follow reaction rules appropriate to their level. Without this level of abstraction, any agent could be overloaded with too much information. Controls must therefore be in place to support the reaction rules decided on at the proper level of interpretation.
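
A hypothetical sketch of such level-scoped reaction rules, assuming each event carries the hierarchy level at which it originated, might look like the following. The level names, rules, and severity scores are invented for illustration:

```python
# Sketch: agents interpret only events at or below their own level,
# applying reaction rules defined for that level. Names are hypothetical.

RULES = {
    "enclave": lambda e: "isolate host" if e["severity"] >= 8 else "log",
    "site":    lambda e: "notify admin" if e["severity"] >= 5 else "log",
}

LEVELS = ["enclave", "site", "region"]  # lowest to highest

def react(agent_level, event):
    # Ignore events from above this agent's level to avoid overload.
    if LEVELS.index(event["level"]) > LEVELS.index(agent_level):
        return "out of scope"
    rule = RULES.get(agent_level, lambda e: "escalate")
    return rule(event)

print(react("site", {"level": "enclave", "severity": 6}))  # notify admin
```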

A related problem is that the distributed, hierarchical control/reporting strategy reflects that an element's authority to directly command and control the examination, monitoring, and analysis of lower-level network assets may diminish with its degree of remoteness in the organizational hierarchy. Achieving this objective therefore implies that a two-tier approach may be necessary: one level that manages assets directly, and one intended to evaluate the situational responses from lower level assets.

Intelligent agents are a natural approach to solving this problem using their remote programming capability. A common agency design might be developed that can change its structure depending on where it is located and who it needs to talk to in the hierarchy. Using an agent's remote programming (RP) capability for computer-to-computer communications, the agent approach is particularly suited to help filter and take automatic actions at higher levels of abstraction, in addition to detecting and reacting to local patterns in system behavior. The agent framework can include a distributed knowledge base which acts as a repository for reference knowledge on vulnerabilities, attack techniques/signatures, countermeasures, as well as information collected concerning detected intrusions, vulnerabilities, and corrupted software/information.
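
One way to picture the distributed knowledge base described above is as a simple repository of typed records covering both reference knowledge and collected observations. The following dataclass sketch is purely illustrative; the record fields are assumptions, not a published schema:

```python
# Illustrative record types for the distributed knowledge base:
# reference knowledge (vulnerabilities, attack signatures, countermeasures)
# plus collected observations (intrusions, corrupted software/information).

from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    kind: str       # "vulnerability", "attack_signature", "countermeasure", ...
    subject: str    # host, service, or software the entry concerns
    detail: str
    sources: list = field(default_factory=list)  # which agents reported it

class KnowledgeBase:
    def __init__(self):
        self.entries = []

    def add(self, entry):
        self.entries.append(entry)

    def query(self, kind):
        return [e for e in self.entries if e.kind == kind]

kb = KnowledgeBase()
kb.add(KnowledgeEntry("vulnerability", "hostA:ftp", "anonymous login enabled"))
print(len(kb.query("vulnerability")))
```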

Data Mining

One of the problems with generating large amounts of data is how to interpret what is collected. Data mining is an emerging means of finding intelligence when too much information exists. When an intelligent decision is needed, data mining allows automated acquisition, generation and exploitation of knowledge from large volumes of heterogeneous information, including multiple text, video, and audio sources. From available internal data sources and open sources such as the Internet and published works, meta data and meta knowledge can be captured, smart indexing can be performed using heuristics, and knowledge bases can be created that enable concept-based searches through natural language processing. The goal is to be able to turn agents loose to find the necessary information or patterns needed to gain knowledge about the environment or about some specific security related condition. In the case of security needs, the ability to data mine would eventually become one more significant tool in the security framework.
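
As a toy illustration of the indexing step, the sketch below builds an inverted index over text snippets so an agent can locate documents by term. Real heuristic, concept-based indexing is far richer; the document names and contents here are invented:

```python
# Toy inverted index: maps each term to the documents containing it.
# A real data-mining pipeline would add heuristics, metadata capture,
# and natural-language concept matching on top of this basic idea.

from collections import defaultdict

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    "audit1": "failed login from unknown host",
    "audit2": "configuration change on router",
}
index = build_index(docs)
print(sorted(index["login"]))   # ['audit1']
```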

Figure 2 represents the agent structure for the acquisition of knowledge from wherever it may be located. Data can exist in many forms, each of which requires the development of a special agent designed to find, acquire, interpret, and extract the data into a usable and common format.

Figure 2 - Knowledge Acquisition Structure

A general, usable system that provides associative access to all this data must meet certain criteria:

- It must offer a single point of contact that provides uniform access to all the information available on a particular network (such as audit logs) or on the Web.

- It must have reasonable performance in processing user queries.

- It must address issues of scalability in terms of storage requirements at any of the system's distributed components.

- It must address scalability issues in terms of network communications, by efficiently and selectively accessing the large and rapidly growing number of information servers.

- Finally, it must help the user locate relevant information; to do this, the system should provide recommendations for refining user queries and help the user manage and understand the complexity of the information space.

InfoSleuth

InfoSleuth is a multi-use tool that allows complex queries to run against heterogeneous sources that may be located on disparate operating systems. It consists of an agent-based infrastructure that can be used to deploy agents for information gathering and analysis over diverse and dynamic networks of multimedia information sources. It may be viewed as a set of loosely inter-operating, though cooperating, active processes distributed across a network. Agent based data mining approaches, such as the one used by InfoSleuth, have the potential to capture and analyze significant information about a particular condition regardless of where it exists or how it is initially detected.
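
The following sketch shows the general shape of such an agent-based query broker: a single query fanned out to heterogeneous source agents whose results are merged. It is a generic illustration of the pattern, not InfoSleuth's actual interfaces or code:

```python
# Generic broker pattern: one query, many heterogeneous source agents,
# merged results. Illustrative only; this is not InfoSleuth code.

class SourceAgent:
    def __init__(self, name, records):
        self.name = name
        self.records = records  # stands in for a database, log, or web source

    def query(self, predicate):
        return [(self.name, r) for r in self.records if predicate(r)]

class Broker:
    def __init__(self, sources):
        self.sources = sources

    def query(self, predicate):
        results = []
        for source in self.sources:
            results.extend(source.query(predicate))
        return results

broker = Broker([
    SourceAgent("audit_log", ["login failure 10.0.0.9", "login ok 10.0.0.2"]),
    SourceAgent("netflow",   ["port scan from 10.0.0.9"]),
])
print(broker.query(lambda r: "10.0.0.9" in r))
```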

Detecting Triggers

Any network characteristic that could indicate a possible security problem is called a triggering event. A known modification to any portion of the system mission, environment, or architecture that affects the system's overall security posture would be considered a triggering event.

Note that while some of these trigger events obviously indicate a potential security problem requiring immediate action, others might simply indicate that a network change has occurred. Testing experience provides a good feel for the types and severity of changes that may cause a vulnerability problem. Part of the knowledge a security agent should possess is the ability first to identify the trigger, and then to react to the trigger in a way consistent with the threat potential.
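
A minimal sketch of trigger handling could pair each detected trigger with a reaction proportional to its threat potential. The trigger names and severity scores below are hypothetical examples, not values from any deployed system:

```python
# Sketch: map detected triggers to reactions proportional to threat
# potential. Trigger names and severities are hypothetical examples.

TRIGGER_SEVERITY = {
    "new_open_port": 7,
    "new_host_discovered": 4,
    "config_change": 3,
}

def react_to_trigger(trigger):
    severity = TRIGGER_SEVERITY.get(trigger, 5)  # unknown triggers: investigate
    if severity >= 7:
        return "immediate action: alert and retest host"
    if severity >= 4:
        return "queue for vulnerability scan"
    return "record change in knowledge base"

print(react_to_trigger("new_open_port"))
```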

Agency Design Objectives

The following design objectives should provide guidance in developing a realistically usable security agency-based framework:

- The agent will initially interpret policy and response/direct commands for its lowest level of planned deployment.

- Include the capability for manual controls to implement react policy/process.

- The agency's decision process will include an interpreter that provides the ability to direct multiple reactions/responses across various physical/logical interfaces. Should the process require additional information, the interpreter should be able to mine heterogeneous information sources for additional intelligence (see the sketch after this list).

- Control mechanisms should be considered in the interpreter agent that would allow secure (positive) identification of users and would provide configuration information (i.e., higher level control) for eventual higher level operation.
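
A minimal sketch of such an interpreter's decision loop, under the objectives above, might interpret events at its deployment level, honor a manual override, and fall back to mining for missing intelligence before escalating. All names and thresholds are assumptions made for illustration:

```python
# Sketch of an interpreter agent's decision step: manual react policy
# wins, missing intelligence is mined from other sources, and anything
# the agent cannot handle locally is reported upward. Hypothetical names.

def interpret(event, manual_override=None, mine_sources=None):
    if manual_override is not None:
        return manual_override                     # manual react policy wins
    if event.get("severity") is None and mine_sources:
        event["severity"] = mine_sources(event)    # mine for missing intelligence
    if event.get("severity", 0) >= 7:
        return "react locally"
    return "report to higher level"

print(interpret({"severity": 8}))                          # react locally
print(interpret({"host": "10.0.0.3"},
                mine_sources=lambda e: 2))                 # report to higher level
```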

SAIC's Trover

To be effective in a large, ever-changing environment, the primary agent needs to be both fast and focused. It should interpret the initial conditions and then, using its framework of support agents and tools, be able to direct more comprehensive responses. Trover, SAIC's security agent, was designed to achieve this objective. The NT-based version can evaluate 250 machines in approximately 290 seconds, and it has the interfacing structure to allow it to communicate quickly with other agents. Trover's configuration is depicted in Figure 3.


Figure 3 - Trover Components

Trover currently uses a manual control for set-up to determine its initial network boundary conditions. Once conditions are established, the agent runs in the background, performing discovery functions and updating its knowledge base. When an anomaly is identified, Trover's Microsoft Management Console based interface identifies the particular system involved and allows more detailed data review or additional inquiries to take place. Generated results are kept simple using XML documents.
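
The fast discovery loop can be pictured as a thread pool probing many hosts concurrently and collecting per-host results. This sketch uses only the Python standard library with invented host addresses; it illustrates the concurrency pattern and is not Trover's implementation:

```python
# Illustrative concurrent discovery sweep: probe many hosts in parallel
# and collect per-host results. Hosts are invented; not Trover's code.

from concurrent.futures import ThreadPoolExecutor
import socket

def probe(host, port=80, timeout=0.5):
    """Return True if the TCP port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

hosts = [f"10.0.0.{i}" for i in range(1, 21)]
with ThreadPoolExecutor(max_workers=32) as pool:
    results = dict(zip(hosts, pool.map(probe, hosts)))
print(sum(results.values()), "hosts responded on port 80")
```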

Trover's Object Model

As shown in Figure 4, Trover is a COM/DCOM object that implements the IProperty interface for controlling the discovery settings (such as the number of threads, timeouts, and TCP/IP port lists) and the ICommand interface to control the multithreaded asynchronous discovery process. Results are reported as an XML document with optional XSL post-processing to allow for customizable XSL/VB-script/J-script host or network level analysis.


Figure 4 - Trover's Internal Structure
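
Rendered in Python rather than COM for brevity, the two interfaces described above might be abstracted as follows. The method names are illustrative analogues, not the actual COM method signatures:

```python
# Python analogue of the two Trover interfaces described in the text.
# Method names are illustrative; the actual COM signatures differ.

from abc import ABC, abstractmethod

class IProperty(ABC):
    """Controls discovery settings: threads, timeouts, port lists, etc."""
    @abstractmethod
    def set_property(self, name, value): ...
    @abstractmethod
    def get_property(self, name): ...

class ICommand(ABC):
    """Controls the multithreaded asynchronous discovery process."""
    @abstractmethod
    def start(self): ...
    @abstractmethod
    def stop(self): ...

class Discovery(IProperty, ICommand):
    def __init__(self):
        self.settings = {"threads": 32, "timeout_ms": 500}
        self.running = False

    def set_property(self, name, value):
        self.settings[name] = value

    def get_property(self, name):
        return self.settings[name]

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

d = Discovery()
d.set_property("tcp_ports", [21, 23, 80])
d.start()
print(d.running, d.get_property("tcp_ports"))
```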

Trover's User Interface Component

The Trover user interface component (Trover UI) is implemented as a Microsoft Management Console snap-in. This makes the Trover UI instantly familiar to network administrators and allows for easy delegation. The scope pane of the snap-in represents the hierarchy of hosts. Each host has a list of categories being discovered. The result pane view depends on the type of node selected in the scope pane. When the Trover (root) node is selected, the result pane displays the current analysis results summary for all hosts. When a Host node is selected, the result pane displays per-property analysis results for that host. When a Property of a host node is selected, the result pane displays detailed information about the ports involved in the analysis of that property.
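
The pane-selection logic amounts to a mapping from selected node type to the view shown; a small dictionary-dispatch sketch captures it. The view names are hypothetical placeholders for the behavior described above:

```python
# Sketch of the scope-pane to result-pane mapping described above.
# View functions are hypothetical placeholders.

def summary_view():  return "analysis results summary for all hosts"
def host_view():     return "per-property analysis results for the host"
def property_view(): return "detailed port information for this property"

RESULT_PANE = {
    "root":     summary_view,
    "host":     host_view,
    "property": property_view,
}

def on_select(node_type):
    return RESULT_PANE[node_type]()

print(on_select("host"))
```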

Developing an Advanced Trover-based Agency for React and Detect

Trover has a built-in capability to interface with other tools and agents, such as InfoSleuth, so more advanced discoveries or control can take place. It can also be configured to interface with configuration tools such as NTSecwiz and with automated network vulnerability test tools such as the Vulnerability Tool Kit (VTK) and CyberCop.
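
Interfacing with external configuration and test tools can be sketched as a uniform wrapper the agent drives through one interface. The tool names below come from the text, but the wrapper API and the stand-in callables are invented for illustration:

```python
# Sketch: uniform wrappers let the agent drive external tools through
# one interface. Tool names come from the text; the API is invented.

class ToolAdapter:
    def __init__(self, name, runner):
        self.name = name
        self.runner = runner   # callable standing in for the external tool

    def run(self, target):
        # A real integration would launch the tool and parse its output.
        return {"tool": self.name, "target": target,
                "result": self.runner(target)}

tools = [
    ToolAdapter("VTK", lambda t: f"vulnerability scan of {t}"),
    ToolAdapter("CyberCop", lambda t: f"network test of {t}"),
]
for tool in tools:
    print(tool.run("10.0.0.5"))
```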

The ultimate objective of a near real-time network security agency would be to have the agent act independently, based on pre-defined or even newly discovered conditions, to react and protect against a discovered threat. Future research will focus on enhancing the interfacing and control capabilities for both automated network configuration and test tools, as well as enhancing the reporting capabilities of Trover. The intent of this effort is to develop an independent detect-and-react security agency intended for broad application to hierarchical large-scale networks.

Conclusion

The common problem with any higher level agent, including those identified herein, is the need for a common and robust protocol that each agent or agency can use to communicate with all other agents in a manner that supports quick responses. Trover was designed to support both flexibility and near real-time discoveries. This quick response capability for ever larger networks, when coupled with heuristic data mining capabilities, enables the agency approach to provide an initial solution to the existing real world problems associated with information security. It also provides the foundation for further evolution based on future technology advances.