
Design Notes

This document serves the following purposes:

==Overview==

The current experimental testbed services primarily focus on providing experimenters access to testbed resources, with little or no help to configure, correctly execute, and systematically analyze the experiment data and artifacts. Additionally, while it is well known that experimentation is inherently iterative, there are limited mechanisms to integrate and cumulatively build upon experimentation assets and artifacts during the configure-execute-analyze phases of the experiment lifecycle.

The Eclipse ELM plug-in provides an integrated environment with a (large) collection of tools and workbenches to support and manage artifacts from all three phases of the experiment life cycle. Each workbench, or perspective, integrates several tools to support a specific experimentation activity. It provides a consistent interface for easy invocation of tools and tool chains, along with access to data repositories to store and recall artifacts in a uniform way. For example, the topology perspective allows the experimenter to define a physical topology by merging topology elements based on specified constraints and then validate the resultant topology.

The key capabilities of the ELM plug-in include:

We define a scenario to encompass related experiments used to explore a scientific inquiry. The scenario explicitly couples the experimenter's intent with the apparatus to create a series of related experiment trials. The experimenter's intent is captured as workflows and invariants: a workflow is a sequence of interdependent actions or steps, and invariants are properties of an experiment that should remain unchanged throughout the lifecycle. The apparatus, on the other hand, includes the topology and services that are instantiated on the testbed during the execution phase. Separating the experimentation intent from the apparatus also enables experiment portability, where the underlying apparatus could consist of heterogeneous and abstract, virtualized experiment elements.
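As a rough sketch of this structure, a scenario can be thought of as intent (workflows and invariants) plus apparatus (topology and services). The class and field names below are illustrative assumptions, not the actual ELM or CEDL schema.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Workflow:
    """A sequence of interdependent actions or steps."""
    name: str
    steps: List[str] = field(default_factory=list)


@dataclass
class Intent:
    """The experimenter's intent: workflows plus invariants that must
    hold throughout the experiment lifecycle."""
    workflows: List[Workflow] = field(default_factory=list)
    invariants: List[str] = field(default_factory=list)


@dataclass
class Apparatus:
    """The topology (as links between components) and services
    instantiated on the testbed during the execution phase."""
    links: List[Tuple[str, str]] = field(default_factory=list)
    services: List[str] = field(default_factory=list)


@dataclass
class Scenario:
    """Couples the experimenter's intent with the apparatus to generate
    a series of related experiment trials."""
    name: str
    intent: Intent
    apparatus: Apparatus
</syntaxhighlight>

Because the intent is kept separate from the apparatus in this model, the same intent could in principle be paired with a different apparatus, which is what makes the experiment portable.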

Steps for creating an experiment

Given the above ELM environment, the basic process of creating a scenario consists of the following steps in a spiral:

Composition Phase

Execution Phase

Analysis Phase

Integration with DETER Technologies

The diagram below describes how ELM, fedd, SEER, and CEDL interact.

ELM --> CEDL --> fedd --> SEER

(Placeholder: diagram needs to be updated.)

August Review Demo

Suppose my intent is to study the response time of an intrusion detection system (IDS). I design a scenario that connects attacker components to the IDS component through an internet-cloud component. The IDS component is then connected to a service component through a wan-cloud component, as shown below.

[[File:Attacker-ids.png]]

I am interested in exploring the effects of the attacker on the response time of the IDS, and not in any other aspect of the experiment. The ELM framework should then enable me, the experimenter, to focus solely on creating a battery of experimentation trials by varying the number of attacker components, the attacker model, the model parameters, etc. All other aspects of the experiment should be defined, configured, controlled, and monitored based on standard experimentation methodologies and practices.

Each component that affects the response time of the IDS and has several alternatives is called a factor. In the above example, there are four factors: attacker type, internet-cloud type, wan-cloud type, and service type. The different models that a factor can assume are called its levels. Thus the attacker type has two levels: volume attack and stealth attack. Each level can be further parameterized to give additional sub-levels, for example, low-volume vs. high-volume attacks.
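As a minimal illustration, the factors and levels for this study might be recorded as a simple mapping. Only the attacker levels come from the example above; the remaining level names are placeholders invented for illustration.

<syntaxhighlight lang="python">
# Factors and their levels for the IDS response-time study.
# Only the attacker levels are taken from the text above; the other
# level names are illustrative placeholders.
factors = {
    "attacker_type":       ["volume-attack", "stealth-attack"],
    "internet_cloud_type": ["cloud-model-a", "cloud-model-b"],
    "wan_cloud_type":      ["wan-model-a", "wan-model-b"],
    "service_type":        ["web-service", "file-service"],
}
</syntaxhighlight>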

Factors whose effects need to be quantified are called primary factors; for example, in the above study, we are interested in quantifying the effects of the attack type. All other factors are secondary; we are not currently interested in exploring or quantifying their effects.

Hence, the experiment design tool supports defining individual trials by varying each factor and level (and possibly repeating trials for statistical significance) to create a battery of experiment trials that explores every possible combination of levels across the primary factors.
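A rough sketch of how such a battery might be enumerated is shown below, assuming the `factors` dictionary from the previous sketch, a full-factorial expansion over the primary factors, and secondary factors pinned to a default level; this is an illustration, not the ELM tool's actual algorithm.

<syntaxhighlight lang="python">
from itertools import product


def enumerate_trials(factors, primary, repetitions=1):
    """Enumerate a full-factorial battery of trials over the primary factors.

    Secondary factors are pinned to their first listed level; each
    combination is repeated `repetitions` times for statistical significance.
    """
    baseline = {name: levels[0] for name, levels in factors.items()}
    primary_levels = [factors[name] for name in primary]
    trials = []
    for combo in product(*primary_levels):
        config = dict(baseline)
        config.update(zip(primary, combo))
        for _ in range(repetitions):
            trials.append(dict(config))
    return trials


# Example: attacker type is the only primary factor, three repetitions.
battery = enumerate_trials(factors, primary=["attacker_type"], repetitions=3)
print(len(battery))  # 2 attacker levels x 3 repetitions = 6 trials
</syntaxhighlight>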