Taking a load test measurement with s_aturn

The load test driver of the s_aturn system generates a reproducible load in a black-box procedure according to DIN 66273/ISO 14756.
The test object is accessed via TCP/IP.

DIN 66273/ISO 14756, the basis of the s_aturn technology

Description of the method and the evaluation criteria of DIN 66273/ISO 14756

The load is based on the reproduction of transactions

  • in complex user structures
  • taking into account time classes

That means that the actions of a user can be divided into tasks, which are then reproduced according to the number of users. To simulate a load, the number of users is increased progressively up to the limiting load.

Evaluation of measurements

A special feature of the DIN and of s_aturn is that the results of the measurements can be evaluated. According to the DIN, an evaluation is carried out implicitly against several criteria, with a yes/no statement as the result.

The theoretical reference system is calculated on the basis of the time target. This corresponds to what the user expects. The measurement results are compared with this reference system.

Definition of a theoretical reference system

The reference system is defined by determining a profile of requirements. This includes the tasks and the appertaining time targets.

The think times simulate the waiting times of a user (e.g. reading a text) before the next action is carried out. Example: think time for task type flight booking: 30 sec

The demanded response time is the time a user is willing to wait for a system reaction.
It specifies how long the processing of a task is allowed to take. Sensible time demands should be prescribed: during a conversation with a customer in a travel agency, nobody worries about a reservation taking 5 to 10 seconds to process, whereas a processing time of one hundredth of a second is of little practical use. In DIN 66273, the demanded response times are predefined in the form of time-category schemata, which specify what portion of the response times must not exceed which threshold.

Example: Demand of response times for task type flight booking:

  • 40 % no longer than 5 sec
  • 80 % no longer than 15 sec
  • 100 % no longer than 30 sec
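Such a time-category scheme can be checked mechanically against measured response times. The following is a minimal sketch of that check, using the flight-booking demand from the example above; the function name and data layout are illustrative assumptions, not s_aturn's actual interface.

```python
# Illustrative check of a DIN 66273-style response-time demand.
# A demand is a list of (portion, threshold) time categories, e.g.
# (0.40, 5.0) means "at least 40 % of responses within 5 seconds".

def meets_demand(response_times, categories):
    """Return True if the measured response times satisfy every time category."""
    n = len(response_times)
    for portion, threshold in categories:
        within = sum(1 for t in response_times if t <= threshold)
        if within / n < portion:
            return False
    return True

# Demand for task type "flight booking" from the example above
flight_booking = [(0.40, 5.0), (0.80, 15.0), (1.00, 30.0)]

measured = [2.1, 3.2, 4.8, 4.9, 7.3, 9.9, 12.0, 14.5, 18.9, 26.0]
print(meets_demand(measured, flight_booking))   # → True
```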

The number of time categories used in a response-time demand (3 in the example) is up to the user, depending on their individual circumstances. Not every user of a user group should perform the same series of tasks with identical think times in each case; this would correspond to a totally unrealistic synchronization of the users. Rather, the tasks are arranged randomly from a defined store. Similarly, think times are determined by a random generator, given an average and a dispersion. Specific tasks can be performed only in a certain sequence.
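The randomized behaviour described above can be sketched as follows. The choice of distribution (Gaussian, truncated at zero) is an assumption on my part; the task store and seed are made up for the example.

```python
import random

# Sketch of randomized user behaviour: tasks are drawn from a defined
# store, think times from a random generator with a given average and
# dispersion (distribution choice is an assumption, not prescribed here).

def next_think_time(mean, dispersion, rng=random):
    """Draw a think time around `mean` seconds, never negative."""
    return max(0.0, rng.gauss(mean, dispersion))

task_store = ["flight booking", "seat selection", "payment"]

rng = random.Random(42)              # fixed seed for a reproducible load
for _ in range(3):
    task = rng.choice(task_store)    # tasks arranged randomly from the store
    pause = next_think_time(30.0, 5.0, rng)
    print(f"{task}: think {pause:.1f} s")
```

A fixed seed is what makes the load reproducible from run to run, which is the point of the procedure.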

The evaluation of the measured data during the measurement can now be carried out against the reference system:

On the basis of the planned data and the measured data the evaluation criteria of the DIN are calculated:

  • The throughput rating L1 describes the number of processed tasks per unit of time (specified think time + specified response time)
  • The processing-time rating L2 specifies the ratio between the defined reference time and the average response time
  • The on-schedule rating L3 specifies the ratio between the portion of tasks completed within the scheduled deadlines and the portion demanded
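As a rough sketch, the three criteria can be computed from planned and measured data like this. The exact DIN 66273 formulas are more involved; the simple ratios below are an assumption, oriented so that values of at least 1.0 mean "demand met", and all input figures are invented for the example.

```python
# Simplified sketch of the three DIN evaluation criteria (not the
# exact DIN 66273 formulas; ratios oriented so that >= 1.0 is "met").

def din_criteria(measured_throughput, planned_throughput,
                 reference_time, mean_response_time,
                 portion_on_schedule, demanded_portion):
    L1 = measured_throughput / planned_throughput    # throughput rating
    L2 = reference_time / mean_response_time         # processing-time rating
    L3 = portion_on_schedule / demanded_portion      # on-schedule rating
    return L1, L2, L3

L1, L2, L3 = din_criteria(
    measured_throughput=98, planned_throughput=100,   # tasks per hour
    reference_time=15.0, mean_response_time=12.5,     # seconds
    portion_on_schedule=0.95, demanded_portion=0.80)
print(L1, L2, L3)   # → 0.98 1.2 1.1875, i.e. only L1 falls below 1.0
```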

The DIN evaluation criteria should not be lower than 1.0. A value below 1.0 means that the system does not meet the demands set in the response-time demand for the task type in question.

To determine the limiting load of a system, a test series is conducted in which the number of simulated users is increased until at least one of the DIN evaluation criteria drops below 1.0.
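The structure of such a test series can be sketched as a simple search loop. `run_measurement` stands in for a real measurement run and is a hypothetical placeholder, as is the toy degradation model used to exercise it.

```python
# Sketch of a limiting-load test series: increase the number of
# simulated users step by step until some criterion drops below 1.0.

def find_limiting_load(run_measurement, start=50, step=50, max_users=10_000):
    users = start
    while users <= max_users:
        criteria = run_measurement(users)    # e.g. {"L1": 1.1, "L2": ...}
        failed = [k for k, v in criteria.items() if v < 1.0]
        if failed:
            return users, failed             # limiting load reached
        users += step
    return None, []                          # no limit found in range

# Toy model: the processing-time rating degrades linearly with users
toy = lambda u: {"L1": 1.3, "L2": 2.0 - u / 200, "L3": 1.1}
print(find_limiting_load(toy))   # → (250, ['L2'])
```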

In the figure to the right, the course of the DIN evaluation criteria as a function of the number of users is shown in an example. Curves 1, 2 and 3 show the progression for 3 different task types. The DIN evaluation criterion that drops below 1.0 at the lowest number of users is the response-time rating L2 of task type 3 (at around 270 users).

Where exactly the evaluation criteria do not conform to the standard can be seen from the graphics generated for the evaluation of the measurements: Results s_aturn

Besides the genuine evaluation criteria, a DIN evaluation contains a range of control values.

Performance analysis under load with s_aturn

  • Comprehensive options to configure individual measurements with our Graphical User Interface s_qusi
  • The behaviour of the virtual users corresponds to that of real users of the application
  • The efficient implementation of s_aturn allows the generation of very high load with low resource usage
  • Measurements with 10,000 and more concurrent users
  • Compact results of a load test with s_aturn through the rating of the DIN values
  • Presentation of response-time curves, response-time histograms and cumulative response-time histograms
  • Implementation of measurement series, evaluations and distributed measurements between master and slaves
  • Automatic generation of measurement reports

If the measurement is based on an adequate reference model, the monitoring data provide indications of systemic causes, such as bottlenecks in system resources, processor load and memory space.

With our Graphical User Interface s_qusi the measurement can be configured, executed and evaluated in a convenient manner. To configure individual measurements, the recorded actions must be divided into logical steps, which correspond to the subsequent s_aturn task types. In this way task chains (sequences of tasks) and think times are defined. To weight individual transactions differently, various user types can be defined and freely combined. They are processed in sequences of task types and cycles. Statistics on the time classes show what percentage of the tasks is dealt with in what time.
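The configuration described above can be pictured as a simple data structure: task chains built from task types with think times, combined into weighted user types. All field names and figures below are illustrative assumptions, not the actual s_qusi/s_aturn configuration format.

```python
# Illustrative configuration: task chains and weighted user types
# (field names are assumptions, not the real s_qusi format).

task_chains = {
    "booking": [
        {"task": "search flight",  "think_time_s": 20},
        {"task": "flight booking", "think_time_s": 30},
        {"task": "payment",        "think_time_s": 15},
    ],
    "browsing": [
        {"task": "search flight",  "think_time_s": 20},
    ],
}

user_types = [                    # weights give the mix of user types
    {"name": "buyer",   "chains": ["booking"],  "weight": 0.3},
    {"name": "browser", "chains": ["browsing"], "weight": 0.7},
]

# How a total of 1000 simulated users would be split across user types
for ut in user_types:
    print(ut["name"], round(1000 * ut["weight"]))
```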

Execution of a s_aturn measurement

From the records of a user dialog created by the customer or by Zott+Co GmbH, s_aturn generates a load under laboratory conditions which is applied directly to your system. No modifications to the test object itself are necessary, since the test driver operates according to a black-box procedure. The testing phase takes place under realistic conditions, so that the results gained can be transferred to normal operation with real users.

The load test driver simulates users with a random behaviour, taking the configured think times into account. In this way, even thousands of users with different application profiles can be simulated easily. Individual inputs such as names and passwords are substituted by the load test driver at runtime. The measurement with s_aturn addresses various points of the test object. Besides the response times, s_aturn also ascertains a profile of your system under load throughout the entire test, which provides important details about the internals of the test object for subsequent evaluation.

In addition to the load test, the s_aturn Performancemonitor collects metrics that characterize the test system.

Validation of the measurement results

The validation of the measurement results follows the well-defined procedure for the rating of performance described in DIN/ISO. This leads to objective ratios, which are made available in a detailed evaluation of the measurement. If desired, a measurement report can be supplied and adapted to specific requirements.