One of the primary tools for performance analysis of multi-tier systems is the standardized benchmark. Benchmarks are used to evaluate system behavior under different conditions and to assess whether a system can handle real workloads in a production environment. They are also helpful for diagnosing situations in which a system performs unacceptably or even crashes: system administrators and developers use them to reproduce and analyze the circumstances that provoke errors or performance degradation. However, standardized benchmarks are usually constrained to simulating a fixed set of workload distributions. We present a benchmarking framework that overcomes this limitation by generating realistic workloads from pre-recorded system traces. This distributed tool enables more realistic testing scenarios and thus exposes the behavior and limits of the system under test in greater detail. A further advantage of our framework is its flexibility: for example, it can extend standardized benchmarks such as TPC-W, allowing them to incorporate workload distributions derived from real workloads.
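The core idea of driving a load generator from a pre-recorded trace can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trace format (timestamp, request) and the `send` callback are assumptions, and a `speedup` factor is added to show how replay can also stress the system beyond the recorded rate.

```python
import time
from typing import Callable, List, Tuple

# Hypothetical trace format: (timestamp_seconds, request) pairs,
# e.g. parsed from a pre-recorded access log of the production system.
Trace = List[Tuple[float, str]]

def inter_arrival_delays(trace: Trace) -> List[float]:
    """Delay to wait before issuing each request, preserving the
    inter-arrival times observed in the recorded trace."""
    delays = []
    previous = None
    for timestamp, _ in trace:
        delays.append(0.0 if previous is None else timestamp - previous)
        previous = timestamp
    return delays

def replay(trace: Trace, send: Callable[[str], None],
           speedup: float = 1.0) -> None:
    """Replay the recorded requests against a system under test.
    speedup > 1 compresses time, increasing the offered load."""
    for delay, (_, request) in zip(inter_arrival_delays(trace), trace):
        time.sleep(delay / speedup)
        send(request)

# Example: replay a three-request trace, collecting what was "sent".
sent = []
trace = [(0.0, "/home"), (0.4, "/search?q=books"), (1.0, "/cart")]
replay(trace, sent.append, speedup=100.0)
print(sent)
```

In a real harness, `send` would issue an HTTP request to the tested tier; distributing slices of the trace across several generator nodes is one way to scale the offered load.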
Citation: Casanovas, A. [et al.]. "Work in progress: building a distributed generic stress tool for server performance and behavior analysis." In: 5th International Conference on Autonomic and Autonomous Systems. Valencia: IARIA, 2009, pp. 342-345.